Dataset columns: repo_name, path, license, content
julienchastang/unidata-python-workshop
notebooks/Time_Series/Basic Time Series Plotting.ipynb
mit
from siphon.simplewebservice.ndbc import NDBC data_types = NDBC.buoy_data_types('46042') print(data_types) """ Explanation: <a name="top"></a> <div style="width:1000 px"> <div style="float:right; width:98 px; height:98px;"> <img src="https://raw.githubusercontent.com/Unidata/MetPy/master/metpy/plots/_static/unidata_150x150.png" alt="Unidata Logo" style="height: 98px;"> </div> <h1>Basic Time Series Plotting</h1> <h3>Unidata Python Workshop</h3> <div style="clear:both"></div> </div> <hr style="height:2px;"> <div style="float:right; width:250 px"><img src="http://matplotlib.org/_images/date_demo.png" alt="METAR" style="height: 300px;"></div> Overview: Teaching: 45 minutes Exercises: 30 minutes Questions How can we obtain buoy data from the NDBC? How are plots created in Python? What features does Matplotlib have for improving our time series plots? How can multiple y-axes be used in a single plot? Objectives <a href="#loaddata">Obtaining data</a> <a href="#basictimeseries">Basic timeseries plotting</a> <a href="#multiy">Multiple y-axes</a> <a name="loaddata"></a> Obtaining Data To learn about time series analysis, we first need to find some data and get it into Python. In this case we're going to use data from the National Data Buoy Center. We'll use the pandas library for our data subset and manipulation operations after obtaining the data with siphon. Each buoy has many types of data available; you can read all about it in the NDBC Web Data Guide. There is a mechanism in siphon to see which data types are available for a given buoy. End of explanation """ df = NDBC.realtime_observations('46042') df.tail() """ Explanation: In this case, we'll just stick with the standard meteorological data. The "realtime" data from NDBC contains approximately 45 days of data from each buoy. We'll retrieve that record for buoy 46042 and then do some cleaning of the data. End of explanation """ df = df.dropna(axis='columns', how='all') df.head() """ Explanation: Let's get rid of the columns with all missing data. We could use the drop method and manually name all of the columns, but that would require us to know which are all NaN, and that sounds like manual labor - something that programmers hate. Pandas has the dropna method that allows us to drop rows or columns where any or all values are NaN. In this case, let's drop all columns with all NaN values. End of explanation """ # Your code goes here # supl_obs = """ Explanation: <div class="alert alert-success"> <b>EXERCISE</b>: <ul> <li>Use the realtime_observations method to retrieve supplemental data for buoy 41002. **Note**: assign the data to something other than df or you'll have to rerun the data download cell above. We suggest using the name supl_obs.</li> </ul> </div> End of explanation """ # %load solutions/get_obs.py """ Explanation: Solution End of explanation """ import pandas as pd idx = df.time >= (pd.Timestamp.utcnow() - pd.Timedelta(days=7)) df = df[idx] df.head() """ Explanation: Finally, we need to trim down the data. The file contains 45 days' worth of observations. Let's look at the last week's worth of data. End of explanation """ df.reset_index(drop=True, inplace=True) df.head() """ Explanation: We're almost ready, but now the index column is not that meaningful. It starts at a non-zero row, which is fine with our initial file, but let's re-zero the index so we have a nice clean data frame to start with. 
End of explanation """ # Convention for import of the pyplot interface import matplotlib.pyplot as plt # Set-up to have matplotlib use its support for notebook inline plots %matplotlib inline """ Explanation: <a href="#top">Top</a> <hr style="height:2px;"> <a name="basictimeseries"></a> Basic Timeseries Plotting Matplotlib is a python 2D plotting library which produces publication quality figures in a variety of hardcopy formats and interactive environments across platforms. We're going to learn the basics of creating timeseries plots with matplotlib by plotting buoy wind, gust, temperature, and pressure data. End of explanation """ plt.rc('font', size=12) fig, ax = plt.subplots(figsize=(10, 6)) # Specify how our lines should look ax.plot(df.time, df.wind_speed, color='tab:orange', label='Windspeed') # Same as above ax.set_xlabel('Time') ax.set_ylabel('Speed (m/s)') ax.set_title('Buoy Wind Data') ax.grid(True) ax.legend(loc='upper left'); """ Explanation: We'll start by plotting the windspeed observations from the buoy. End of explanation """ # Helpers to format and locate ticks for dates from matplotlib.dates import DateFormatter, DayLocator # Set the x-axis to do major ticks on the days and label them like '07/20' ax.xaxis.set_major_locator(DayLocator()) ax.xaxis.set_major_formatter(DateFormatter('%m/%d')) fig """ Explanation: Our x axis labels look a little crowded - let's try only labeling each day in our time series. End of explanation """ # Use linestyle keyword to style our plot ax.plot(df.time, df.wind_gust, color='tab:olive', linestyle='--', label='Wind Gust') # Redisplay the legend to show our new wind gust line ax.legend(loc='upper left') fig """ Explanation: Now we can add wind gust speeds to the same plot as a dashed yellow line. End of explanation """ # Your code goes here """ Explanation: <div class="alert alert-success"> <b>EXERCISE</b>: <ul> <li>Create your own figure and axes (<code>myfig, myax = plt.subplots(figsize=(10, 6))</code>) which plots temperature.</li> <li>Change the x-axis major tick labels to display the shortened month and date (i.e. 'Sep DD' where DD is the day number). Look at the <a href="https://docs.python.org/3.6/library/datetime.html#strftime-and-strptime-behavior"> table of formatters</a> for help. <li>Make sure you include a legend and labels!</li> <li><b>BONUS:</b> try changing the <code>linestyle</code>, e.g., a blue dashed line.</li> </ul> </div> End of explanation """ # %load solutions/basic_plot.py """ Explanation: Solution <div class="alert alert-info"> <b>Tip</b>: If your figure goes sideways as you try multiple things, try running the notebook up to this point again by using the Cell -> Run All Above option in the menu bar. </div> End of explanation """ # plot pressure data on same figure ax.plot(df.time, df.pressure, color='black', label='Pressure') ax.set_ylabel('Pressure') ax.legend(loc='upper left') fig """ Explanation: <a href="#top">Top</a> <hr style="height:2px;"> <a name="multiy"></a> Multiple y-axes What if we wanted to plot another variable in vastly different units on our plot? <br/> Let's return to our wind data plot and add pressure. 
End of explanation """ fig, ax = plt.subplots(figsize=(10, 6)) axb = ax.twinx() # Same as above ax.set_xlabel('Time') ax.set_ylabel('Speed (m/s)') ax.set_title('Buoy Data') ax.grid(True) # Plotting on the first y-axis ax.plot(df.time, df.wind_speed, color='tab:orange', label='Windspeed') ax.plot(df.time, df.wind_gust, color='tab:olive', linestyle='--', label='Wind Gust') ax.legend(loc='upper left'); # Plotting on the second y-axis axb.set_ylabel('Pressure (hPa)') axb.plot(df.time, df.pressure, color='black', label='pressure') ax.xaxis.set_major_locator(DayLocator()) ax.xaxis.set_major_formatter(DateFormatter('%b %d')) """ Explanation: That is less than ideal. We can't see detail in the data profiles! We can create a twin of the x-axis and have a secondary y-axis on the right side of the plot. We'll create a totally new figure here. End of explanation """ fig, ax = plt.subplots(figsize=(10, 6)) axb = ax.twinx() # Same as above ax.set_xlabel('Time') ax.set_ylabel('Speed (m/s)') ax.set_title('Buoy 41056 Wind Data') ax.grid(True) # Plotting on the first y-axis ax.plot(df.time, df.wind_speed, color='tab:orange', label='Windspeed') ax.plot(df.time, df.wind_gust, color='tab:olive', linestyle='--', label='Wind Gust') # Plotting on the second y-axis axb.set_ylabel('Pressure (hPa)') axb.plot(df.time, df.pressure, color='black', label='pressure') ax.xaxis.set_major_locator(DayLocator()) ax.xaxis.set_major_formatter(DateFormatter('%b %d')) # Handling of getting lines and labels from all axes for a single legend lines, labels = ax.get_legend_handles_labels() lines2, labels2 = axb.get_legend_handles_labels() axb.legend(lines + lines2, labels + labels2, loc='upper left'); """ Explanation: We're closer, but the data are plotting over the legend and not included in the legend. That's because the legend is associated with our primary y-axis. We need to append that data from the second y-axis. End of explanation """ # Your code goes here """ Explanation: <div class="alert alert-success"> <b>EXERCISE</b>: Create your own plot that has the following elements: <ul> <li>A blue line representing the wave height measurements.</li> <li>A green line representing wind speed on a secondary y-axis</li> <li>Proper labels/title.</li> <li>**Bonus**: Make the wave height data plot as points only with no line. Look at the documentation for the linestyle and marker arguments.</li> </ul> </div> End of explanation """ # %load solutions/adv_plot.py """ Explanation: Solution End of explanation """
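In case it helps to check your work before loading the provided solution, here is one possible way to approach the exercise above. This sketch is an illustration only, not the contents of solutions/adv_plot.py (that file isn't shown here), and it assumes the cleaned DataFrame kept a wave_height column after the dropna step.

```python
# One possible take on the exercise: wave height as blue points on the
# primary axis, wind speed as a green line on a secondary y-axis.
import matplotlib.pyplot as plt
from matplotlib.dates import DateFormatter, DayLocator

fig, ax = plt.subplots(figsize=(10, 6))
axb = ax.twinx()

# Bonus: plot the wave height as points only (marker with no connecting line)
ax.plot(df.time, df.wave_height, color='tab:blue', linestyle='None',
        marker='o', label='Wave Height')
ax.set_xlabel('Time')
ax.set_ylabel('Wave Height (m)')
ax.set_title('Buoy Wave Height and Wind Speed')
ax.grid(True)

# Wind speed on the secondary y-axis
axb.plot(df.time, df.wind_speed, color='tab:green', label='Windspeed')
axb.set_ylabel('Speed (m/s)')

ax.xaxis.set_major_locator(DayLocator())
ax.xaxis.set_major_formatter(DateFormatter('%b %d'))

# Combine handles from both axes into a single legend, as shown earlier
lines, labels = ax.get_legend_handles_labels()
lines2, labels2 = axb.get_legend_handles_labels()
axb.legend(lines + lines2, labels + labels2, loc='upper left');
```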
datactive/bigbang
examples/organizations/Using Domain Entropy to Identify Organizations.ipynb
mit
arx = Archive("httpbisa",mbox=True) """ Explanation: Preparing the data Open a mailing list archive. End of explanation """ email_regex = r'[a-zA-Z0-9_.+-]+@[a-zA-Z0-9-]+\.[a-zA-Z0-9-.]+' domain_regex = r'[@]([a-zA-Z0-9-]+\.[a-zA-Z0-9-.]+)$' email = re.search(email_regex, "Gerald Oskoboiny <gerald@w3.org>")[0] re.search(domain_regex, email)[1] """ Explanation: We will need to extract email addresses and email domains from the From field of the emails. End of explanation """ arx.data['From'].apply(utils.extract_domain).value_counts().head(15) """ Explanation: We can break down all the emails sent to the mailing list by domain. End of explanation """ froms = arx.get_personal_headers() froms.head(10) """ Explanation: We can see that there are some generic email domains (gmail.com), some personal email domains (mnot.net), and some corporate email domains (google.com, apple.com). If we are interested in organizations, we need to identify domains that represent collections of people representing a single organization. How can we find that out? Defining Domain Entropy End of explanation """ utils.domain_entropy('mnot.net', froms) utils.domain_entropy('apple.com', froms) utils.domain_entropy('gmail.com', froms) """ Explanation: In order to evaluate the extent to which a domain represents (a) and individual, (b) a set of organized individuals, or (c) a large set of unorganized individuals, we will measure the concentration of the distribution of email addresses per domain. If $n_D$ is the number of messages from domain $D$, and $n_e$ is the number of messages from email address $e$, then we will compute the information entropy of the frequency of $e$ in $D$. $$H(D) = - \sum_{e \in D} \frac{n_e}{n_D} \log \frac{n_e}{n_D}$$ Using metric, we can now see that Mark Nottingham's personal domain mnot.net has lower domain entropy than the company Apple's domain apple.com, which has lower domain entropy than the generic personal domain gmail.com. End of explanation """ domains = froms['domain'].unique() domain_entropies = pd.Series(index= domains, data = [utils.domain_entropy(dom, froms) for dom in domains]) domain_entropies = domain_entropies.sort_values(ascending=False) import numpy as np y_limit = 121 dom_ent = domain_entropies.head(y_limit) fig, ax = plt.subplots(figsize=(8,8)) domains = dom_ent.index y_pos = np.arange(len(domains)) ax.barh( y_pos, dom_ent.values ) ax.set_yticks(y_pos[::3]) ax.set_yticklabels(domains[::3]) ax.invert_yaxis() # labels read top-to-bottom ax.set_xlabel('Domain Entropy') ax.set_title('Within Mailing List Domain Entropy') """ Explanation: Using the metric We can compute the domain entropy for all domains and plot these values. End of explanation """ domain_entropies.head(20) """ Explanation: The most surprising thing about this metric is that the generic email domain gmail.com has less entropy than the corporate domain google.com. Why is that? End of explanation """ froms[froms['domain'] == 'gmail.com']['email'].value_counts().head(10) froms[froms['domain'] == 'gmail.com']['email'].value_counts().plot() froms[froms['domain'] == 'google.com']['email'].value_counts().head(10) froms[froms['domain'] == 'google.com']['email'].value_counts().plot() """ Explanation: It looks like the gmail.com domain is dominated by a few major individuals, whereas the google.com representation is smaller overall, and more evenly distributed across their team members. End of explanation """
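The notebook above leans on utils.domain_entropy without showing its body. As a rough sketch only (bigbang's actual implementation may differ), a minimal version consistent with the H(D) formula can be written directly against the froms frame returned by get_personal_headers, assuming its 'domain' and 'email' columns as used above:

```python
import numpy as np

def domain_entropy_sketch(domain, froms):
    """Shannon entropy of the per-address message counts within one domain."""
    counts = froms[froms['domain'] == domain]['email'].value_counts().values
    probs = counts.astype(float) / counts.sum()   # n_e / n_D for each address e
    return -np.sum(probs * np.log(probs))         # H(D) = -sum p log p
```

For example, domain_entropy_sketch('gmail.com', froms) should track utils.domain_entropy('gmail.com', froms) up to the library's choice of logarithm base.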
boompieman/iim_project
Poem_Segmentation_Demo_Python/khan_segmentation.ipynb
gpl-3.0
# Import required libraries import os import csv import segeval as se import numpy as np import matplotlib.pyplot as plt import itertools as it from collections import defaultdict from decimal import Decimal from hcluster import linkage, dendrogram, fcluster """ Explanation: An initial study of topical poetry segmentation This work investigates topical segmentation of poetry to better understand its interpretation by humans. Nine segmentations of the poem titled Kubla Khan (Coleridge, 1816) were collected; a small number, but enough to inform a future larger study and to obtain feedback upon the methodologies used. Chris Fournier. 2013. An initial study of topical poetry segmentation. Proceedings of the Second Workshop on Computational Linguistics for Literature, pp. 47-51. Association for Computational Linguistics, Stroudsburg, PA, USA. End of explanation """ # Document to analyse item_name = u'kublakhan' number_of_lines = 54 # Ordered list of coders (and numeric list of coders) used to relate # numbered cluster coders to other graphs coders = ['AWRAXV1RIYR0M', 'A23S6QOSZH9TMT', 'A21IFZJ0EDKM4E', 'AO3XB5I5QNNUI', 'A3RLCGRXA34GC0', 'A21SF3IKIZB0VN', 'APXNY64HXO08K', 'AM155T4U3RE1A', 'A2YBGZ2H2KSO5T'] labels = ['%i' % i for i in range(0, len(coders))] # Load segmentation dataset filepath = os.path.join('data', 'kubla_khan_fournier_2013.json') dataset = se.input_linear_mass_json(filepath) # Load labels segment_labels = dict() filepath = os.path.join('data', 'kubla_khan_fournier_2013', 'labels.csv') with open(filepath) as csv_file: reader = csv.reader(csv_file, delimiter=',') for row in reader: segment_labels[row[0]] = [item.strip() for item in row[1:]] """ Explanation: Load and define data Nine Master Turkers were recruited from the United States and were asked to segment the poem into topically contiguous segments at the line level. They were also asked to produce one-sentence summaries of each segment. Data The segmentations themselves were saved within the file kubla_khan_fournier_2013.json. The principal researcher read these summaries and attempted to label the type of segments that the Turkers produced, which is saved as labels.csv. To later perform comparisons in an order that hcluster expects, an ordered list of coders is defined herein named coders. 
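As an optional sanity check that is not part of the original notebook, the loaded linear masses can be verified against the poem length before any metrics are computed:

```python
# Every coder's segment masses should sum to the poem's 54 lines.
for coder in coders:
    masses = dataset[item_name][coder]
    assert sum(masses) == number_of_lines, (coder, sum(masses))
print('All {0} coders cover all {1} lines'.format(len(coders), number_of_lines))
```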
End of explanation """ # Compute boundaries boundaries = dict([(key, len(mass) - 1) for key, mass in dataset[item_name].items()]) coder_boundaries = [boundaries[coder] for coder in coders] # Compute similarities (1-B) similarities = se.boundary_similarity(dataset, one_minus=True) # Expand segment labels using the mass of each segment to create # a one to one mapping between line and segment label expanded_segment_labels = defaultdict(list) for coder in coders: masses = dataset[item_name][coder] coder_segment_labels = segment_labels[coder] expanded_segment = list() for mass, coder_segment_label in zip(masses, coder_segment_labels): expanded_segment.extend(list([coder_segment_label]) * mass) expanded_segment_labels[coder] = expanded_segment # Define label similarity function def jaccard(a, b): return float(len(a & b)) / float(len(a | b)) # Compute overall label Jaccard similarities per position total_similarities = list() row_similarities = list() for i in xrange(0, number_of_lines): parts = list() for coder in coders: parts.append(set(expanded_segment_labels[coder][i].split('/'))) part_combinations = it.combinations(parts, 2) position_similarities = [jaccard(a, b) for a, b in part_combinations] total_similarities.extend(position_similarities) row_similarities.append(position_similarities) """ Explanation: Compute descriptive statistics Two descriptive statistics are used to analyse the codings that the coders produced: Boundary Similarity (B) to analyse the boundaries placed by segmenters; and Jaccard Similarity (J) of the labels describing the segments (where the labels for each segment are placed upon each line before computing similarity). Boundary SImilarity is described in: Chris Fournier. 2013. Evaluating Text Segmentation using Boundary Edit Distance. Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics. Association for Computational Linguistics, Stroudsburg, PA, USA. End of explanation """ def autolabel(rects, rotation=0, xpad=0): # attach some text labels for rect in rects: height = rect.get_height() ax.text(rect.get_x()+rect.get_width()/2.+xpad, 1.05*height, '%.2f'%float(height), ha='center', va='bottom', rotation=rotation) """ Explanation: Define helper functions Functions that aid in graphing. End of explanation """ similarity_values = [float(value) for value in similarities.values()] mean_b = np.mean(similarity_values) std_b = np.std(similarity_values) mean_j = np.mean(total_similarities) std_j = np.std(total_similarities) print 'Mean B \t\t {0:.4f} +/- {1:.4f}, n={2}'.format(mean_b, std_b, len(similarity_values)) print 'Mean J \t\t {0:.4f} +/- {1:.4f}, n={2}'.format(mean_j, std_j, len(total_similarities)) print 'Fleiss\' Pi \t {0:.4f}'.format(se.fleiss_pi_linear(dataset)) """ Explanation: Overall analysis End of explanation """ # Order distances for clustering coder_combinations = [list(a) for a in it.combinations(coders, 2)] for coder_combination in coder_combinations: coder_combinations.reverse() keys = list() for a in coder_combinations: a = list(a) key = ','.join([item_name] + a) if key not in similarities: a.reverse() key = ','.join([item_name] + a) keys.append(key) distances = [similarities[key] for key in keys] # Cluster aglomerative_clusters = linkage(distances, method='complete') dendro = dendrogram(aglomerative_clusters, labels=labels) plt.ylabel('Mean Distance (1-B)') plt.xlabel('Coder') plt.show(dendro) """ Explanation: Subset analysis The overall statistics show that the 9 coders have low agreement regardless of the metric used. 
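Before moving on to subsets, a tiny worked example may make the label similarity concrete; it reuses the jaccard function defined above with two hypothetical label sets rather than ones taken from the collected data:

```python
# Two coders tag the same line with partially overlapping label sets.
a = {'nature', 'description'}
b = {'nature'}
print(jaccard(a, b))  # intersection size 1 / union size 2 = 0.5
```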
Cluster segmentations by boundary similarity Hypothesis: Subsets of the coders may agree better with eachother. To explore this hypothesis, the similarities of the boundaries placed within each segmentation (1-B) were used as a distance function to perform hierarchical agglomerative clustering. Each cluster can then be analyzed. End of explanation """ cluster_members = { '0,2' : [coders[0], coders[2]], '1,0,2' : [coders[1], coders[0], coders[2]], '4,7' : [coders[4], coders[7]], '1,0,2,4,7' : [coders[1], coders[0], coders[2], coders[4], coders[7]], '6,8' : [coders[6], coders[8]], '5,6,8' : [coders[5], coders[6], coders[8]], '3,5,6,8' : [coders[3], coders[5], coders[6], coders[8]] } cluster_pi = dict() cluster_b = dict() cluster_j = dict() for cluster, members in cluster_members.items(): data = {coder : dataset[item_name][coder] for coder in members} dataset_subset = se.Dataset({item_name : data}) cluster_b[cluster] = [float(value) for value in se.boundary_similarity(dataset_subset, n_t=2).values()] cluster_pi[cluster] = float(se.fleiss_pi_linear(dataset_subset, n_t=2)) position_j = list() for i in xrange(0, number_of_lines): parts = list() for coder in members: parts.append(set(expanded_segment_labels[coder][i].split('/'))) part_combinations = it.combinations(parts, 2) position_similarities = [jaccard(a, b) for a, b in part_combinations] position_j.extend(position_similarities) cluster_j[cluster] = position_j print 'Cluster\t\tPi\tB\t\t\tJ' for cluster in cluster_members.keys(): print '{0}\t{1:.4f}\t{2:.4f} +/- {3:.4f}, n={4}\t{5:.4f} +/- {6:.4f}, n={7}'.format(cluster if len(cluster) > 7 else cluster+'\t', np.mean(cluster_pi[cluster]), np.mean(cluster_b[cluster]), np.std(cluster_b[cluster]), len(cluster_b[cluster]), np.mean(cluster_j[cluster]), np.std(cluster_j[cluster]), len(cluster_j[cluster])) """ Explanation: Compute statistics for each cluster Given the clusters produced above, let's calculate statistics for each cluster. End of explanation """ y = list() y2 = list() y2err = list() y3 = list() y3err = list() for cluster in cluster_members.keys(): y.append(float(cluster_pi[cluster])) y2.append(np.mean(cluster_b[cluster])) y2err.append(np.std(cluster_b[cluster])) y3.append(np.mean(cluster_j[cluster])) y3err.append(np.std(cluster_j[cluster])) ind = np.arange(len(cluster_members)) # the x locations for the groups width = 0.26 # the width of the bars fig = plt.figure() ax = fig.add_subplot(111) rects1 = ax.bar(ind, y, width, color='0.25', ecolor='k') rects2 = ax.bar(ind+width, y2, width, yerr=y2err, color='0.5', ecolor='k') rects3 = ax.bar(ind+width*2, y3, width, yerr=y3err, color='0.75', ecolor='k') # add some ax.set_ylabel('Cluster similarity') ax.set_xticks(ind + ((width * 3) / 2)) ax.set_xticklabels(labels) ax.set_xlim([-0.25,6.95]) ax.set_ylim([0,1]) ax.legend( (rects1[0], rects2[0], rects3[0]), ('$\kappa_{\mathrm{B}}$', 'E(B)', 'E(J)') ) autolabel(rects1, rotation=90, xpad=.03) autolabel(rects2, rotation=90, xpad=.03) autolabel(rects3, rotation=90, xpad=.03) """ Explanation: Plot mean similarities per cluster The mean boundary similarity (B) and mean Jaccard label similarity (J), with standard deviation, is shown below. 
End of explanation """ # Plot boundaries per coder y = coder_boundaries x = np.arange(len(y)) # Set up width = 0.75 fig = plt.figure() ax = fig.add_subplot(1,1,1) # Plot rects = ax.bar(x, y, width, color='0.75') # Add xticks ax.set_xticks(x + (width / 2)) ax.set_xticklabels([str(val) for val in labels]) # Draw mean lines xmin, xmax, ymean, ystd = -0.25, len(labels), np.mean(y), np.std(y) ax.plot([xmin, xmax], [ymean] * 2, color='k') # Draw mean ax.plot([xmin, xmax], [ymean + ystd] * 2, color='0.5') # Draw +std ax.plot([xmin, xmax], [ymean - ystd] * 2, color='0.5') # Draw -std # Add numbers to bars format_str='%d' fnc_value=int for rect in rects: height = rect.get_height() ax.text(rect.get_x() + rect.get_width() / 2., 1.05 * height, format_str%fnc_value(height), ha='center', va='bottom') # Format ax.set_xlim([-0.25, 9]) ax.set_ylim([0, 30]) ax.set_xlabel('Coder') ax.set_ylabel('Boundaries placed (quantity)') plt.show() """ Explanation: Coder analysis Having looked at subsets of coders, it would be informative to also analyze coder behaviour overall. Plot boundary placement frequency To visualize coder behaviour, this plot indicates the frequency at which various coders placed boundaries in this document. End of explanation """ # Create heat map y_sim = list() y_sim_err = list() for row_similarity in row_similarities: y_sim.append(np.mean(row_similarity)) y_sim_err.append(np.std(row_similarity)) # Plot mean label similarity labels = ['$%i$' % i for i in range(0, number_of_lines)] y = list(y_sim) x = range(0, number_of_lines) plt.errorbar(x, y, color='k', ) xlim([0, number_of_lines - 1]) ylim([0, 1.05]) plt.ylabel('Mean Label Jaccard Similarity') plt.xlabel('Line') plt.show() """ Explanation: Plot coder label similarity per line To visualize the areas of the poem which had the greatest agreement in terms of topic segment type, the Jaccard similarity per position between all coders was plotted. End of explanation """ position_frequency = [0] * (sum(dataset['kublakhan'].values()[0]) - 1) for segmentation in dataset['kublakhan'].values(): position = 0 for segment in segmentation[0:-1]: position += segment position_frequency[position] += 1 position_boundary_sim = [float(value) / 9 for value in position_frequency] # Create heat map y = position_frequency # Plot mean label similarity labels = ['$%i$' % i for i in range(0, number_of_lines)] x = range(0, number_of_lines - 1) plt.errorbar(x, y, color='k', ) xlim([0, 52.0]) ylim([0, 10]) plt.ylabel('Boundary Frequency') plt.xlabel('Line') plt.show() """ Explanation: Plot coder boundary frequency per line To visualize the areas of the poem which had the greatest number of boundaries placed by all coders, the boundary frequency per position for all coders was plotted. End of explanation """
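As a small follow-up that is not in the original notebook, the same position_frequency list can be ranked to read off which positions the coders agreed on most; positions here match the x-axis of the plot above:

```python
import numpy as np

# Rank boundary positions by how many of the nine coders placed a boundary there.
top = np.argsort(position_frequency)[::-1][:5]
for pos in top:
    print('position {0:2d}: {1} of 9 coders placed a boundary'.format(
        int(pos), int(position_frequency[pos])))
```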
GoogleCloudPlatform/training-data-analyst
courses/machine_learning/deepdive2/machine_learning_in_the_enterprise/solutions/sdk_custom_xgboost.ipynb
apache-2.0
# import necessary libraries import os # Google Cloud Notebook if os.path.exists("/opt/deeplearning/metadata/env_version"): USER_FLAG = "--user" else: USER_FLAG = "" ! pip3 install --upgrade google-cloud-aiplatform $USER_FLAG """ Explanation: Migrating Custom XGBoost Model with Pre-built Training Container Learning Objectives Train a model. Upload a model. Make a batch and online predictions. Deploy a model. Introduction The dataset used for this tutorial is the Iris dataset from TensorFlow Datasets. This dataset does not require any feature engineering. The version of the dataset you will use in this tutorial is stored in a public Cloud Storage bucket. The trained model predicts the type of Iris flower species from a class of three species: setosa, virginica, or versicolor. Each learning objective will correspond to a #TODO in the student lab notebook -- try to complete that notebook first before reviewing this solution notebook. Installation Install the latest version of Vertex SDK for Python. End of explanation """ ! pip3 install -U google-cloud-storage $USER_FLAG if os.getenv("IS_TESTING"): ! pip3 install --upgrade tensorflow $USER_FLAG """ Explanation: Install the latest GA version of google-cloud-storage library as well. End of explanation """ import os if not os.getenv("IS_TESTING"): # Automatically restart kernel after installs import IPython app = IPython.Application.instance() app.kernel.do_shutdown(True) """ Explanation: Restart the kernel Once you've installed the additional packages, you need to restart the notebook kernel so it can find the packages. End of explanation """ PROJECT_ID = "<your-project>" # replace with your project ID if PROJECT_ID == "" or PROJECT_ID is None or PROJECT_ID == "[your-project-id]": # Get your GCP project id from gcloud shell_output = ! gcloud config list --format 'value(core.project)' 2>/dev/null PROJECT_ID = shell_output[0] print("Project ID:", PROJECT_ID) ! gcloud config set project $PROJECT_ID """ Explanation: Set up your Google Cloud project End of explanation """ REGION = "us-central1" """ Explanation: Region You can also change the REGION variable, which is used for operations throughout the rest of this notebook. Below are regions supported for Vertex AI. We recommend that you choose the region closest to you. Americas: us-central1 Europe: europe-west4 Asia Pacific: asia-east1 You may not use a multi-regional bucket for training with Vertex AI. Not all regions provide support for all Vertex AI services. Learn more about Vertex AI regions End of explanation """ from datetime import datetime TIMESTAMP = datetime.now().strftime("%Y%m%d%H%M%S") """ Explanation: Timestamp If you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append the timestamp onto the name of resources you create in this tutorial. End of explanation """ BUCKET_NAME = "gs://<your-bucket>" # replace bucket name if BUCKET_NAME == "" or BUCKET_NAME is None or BUCKET_NAME == "gs://[your-bucket-name]": BUCKET_NAME = "gs://" + PROJECT_ID + "aip-" + TIMESTAMP """ Explanation: Create a Cloud Storage bucket When you initialize the Vertex SDK for Python, you specify a Cloud Storage staging bucket. The staging bucket is where all the data associated with your dataset and model resources are retained across sessions. Set the name of your Cloud Storage bucket below. 
Bucket names must be globally unique across all Google Cloud projects, including those outside of your organization. End of explanation """ ! gsutil mb -l $REGION $BUCKET_NAME """ Explanation: Only if your bucket doesn't already exist: Run the following cell to create your Cloud Storage bucket. End of explanation """ ! gsutil ls -al $BUCKET_NAME """ Explanation: Finally, validate access to your Cloud Storage bucket by examining its contents: End of explanation """ import google.cloud.aiplatform as aip """ Explanation: Set up variables Next, set up some variables used in this notebook. Import libraries and define constants End of explanation """ aip.init(project=PROJECT_ID, staging_bucket=BUCKET_NAME) """ Explanation: Initialize Vertex SDK for Python Initialize the Vertex SDK for Python for your project and corresponding bucket. End of explanation """ TRAIN_VERSION = "xgboost-cpu.1-1" DEPLOY_VERSION = "xgboost-cpu.1-1" TRAIN_IMAGE = "gcr.io/cloud-aiplatform/training/{}:latest".format(TRAIN_VERSION) DEPLOY_IMAGE = "gcr.io/cloud-aiplatform/prediction/{}:latest".format(DEPLOY_VERSION) """ Explanation: Set pre-built containers Set the pre-built Docker container image for training and prediction. For the latest list, see Pre-built containers for training. For the latest list, see Pre-built containers for prediction. End of explanation """ import os if os.getenv("IS_TESTING_TRAIN_MACHINE"): MACHINE_TYPE = os.getenv("IS_TESTING_TRAIN_MACHINE") else: MACHINE_TYPE = "n1-standard" VCPU = "4" TRAIN_COMPUTE = MACHINE_TYPE + "-" + VCPU print("Train machine type", TRAIN_COMPUTE) if os.getenv("IS_TESTING_DEPLOY_MACHINE"): MACHINE_TYPE = os.getenv("IS_TESTING_DEPLOY_MACHINE") else: MACHINE_TYPE = "n1-standard" VCPU = "4" DEPLOY_COMPUTE = MACHINE_TYPE + "-" + VCPU print("Deploy machine type", DEPLOY_COMPUTE) """ Explanation: Set machine type Next, set the machine type to use for training and prediction. Set the variables TRAIN_COMPUTE and DEPLOY_COMPUTE to configure the compute resources for the VMs you will use for for training and prediction. machine type n1-standard: 3.75GB of memory per vCPU. n1-highmem: 6.5GB of memory per vCPU n1-highcpu: 0.9 GB of memory per vCPU vCPUs: number of [2, 4, 8, 16, 32, 64, 96 ] Note: The following is not supported for training: standard: 2 vCPUs highcpu: 2, 4 and 8 vCPUs Note: You may also use n2 and e2 machine types for training and deployment, but they do not support GPUs. End of explanation """ # Make folder for Python training script ! rm -rf custom ! mkdir custom # Add package information ! touch custom/README.md setup_cfg = "[egg_info]\n\ntag_build =\n\ntag_date = 0" ! echo "$setup_cfg" > custom/setup.cfg setup_py = "import setuptools\n\nsetuptools.setup(\n\n install_requires=[\n\n 'tensorflow_datasets==1.3.0',\n\n ],\n\n packages=setuptools.find_packages())" ! echo "$setup_py" > custom/setup.py pkg_info = "Metadata-Version: 1.0\n\nName: Iris tabular classification\n\nVersion: 0.0.0\n\nSummary: Demostration training script\n\nHome-page: www.google.com\n\nAuthor: Google\n\nAuthor-email: aferlitsch@google.com\n\nLicense: Public\n\nDescription: Demo\n\nPlatform: Vertex" ! echo "$pkg_info" > custom/PKG-INFO # Make the training subfolder ! mkdir custom/trainer ! 
touch custom/trainer/__init__.py %%writefile custom/trainer/task.py # Single Instance Training for Iris import datetime import os import subprocess import sys import pandas as pd import xgboost as xgb import argparse parser = argparse.ArgumentParser() parser.add_argument('--model-dir', dest='model_dir', default=os.getenv('AIP_MODEL_DIR'), type=str, help='Model dir.') args = parser.parse_args() # Download data iris_data_filename = 'iris_data.csv' iris_target_filename = 'iris_target.csv' data_dir = 'gs://cloud-samples-data/ai-platform/iris' # gsutil outputs everything to stderr so we need to divert it to stdout. subprocess.check_call(['gsutil', 'cp', os.path.join(data_dir, iris_data_filename), iris_data_filename], stderr=sys.stdout) subprocess.check_call(['gsutil', 'cp', os.path.join(data_dir, iris_target_filename), iris_target_filename], stderr=sys.stdout) # Load data into pandas, then use `.values` to get NumPy arrays iris_data = pd.read_csv(iris_data_filename).values iris_target = pd.read_csv(iris_target_filename).values # Convert one-column 2D array into 1D array for use with XGBoost iris_target = iris_target.reshape((iris_target.size,)) # Load data into DMatrix object dtrain = xgb.DMatrix(iris_data, label=iris_target) # Train XGBoost model bst = xgb.train({}, dtrain, 20) # Export the classifier to a file model_filename = 'model.bst' bst.save_model(model_filename) # Upload the saved model file to Cloud Storage gcs_model_path = os.path.join(args.model_dir, model_filename) subprocess.check_call(['gsutil', 'cp', model_filename, gcs_model_path], stderr=sys.stdout) """ Explanation: Examine the training package Package layout Before you start the training, you will look at how a Python package is assembled for a custom training job. When unarchived, the package contains the following directory/file layout. PKG-INFO README.md setup.cfg setup.py trainer __init__.py task.py The files setup.cfg and setup.py are the instructions for installing the package into the operating environment of the Docker image. The file trainer/task.py is the Python script for executing the custom training job. Note, when we referred to it in the worker pool specification, we replace the directory slash with a dot (trainer.task) and dropped the file suffix (.py). Package Assembly In the following cells, you will assemble the training package. End of explanation """ ! rm -f custom.tar custom.tar.gz ! tar cvf custom.tar custom ! gzip custom.tar ! gsutil cp custom.tar.gz $BUCKET_NAME/trainer_iris.tar.gz """ Explanation: Store training script on your Cloud Storage bucket Next, you package the training folder into a compressed tar ball, and then store it in your Cloud Storage bucket. End of explanation """ # TODO # constructs a Custom Training Job using a Python script job = aip.CustomTrainingJob( display_name="iris_" + TIMESTAMP, script_path="custom/trainer/task.py", container_uri=TRAIN_IMAGE, requirements=["gcsfs==0.7.1", "tensorflow-datasets==4.4"], ) print(job) """ Explanation: Train a model (training.create-python-pre-built-container) Create and run custom training job To train a custom model, you perform two steps: 1) create a custom training job, and 2) run the job. Create custom training job A custom training job is created with the CustomTrainingJob class, with the following parameters: display_name: The human readable name for the custom training job. container_uri: The training container image. requirements: Package requirements for the training container image (e.g., pandas). 
script_path: The relative path to the training script. End of explanation """ MODEL_DIR = "{}/{}".format(BUCKET_NAME, TIMESTAMP) job.run( replica_count=1, machine_type=TRAIN_COMPUTE, base_output_dir=MODEL_DIR, sync=True ) MODEL_DIR = MODEL_DIR + "/model" model_path_to_deploy = MODEL_DIR """ Explanation: Run the custom training job Next, you run the custom job to start the training job by invoking the method run, with the following parameters: replica_count: The number of compute instances for training (replica_count = 1 is single node training). machine_type: The machine type for the compute instances. base_output_dir: The Cloud Storage location to write the model artifacts to. sync: Whether to block until completion of the job. End of explanation """ # TODO model = aip.Model.upload( display_name="iris_" + TIMESTAMP, artifact_uri=MODEL_DIR, serving_container_image_uri=DEPLOY_IMAGE, sync=False, ) model.wait() """ Explanation: The custom training job will take some time to complete. Upload the model (general.import-model) Next, upload your model to a Model resource using Model.upload() method, with the following parameters: display_name: The human readable name for the Model resource. artifact_uri: The Cloud Storage location of the trained model artifacts. serving_container_image_uri: The serving container image. sync: Whether to execute the upload asynchronously or synchronously. If the upload() method is run asynchronously, you can subsequently block until completion with the wait() method. End of explanation """ INSTANCES = [[1.4, 1.3, 5.1, 2.8], [1.5, 1.2, 4.7, 2.4]] """ Explanation: Make batch predictions (predictions.batch-prediction) Make test items You will use synthetic data as a test data items. Don't be concerned that we are using synthetic data -- we just want to demonstrate how to make a prediction. End of explanation """ import tensorflow as tf gcs_input_uri = BUCKET_NAME + "/" + "test.jsonl" with tf.io.gfile.GFile(gcs_input_uri, "w") as f: for i in INSTANCES: f.write(str(i) + "\n") ! gsutil cat $gcs_input_uri """ Explanation: Make the batch input file Now make a batch input file, which you will store in your local Cloud Storage bucket. Each instance in the prediction request is a list of the form: [ [ content_1], [content_2] ] content: The feature values of the test item as a list. End of explanation """ MIN_NODES = 1 MAX_NODES = 1 # TODO batch_predict_job = model.batch_predict( job_display_name="iris_" + TIMESTAMP, gcs_source=gcs_input_uri, gcs_destination_prefix=BUCKET_NAME, instances_format="jsonl", predictions_format="jsonl", model_parameters=None, machine_type=DEPLOY_COMPUTE, starting_replica_count=MIN_NODES, max_replica_count=MAX_NODES, sync=False, ) print(batch_predict_job) """ Explanation: Make the batch prediction request Now that your Model resource is trained, you can make a batch prediction by invoking the batch_predict() method, with the following parameters: job_display_name: The human readable name for the batch prediction job. gcs_source: A list of one or more batch request input files. gcs_destination_prefix: The Cloud Storage location for storing the batch prediction resuls. instances_format: The format for the input instances, either 'csv' or 'jsonl'. Defaults to 'jsonl'. predictions_format: The format for the output predictions, either 'csv' or 'jsonl'. Defaults to 'jsonl'. machine_type: The type of machine to use for training. sync: If set to True, the call will block while waiting for the asynchronous batch job to complete. 
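Because the cell above passes sync=False, the call returns immediately while the job runs in the background. A common follow-up, sketched here on the assumption that your SDK version exposes wait() and state on the returned job as recent releases do, is to block until the job finishes before reading its output:

```python
# Block until the batch prediction job reaches a terminal state,
# then check that it completed successfully before reading results.
batch_predict_job.wait()
print(batch_predict_job.state)
```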
End of explanation """ import json bp_iter_outputs = batch_predict_job.iter_outputs() prediction_results = list() for blob in bp_iter_outputs: if blob.name.split("/")[-1].startswith("prediction"): prediction_results.append(blob.name) tags = list() for prediction_result in prediction_results: gfile_name = f"gs://{bp_iter_outputs.bucket.name}/{prediction_result}" with tf.io.gfile.GFile(name=gfile_name, mode="r") as gfile: for line in gfile.readlines(): line = json.loads(line) print(line) break """ Explanation: Batch prediction request will take 25-30 mins to complete. Get the predictions Next, get the results from the completed batch prediction job. The results are written to the Cloud Storage output bucket you specified in the batch prediction request. You call the method iter_outputs() to get a list of each Cloud Storage file generated with the results. Each file contains one or more prediction requests in a JSON format: instance: The prediction request. prediction: The prediction response. End of explanation """ DEPLOYED_NAME = "iris-" + TIMESTAMP TRAFFIC_SPLIT = {"0": 100} MIN_NODES = 1 MAX_NODES = 1 # TODO endpoint = model.deploy( deployed_model_display_name=DEPLOYED_NAME, traffic_split=TRAFFIC_SPLIT, machine_type=DEPLOY_COMPUTE, min_replica_count=MIN_NODES, max_replica_count=MAX_NODES, ) """ Explanation: Make online predictions (predictions.deploy-model-api) Deploy the model Next, deploy your model for online prediction. To deploy the model, you invoke the deploy method, with the following parameters: deployed_model_display_name: A human readable name for the deployed model. traffic_split: Percent of traffic at the endpoint that goes to this model, which is specified as a dictionary of one or more key/value pairs. If only one model, then specify as { "0": 100 }, where "0" refers to this model being uploaded and 100 means 100% of the traffic. If there are existing models on the endpoint, for which the traffic will be split, then use model_id to specify as { "0": percent, model_id: percent, ... }, where model_id is the model id of an existing model to the deployed endpoint. The percents must add up to 100. machine_type: The type of machine to use for training. min_replica_count: The number of compute instances to initially provision. max_replica_count: The maximum number of compute instances to scale to. In this tutorial, only one instance is provisioned. End of explanation """ INSTANCE = [1.4, 1.3, 5.1, 2.8] """ Explanation: Model deployment will take some time to complete. Make test item (predictions.online-prediction-automl) You will use synthetic data as a test data item. Don't be concerned that we are using synthetic data -- we just want to demonstrate how to make a prediction. End of explanation """ instances_list = [INSTANCE] prediction = endpoint.predict(instances_list) print(prediction) """ Explanation: Make the prediction Now that your Model resource is deployed to an Endpoint resource, you can do online predictions by sending prediction requests to the Endpoint resource. Request The format of each instance is: [feature_list] Since the predict() method can take multiple items (instances), send your single test item as a list of one test item. Response The response from the predict() call is a Python dictionary with the following entries: ids: The internal assigned unique identifiers for each prediction request. predictions: The predicted confidence, between 0 and 1, per class label. 
deployed_model_id: The Vertex AI identifier for the deployed Model resource which did the predictions. End of explanation """ endpoint.undeploy_all() """ Explanation: Undeploy the model When you are done doing predictions, you undeploy the model from the Endpoint resource. This deprovisions all compute resources and ends billing for the deployed model. End of explanation """
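If you no longer need any of the resources created in this tutorial, a typical cleanup sketch looks like the following; deleting the endpoint, the model, and the staging bucket is an assumption about your intent rather than a required step:

```python
# Delete the endpoint (already undeployed above) and the uploaded model,
# then remove the staging bucket and its contents.
endpoint.delete()
model.delete()
! gsutil -m rm -r $BUCKET_NAME
```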
Arn-O/kadenze-deep-creative-apps
session-0/session-0.ipynb
apache-2.0
4*2 """ Explanation: Session 0: Preliminaries with Python/Notebook <p class="lead"> Parag K. Mital<br /> <a href="https://www.kadenze.com/courses/creative-applications-of-deep-learning-with-tensorflow/info">Creative Applications of Deep Learning w/ Tensorflow</a><br /> <a href="https://www.kadenze.com/partners/kadenze-academy">Kadenze Academy</a><br /> <a href="https://twitter.com/hashtag/CADL">#CADL</a> </p> This work is licensed under a <a rel="license" href="http://creativecommons.org/licenses/by-nc-sa/4.0/">Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License</a>. <a name="learning-goals"></a> Learning Goals Install and run Jupyter Notebook with the Tensorflow library Learn to create a dataset of images using os.listdir and plt.imread Understand how images are represented when using float or uint8 Learn how to crop and resize images to a standard size. Table of Contents <!-- MarkdownTOC autolink=true autoanchor=true bracket=round --> Introduction Using Notebook Cells Kernel Importing Libraries Loading Data Structuring data as folders Using the os library to get data Loading an image RGB Image Representation Understanding data types and ranges (uint8, float32) Visualizing your data as images Image Manipulation Cropping images Resizing images Cropping/Resizing Images The Batch Dimension Conclusion <!-- /MarkdownTOC --> <a name="introduction"></a> Introduction This preliminary session will cover the basics of working with image data in Python, and creating an image dataset. Please make sure you are running at least Python 3.4 and have Tensorflow 0.9.0 or higher installed. If you are unsure of how to do this, please make sure you have followed the installation instructions. We'll also cover loading images from a directory, resizing and cropping images, and changing an image datatype from unsigned int to float32. If you feel comfortable with all of this, please feel free to skip straight to Session 1. Otherwise, launch jupyter notebook and make sure you are reading the session-0.ipynb file. <a name="using-notebook"></a> Using Notebook Make sure you have launched jupyter notebook and are reading the session-0.ipynb file. If you are unsure of how to do this, please make sure you follow the installation instructions. This will allow you to interact with the contents and run the code using an interactive python kernel! <a name="cells"></a> Cells After launching this notebook, try running/executing the next cell by pressing shift-enter on it. End of explanation """ import os """ Explanation: Now press 'a' or 'b' to create new cells. You can also use the toolbar to create new cells. You can also use the arrow keys to move up and down. <a name="kernel"></a> Kernel Note the numbers on each of the cells inside the brackets, after "running" the cell. These denote the current execution count of your python "kernel". Think of the kernel as another machine within your computer that understands Python and interprets what you write as code into executions that the processor can understand. <a name="importing-libraries"></a> Importing Libraries When you launch a new notebook, your kernel is a blank state. It only knows standard python syntax. 
Everything else is contained in additional python libraries that you have to explicitly "import" like so: End of explanation """ # Load the os library import os # Load the request module import urllib.request if not os.path.exists('img_align_celeba'): # Create a directory os.mkdir('img_align_celeba') # Now perform the following 10 times: for img_i in range(1, 11): # create a string using the current loop counter f = '000%03d.jpg' % img_i # and get the url with that string appended to the end url = 'https://s3.amazonaws.com/cadl/celeb-align/' + f # We'll print this out to the console so we can see how far we've gone print(url, end='\r') # And now download the url to a location inside our new directory urllib.request.urlretrieve(url, os.path.join('img_align_celeba', f)) else: print('Celeb Net dataset already downloaded') """ Explanation: After executing this cell, your kernel will have access to everything inside the os library, which is a common library for interacting with the operating system. We'll need to use the import statement for all of the libraries that we include. <a name="loading-data"></a> Loading Data Let's now move onto something more practical. We'll learn how to see what files are in a directory, and load any images inside that directory into a variable. <a name="structuring-data-as-folders"></a> Structuring data as folders With Deep Learning, we'll always need a dataset, or a collection of data. A lot of it. We're going to create our dataset by putting a bunch of images inside a directory. Then, whenever we want to load the dataset, we will tell python to find all the images inside the directory and load them. Python lets us very easily crawl through a directory and grab each file. Let's have a look at how to do this. <a name="using-the-os-library-to-get-data"></a> Using the os library to get data We'll practice with a very large dataset called Celeb Net. This dataset has about 200,000 images of celebrities. The researchers also provide a version of the dataset which has every single face cropped and aligned so that each face is in the middle! We'll be using this aligned dataset. To read more about the dataset or to download it, follow the link here: http://mmlab.ie.cuhk.edu.hk/projects/CelebA.html For now, we're not going to be using the entire dataset but just a subset of it. Run the following cell which will download the first 10 images for you: End of explanation """ help(os.listdir) """ Explanation: Using the os package, we can list an entire directory. The documentation, or docstring, says that listdir takes one parameter, path: End of explanation """ files = os.listdir('img_align_celeba') """ Explanation: This is the location of the directory we need to list. 
Let's try this with the directory of images we just downloaded: End of explanation """ [file_i for file_i in os.listdir('img_align_celeba') if '.jpg' in file_i] """ Explanation: We can also specify to include only certain files like so: End of explanation """ [file_i for file_i in os.listdir('img_align_celeba') if '.jpg' in file_i and '00000' in file_i] """ Explanation: or even: End of explanation """ [file_i for file_i in os.listdir('img_align_celeba') if '.jpg' in file_i or '.png' in file_i or '.jpeg' in file_i] """ Explanation: We could also combine file types if we happened to have multiple types: End of explanation """ files = [file_i for file_i in os.listdir('img_align_celeba') if file_i.endswith('.jpg')] """ Explanation: Let's set this list to a variable, so we can perform further actions on it: End of explanation """ print(files[0]) print(files[1]) """ Explanation: And now we can index that list using the square brackets: End of explanation """ print(files[-1]) print(files[-2]) """ Explanation: We can even go in the reverse direction, which wraps around to the end of the list: End of explanation """ import matplotlib.pyplot as plt """ Explanation: <a name="loading-an-image"></a> Loading an image matplotlib is an incredibly powerful python library which will let us play with visualization and loading of image data. We can import it like so: End of explanation """ %matplotlib inline """ Explanation: Now we can refer to the entire module by just using plt instead of matplotlib.pyplot every time. This is pretty common practice. We'll now tell matplotlib to "inline" plots using an ipython magic function: End of explanation """ # help(plt) # plt.<tab> """ Explanation: This isn't python, so won't work inside of any python script files. This only works inside notebook. What this is saying is that whenever we plot something using matplotlib, put the plots directly into the notebook, instead of using a window popup, which is the default behavior. This is something that makes notebook really useful for teaching purposes, as it allows us to keep all of our images/code in one document. Have a look at the library by using plt: End of explanation """ plt.imread? """ Explanation: plt contains a very useful function for loading images: End of explanation """ import numpy as np # help(np) # np.<tab> """ Explanation: Here we see that it actually returns a variable which requires us to use another library, NumPy. NumPy makes working with numerical data a lot easier. Let's import it as well: End of explanation """ # img = plt.imread(files[0]) # outputs: FileNotFoundError """ Explanation: Let's try loading the first image in our dataset: We have a list of filenames, and we know where they are. But we need to combine the path to the file and the filename itself. If we try and do this: End of explanation """ print(os.path.join('img_align_celeba/', files[0])) plt.imread(os.path.join('img_align_celeba/', files[0])) """ Explanation: plt.imread will not know where that file is. We can tell it where to find the file by using os.path.join: End of explanation """ files = [os.path.join('img_align_celeba', file_i) for file_i in os.listdir('img_align_celeba') if '.jpg' in file_i] """ Explanation: Now we get a bunch of numbers! 
I'd rather not have to keep prepending the path to my files, so I can create the list of files like so: End of explanation """ img = plt.imread(files[0]) # img.<tab> """ Explanation: Let's set this to a variable, img, and inspect a bit further what's going on: End of explanation """ img = plt.imread(files[0]) plt.imshow(img) """ Explanation: <a name="rgb-image-representation"></a> RGB Image Representation It turns out that all of these numbers are capable of describing an image. We can use the function imshow to see this: End of explanation """ img.shape # outputs: (218, 178, 3) """ Explanation: Let's break this data down a bit more. We can see the dimensions of the data using the shape accessor: End of explanation """ plt.figure() plt.imshow(img[:, :, 0]) plt.figure() plt.imshow(img[:, :, 1]) plt.figure() plt.imshow(img[:, :, 2]) """ Explanation: This means that the image has 218 rows, 178 columns, and 3 color channels corresponding to the Red, Green, and Blue channels of the image, or RGB. Let's try looking at just one of the color channels. We can use the square brackets just like when we tried to access elements of our list: End of explanation """ np.min(img), np.max(img) """ Explanation: We use the special colon operator to say take every value in this dimension. This is saying, give me every row, every column, and the 0th dimension of the color channels. What we see now is a heatmap of our image corresponding to each color channel. <a name="understanding-data-types-and-ranges-uint8-float32"></a> Understanding data types and ranges (uint8, float32) Let's take a look at the range of values of our image: End of explanation """ 2**32 """ Explanation: The numbers are all between 0 to 255. What a strange number you might be thinking. Unless you are one of 10 types of people in this world, those that understand binary and those that don't. Don't worry if you're not. You are likely better off. 256 values is how much information we can stick into a byte. We measure a byte using bits, and each byte takes up 8 bits. Each bit can be either 0 or 1. When we stack up 8 bits, or 10000000 in binary, equivalent to 2 to the 8th power, we can express up to 256 possible values, giving us our range, 0 to 255. You can compute any number of bits using powers of two. 2 to the power of 8 is 256. How many values can you stick in 16 bits (2 bytes)? Or 32 bits (4 bytes) of information? Let's ask python: End of explanation """ img.dtype """ Explanation: numpy arrays have a field which will tell us how many bits they are using: dtype: End of explanation """ img.astype(np.float32) """ Explanation: uint8: Let's decompose that: unsigned, int, 8. That means the values do not have a sign, meaning they are all positive. They are only integers, meaning no decimal places. And that they are all 8 bits. Something which is 32-bits of information can express a single value with a range of nearly 4.3 billion different possibilities (2**32). We'll generally need to work with 32-bit data when working with neural networks. In order to do that, we can simply ask numpy for the correct data type: End of explanation """ plt.imread(files[0]) """ Explanation: This is saying, let me see this data as a floating point number, meaning with decimal places, and with 32 bits of precision, rather than the previous data types 8 bits. This will become important when we start to work with neural networks, as we'll need all of those extra possible values! 
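One small addition to the point above, not in the original text: casting alone keeps the 0-255 range, so when moving to float32 it is common to also rescale into 0-1, which is the range matplotlib's imshow expects for floating point images:

```python
# Cast to float32 and rescale from [0, 255] to [0.0, 1.0]
img_float = img.astype(np.float32) / 255.0
print(img_float.dtype, img_float.min(), img_float.max())
```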
<a name="visualizing-your-data-as-images"></a> Visualizing your data as images We've seen how to look at a single image. But what if we have hundreds, thousands, or millions of images? Is there a good way of knowing what our dataset looks like without looking at their file names, or opening up each image one at a time? One way we can do that is to randomly pick an image. We've already seen how to read the image located at one of our file locations: End of explanation """ print(np.random.randint(0, len(files))) print(np.random.randint(0, len(files))) print(np.random.randint(0, len(files))) """ Explanation: to pick a random image from our list of files, we can use the numpy random module: End of explanation """ filename = files[np.random.randint(0, len(files))] img = plt.imread(filename) plt.imshow(img) """ Explanation: This function will produce random integers between a range of values that we specify. We say, give us random integers from 0 to the length of files. We can now use the code we've written before to show a random image from our list of files: End of explanation """ def plot_image(filename): img = plt.imread(filename) plt.imshow(img) """ Explanation: This might be something useful that we'd like to do often. So we can use a function to help us in the future: End of explanation """ f = files[np.random.randint(0, len(files))] plot_image(f) """ Explanation: This function takes one parameter, a variable named filename, which we will have to specify whenever we call it. That variable is fed into the plt.imread function, and used to load an image. It is then drawn with plt.imshow. Let's see how we can use this function definition: End of explanation """ plot_image(files[np.random.randint(0, len(files))]) """ Explanation: or simply: End of explanation """ def imcrop_tosquare(img): if img.shape[0] > img.shape[1]: extra = (img.shape[0] - img.shape[1]) // 2 crop = img[extra:-extra, :] elif img.shape[1] > img.shape[0]: extra = (img.shape[1] - img.shape[0]) // 2 crop = img[:, extra:-extra] else: crop = img return crop """ Explanation: We use functions to help us reduce the main flow of our code. It helps to make things clearer, using function names that help describe what is going on. <a name="image-manipulation"></a> Image Manipulation <a name="cropping-images"></a> Cropping images We're going to create another function which will help us crop the image to a standard size and help us draw every image in our list of files as a grid. In many applications of deep learning, we will need all of our data to be the same size. For images this means we'll need to crop the images while trying not to remove any of the important information in it. Most image datasets that you'll find online will already have a standard size for every image. But if you're creating your own dataset, you'll need to know how to make all the images the same size. One way to do this is to find the longest edge of the image, and crop this edge to be as long as the shortest edge of the image. This will convert the image to a square one, meaning its sides will be the same lengths. The reason for doing this is that we can then resize this square image to any size we'd like, without distorting the image. Let's see how we can do that: End of explanation """ def imcrop(img, amt): if amt <= 0: return img row_i = int(img.shape[0] * amt) // 2 col_i = int(img.shape[1] * amt) // 2 return img[row_i:-row_i, col_i:-col_i] """ Explanation: There are a few things going on here. 
First, we are defining a function which takes as input a single variable. This variable gets named img inside the function, and we enter a set of if/else-if conditionals. The first branch says, if the rows of img are greater than the columns, then set the variable extra to their difference and divide by 2. The // notation means to perform an integer division, instead of a floating point division. So 3 // 2 = 1, not 1.5. We need integers for the next line of code which says to set the variable crop to img starting from extra rows, and ending at negative extra rows down. We can't be on row 1.5, only row 1 or 2. So that's why we need the integer divide there. Let's say our image was 128 x 96 x 3. We would have extra = (128 - 96) // 2, or 16. Then we'd start from the 16th row, and end at the -16th row, or the 112th row. That adds up to 96 rows, exactly the same number of columns as we have. Let's try another crop function which can crop by an arbitrary amount. It will take an image and a single factor from 0-1, saying how much of the original image to crop: End of explanation """ #from scipy.<tab>misc import <tab>imresize """ Explanation: <a name="resizing-images"></a> Resizing images For resizing the image, we'll make use of a python library, scipy. Let's import the function which we need like so: End of explanation """ from scipy.misc import imresize imresize? """ Explanation: Notice that you can hit tab after each step to see what is available. That is really helpful as I never remember what the exact names are. End of explanation """ square = imcrop_tosquare(img) crop = imcrop(square, 0.2) rsz = imresize(crop, (64, 64)) plt.imshow(rsz) """ Explanation: The imresize function takes a input image as its first parameter, and a tuple defining the new image shape as rows and then columns. Let's see how our cropped image can be imresized now: End of explanation """ plt.imshow(rsz, interpolation='nearest') """ Explanation: Great! To really see what's going on, let's turn off the interpolation like so: End of explanation """ mean_img = np.mean(rsz, axis=2) print(mean_img.shape) plt.imshow(mean_img, cmap='gray') """ Explanation: Each one of these squares is called a pixel. Since this is a color image, each pixel is actually a mixture of 3 values, Red, Green, and Blue. When we mix those proportions of Red Green and Blue, we get the color shown here. We can combine the Red Green and Blue channels by taking the mean, or averaging them. This is equivalent to adding each channel, R + G + B, then dividing by the number of color channels, (R + G + B) / 3. We can use the numpy.mean function to help us do this: End of explanation """ imgs = [] for file_i in files: img = plt.imread(file_i) square = imcrop_tosquare(img) crop = imcrop(square, 0.2) rsz = imresize(crop, (64, 64)) imgs.append(rsz) print(len(imgs)) """ Explanation: This is an incredibly useful function which we'll revisit later when we try to visualize the mean image of our entire dataset. <a name="croppingresizing-images"></a> Cropping/Resizing Images We now have functions for cropping an image to a square image, and a function for resizing an image to any desired size. With these tools, we can begin to create a dataset. We're going to loop over our 10 files, crop the image to a square to remove the longer edge, and then crop again to remove some of the background, and then finally resize the image to a standard size of 64 x 64 pixels. End of explanation """ plt.imshow(imgs[0]) """ Explanation: We now have a list containing our images. 
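Before indexing into it, a quick sanity check that every entry really ended up with the same shape can save confusion later; a short sketch, assuming the imgs list built above:
shapes = set(img_i.shape for img_i in imgs)
print(shapes)  # ideally a single entry such as {(64, 64, 3)}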
Each index of the imgs list is another image which we can access using the square brackets: End of explanation """ imgs[0].shape """ Explanation: Since all of the images are the same size, we can make use of numpy's array instead of a list. Remember that an image has a shape describing the height, width, channels: End of explanation """ data = np.array(imgs) data.shape """ Explanation: <a name="the-batch-dimension"></a> The Batch Dimension There is a convention for storing many images in an array using a new dimension called the batch dimension. The resulting image shape should be: N x H x W x C The Number of images, or the batch size, is first; then the Height or number of rows in the image; then the Width or number of columns in the image; then finally the number of channels the image has. A Color image should have 3 color channels, RGB. A Grayscale image should just have 1 channel. We can combine all of our images to look like this in a few ways. The easiest way is to tell numpy to give us an array of all the images: End of explanation """ data = np.concatenate([img_i[np.newaxis] for img_i in imgs], axis=0) data.shape """ Explanation: We could also use the numpy.concatenate function, but we have to create a new dimension for each image. Numpy lets us do this by using a special variable, np.newaxis. End of explanation """
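With the batch dimension in place, the mean image mentioned earlier is just an average over axis 0. A small sketch, assuming the data array, numpy and matplotlib from above:
mean_img = np.mean(data, axis=0)
print(mean_img.shape)  # (64, 64, 3)
plt.imshow(mean_img.astype(np.uint8))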
csaladenes/csaladenes.github.io
present/mcc2/PythonDataScienceHandbook/05.08-Random-Forests.ipynb
mit
%matplotlib inline import numpy as np import matplotlib.pyplot as plt import seaborn as sns; sns.set() """ Explanation: <!--BOOK_INFORMATION--> <img align="left" style="padding-right:10px;" src="figures/PDSH-cover-small.png"> This notebook contains an excerpt from the Python Data Science Handbook by Jake VanderPlas; the content is available on GitHub. The text is released under the CC-BY-NC-ND license, and code is released under the MIT license. If you find this content useful, please consider supporting the work by buying the book! <!--NAVIGATION--> < In-Depth: Support Vector Machines | Contents | In Depth: Principal Component Analysis > <a href="https://colab.research.google.com/github/jakevdp/PythonDataScienceHandbook/blob/master/notebooks/05.08-Random-Forests.ipynb"><img align="left" src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open in Colab" title="Open and Execute in Google Colaboratory"></a> In-Depth: Decision Trees and Random Forests Previously we have looked in depth at a simple generative classifier (naive Bayes; see In Depth: Naive Bayes Classification) and a powerful discriminative classifier (support vector machines; see In-Depth: Support Vector Machines). Here we'll take a look at motivating another powerful algorithm—a non-parametric algorithm called random forests. Random forests are an example of an ensemble method, meaning that it relies on aggregating the results of an ensemble of simpler estimators. The somewhat surprising result with such ensemble methods is that the sum can be greater than the parts: that is, a majority vote among a number of estimators can end up being better than any of the individual estimators doing the voting! We will see examples of this in the following sections. We begin with the standard imports: End of explanation """ from sklearn.datasets import make_blobs X, y = make_blobs(n_samples=300, centers=4, random_state=0, cluster_std=1.0) plt.scatter(X[:, 0], X[:, 1], c=y, s=50, cmap='rainbow'); """ Explanation: Motivating Random Forests: Decision Trees Random forests are an example of an ensemble learner built on decision trees. For this reason we'll start by discussing decision trees themselves. Decision trees are extremely intuitive ways to classify or label objects: you simply ask a series of questions designed to zero-in on the classification. For example, if you wanted to build a decision tree to classify an animal you come across while on a hike, you might construct the one shown here: figure source in Appendix The binary splitting makes this extremely efficient: in a well-constructed tree, each question will cut the number of options by approximately half, very quickly narrowing the options even among a large number of classes. The trick, of course, comes in deciding which questions to ask at each step. In machine learning implementations of decision trees, the questions generally take the form of axis-aligned splits in the data: that is, each node in the tree splits the data into two groups using a cutoff value within one of the features. Let's now look at an example of this. 
Creating a decision tree Consider the following two-dimensional data, which has one of four class labels: End of explanation """ from sklearn.tree import DecisionTreeClassifier tree = DecisionTreeClassifier().fit(X, y) """ Explanation: A simple decision tree built on this data will iteratively split the data along one or the other axis according to some quantitative criterion, and at each level assign the label of the new region according to a majority vote of points within it. This figure presents a visualization of the first four levels of a decision tree classifier for this data: figure source in Appendix Notice that after the first split, every point in the upper branch remains unchanged, so there is no need to further subdivide this branch. Except for nodes that contain all of one color, at each level every region is again split along one of the two features. This process of fitting a decision tree to our data can be done in Scikit-Learn with the DecisionTreeClassifier estimator: End of explanation """ def visualize_classifier(model, X, y, ax=None, cmap='rainbow'): ax = ax or plt.gca() # Plot the training points ax.scatter(X[:, 0], X[:, 1], c=y, s=30, cmap=cmap, clim=(y.min(), y.max()), zorder=3) ax.axis('tight') ax.axis('off') xlim = ax.get_xlim() ylim = ax.get_ylim() # fit the estimator model.fit(X, y) xx, yy = np.meshgrid(np.linspace(*xlim, num=200), np.linspace(*ylim, num=200)) Z = model.predict(np.c_[xx.ravel(), yy.ravel()]).reshape(xx.shape) # Create a color plot with the results n_classes = len(np.unique(y)) contours = ax.contourf(xx, yy, Z, alpha=0.3, levels=np.arange(n_classes + 1) - 0.5, cmap=cmap, zorder=1) ax.set(xlim=xlim, ylim=ylim) """ Explanation: Let's write a quick utility function to help us visualize the output of the classifier: End of explanation """ visualize_classifier(DecisionTreeClassifier(), X, y) """ Explanation: Now we can examine what the decision tree classification looks like: End of explanation """ # helpers_05_08 is found in the online appendix import helpers_05_08 helpers_05_08.plot_tree_interactive(X, y); """ Explanation: If you're running this notebook live, you can use the helpers script included in The Online Appendix to bring up an interactive visualization of the decision tree building process: End of explanation """ # helpers_05_08 is found in the online appendix import helpers_05_08 helpers_05_08.randomized_tree_interactive(X, y) """ Explanation: Notice that as the depth increases, we tend to get very strangely shaped classification regions; for example, at a depth of five, there is a tall and skinny purple region between the yellow and blue regions. It's clear that this is less a result of the true, intrinsic data distribution, and more a result of the particular sampling or noise properties of the data. That is, this decision tree, even at only five levels deep, is clearly over-fitting our data. Decision trees and over-fitting Such over-fitting turns out to be a general property of decision trees: it is very easy to go too deep in the tree, and thus to fit details of the particular data rather than the overall properties of the distributions they are drawn from. 
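One common way to rein this in, not used in this section, is simply to cap the depth of the tree via the max_depth parameter. A quick sketch using the same blobs data and the visualize_classifier helper defined above:
# visualize_classifier fits the estimator internally before plotting
shallow_tree = DecisionTreeClassifier(max_depth=3)
visualize_classifier(shallow_tree, X, y)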
Another way to see this over-fitting is to look at models trained on different subsets of the data—for example, in this figure we train two different trees, each on half of the original data: figure source in Appendix It is clear that in some places, the two trees produce consistent results (e.g., in the four corners), while in other places, the two trees give very different classifications (e.g., in the regions between any two clusters). The key observation is that the inconsistencies tend to happen where the classification is less certain, and thus by using information from both of these trees, we might come up with a better result! If you are running this notebook live, the following function will allow you to interactively display the fits of trees trained on a random subset of the data: End of explanation """ from sklearn.tree import DecisionTreeClassifier from sklearn.ensemble import BaggingClassifier tree = DecisionTreeClassifier() bag = BaggingClassifier(tree, n_estimators=100, max_samples=0.8, random_state=1) bag.fit(X, y) visualize_classifier(bag, X, y) """ Explanation: Just as using information from two trees improves our results, we might expect that using information from many trees would improve our results even further. Ensembles of Estimators: Random Forests This notion—that multiple overfitting estimators can be combined to reduce the effect of this overfitting—is what underlies an ensemble method called bagging. Bagging makes use of an ensemble (a grab bag, perhaps) of parallel estimators, each of which over-fits the data, and averages the results to find a better classification. An ensemble of randomized decision trees is known as a random forest. This type of bagging classification can be done manually using Scikit-Learn's BaggingClassifier meta-estimator, as shown here: End of explanation """ from sklearn.ensemble import RandomForestClassifier model = RandomForestClassifier(n_estimators=100, random_state=0) visualize_classifier(model, X, y); """ Explanation: In this example, we have randomized the data by fitting each estimator with a random subset of 80% of the training points. In practice, decision trees are more effectively randomized by injecting some stochasticity in how the splits are chosen: this way all the data contributes to the fit each time, but the results of the fit still have the desired randomness. For example, when determining which feature to split on, the randomized tree might select from among the top several features. You can read more technical details about these randomization strategies in the Scikit-Learn documentation and references within. In Scikit-Learn, such an optimized ensemble of randomized decision trees is implemented in the RandomForestClassifier estimator, which takes care of all the randomization automatically. All you need to do is select a number of estimators, and it will very quickly (in parallel, if desired) fit the ensemble of trees: End of explanation """ rng = np.random.RandomState(42) x = 10 * rng.rand(200) def model(x, sigma=0.3): fast_oscillation = np.sin(5 * x) slow_oscillation = np.sin(0.5 * x) noise = sigma * rng.randn(len(x)) return slow_oscillation + fast_oscillation + noise y = model(x) plt.errorbar(x, y, 0.3, fmt='o'); """ Explanation: We see that by averaging over 100 randomly perturbed models, we end up with an overall model that is much closer to our intuition about how the parameter space should be split. 
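As a rough quantitative sanity check (a sketch, not part of the original text), cross-validation on the same blobs data usually shows the ensemble matching or beating a single deep tree:
from sklearn.model_selection import cross_val_score
print(cross_val_score(DecisionTreeClassifier(), X, y, cv=5).mean())
print(cross_val_score(RandomForestClassifier(n_estimators=100, random_state=0), X, y, cv=5).mean())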
Random Forest Regression In the previous section we considered random forests within the context of classification. Random forests can also be made to work in the case of regression (that is, continuous rather than categorical variables). The estimator to use for this is the RandomForestRegressor, and the syntax is very similar to what we saw earlier. Consider the following data, drawn from the combination of a fast and slow oscillation: End of explanation """ from sklearn.ensemble import RandomForestRegressor forest = RandomForestRegressor(200) forest.fit(x[:, None], y) xfit = np.linspace(0, 10, 1000) yfit = forest.predict(xfit[:, None]) ytrue = model(xfit, sigma=0) plt.errorbar(x, y, 0.3, fmt='o', alpha=0.5) plt.plot(xfit, yfit, '-r'); plt.plot(xfit, ytrue, '-k', alpha=0.5); """ Explanation: Using the random forest regressor, we can find the best fit curve as follows: End of explanation """ from sklearn.datasets import load_digits digits = load_digits() digits.keys() """ Explanation: Here the true model is shown in the smooth gray curve, while the random forest model is shown by the jagged red curve. As you can see, the non-parametric random forest model is flexible enough to fit the multi-period data, without us needing to specifying a multi-period model! Example: Random Forest for Classifying Digits Earlier we took a quick look at the hand-written digits data (see Introducing Scikit-Learn). Let's use that again here to see how the random forest classifier can be used in this context. End of explanation """ # set up the figure fig = plt.figure(figsize=(6, 6)) # figure size in inches fig.subplots_adjust(left=0, right=1, bottom=0, top=1, hspace=0.05, wspace=0.05) # plot the digits: each image is 8x8 pixels for i in range(64): ax = fig.add_subplot(8, 8, i + 1, xticks=[], yticks=[]) ax.imshow(digits.images[i], cmap=plt.cm.binary, interpolation='nearest') # label the image with the target value ax.text(0, 7, str(digits.target[i])) """ Explanation: To remind us what we're looking at, we'll visualize the first few data points: End of explanation """ from sklearn.model_selection import train_test_split Xtrain, Xtest, ytrain, ytest = train_test_split(digits.data, digits.target, random_state=0) model = RandomForestClassifier(n_estimators=1000) model.fit(Xtrain, ytrain) ypred = model.predict(Xtest) """ Explanation: We can quickly classify the digits using a random forest as follows: End of explanation """ from sklearn import metrics print(metrics.classification_report(ypred, ytest)) """ Explanation: We can take a look at the classification report for this classifier: End of explanation """ from sklearn.metrics import confusion_matrix mat = confusion_matrix(ytest, ypred) sns.heatmap(mat.T, square=True, annot=True, fmt='d', cbar=False) plt.xlabel('true label') plt.ylabel('predicted label'); """ Explanation: And for good measure, plot the confusion matrix: End of explanation """
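One more thing worth knowing, sketched here as an optional extra: a fitted random forest exposes feature_importances_, which for the 8x8 digit images can be viewed as an importance map over pixels.
importances = model.feature_importances_
plt.imshow(importances.reshape(8, 8), cmap='viridis')
plt.colorbar()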
gabicfa/RedesSociais
encontro02/.ipynb_checkpoints/1-introducao-checkpoint.ipynb
gpl-3.0
import sys sys.path.append('..') import socnet as sn """ Explanation: Encontro 02, Parte 1: Revisão de Grafos Este guia foi escrito para ajudar você a atingir os seguintes objetivos: formalizar conceitos básicos de teoria dos grafos; usar funcionalidades básicas da biblioteca da disciplina. Grafos não-dirigidos Um grafo não-dirigido (undirected graph) é um par $(N, E)$, onde $N$ é um conjunto qualquer e $E$ é um conjunto de pares não-ordenados de elementos de $N$, ou seja, $E \subseteq {{n, m} \colon n \in N \textrm{ e } m \in N}$. Um elemento de $N$ chama-se nó (node) e um elemento de $E$ chama-se aresta (edge). Em alguns trabalhos, usa-se $V$ e vértice em vez de $N$ e nó. Grafos dirigidos Formalmente, um grafo dirigido (directed graph) é um par $(N, E)$, onde $N$ é um conjunto qualquer e $E$ é um conjunto de pares ordenados de elementos de N, ou seja, $E \subseteq {(n, m) \colon n \in N \textrm{ e } m \in N}$. Um elemento de $N$ chama-se nó (node) e um elemento de $E$ chama-se aresta (edge). Em alguns trabalhos, usa-se $V$ e vértice em vez de $N$ e nó e usa-se $A$ e arco em vez de $E$ e aresta. Instalando as dependências Antes de continuar, instale as duas dependências da biblioteca da disciplina: pip install networkx plotly Em algumas distribuições Linux você deve usar o comando pip3, pois o comando pip está associado a Python 2 por padrão. Importando a biblioteca Não mova ou renomeie os arquivos do repositório, a menos que você esteja disposto a adaptar os notebooks de acordo. Vamos importar a biblioteca da disciplina no notebook: End of explanation """ sn.graph_width = 800 sn.graph_height = 450 sn.node_size = 20 sn.node_color = (255, 255, 255) sn.edge_width = 2 sn.edge_color = (0, 0, 0) sn.node_label_position = 'middle center' sn.edge_label_distance = 10 """ Explanation: Configurando a biblioteca A socnet disponibiliza variáveis de módulo que permitem configurar propriedades visuais. Os nomes são auto-explicativos e os valores abaixo são padrão. End of explanation """ ug = sn.load_graph('5-kruskal.gml', has_pos=True) dg = sn.load_graph('4-dijkstra.gml', has_pos=True) """ Explanation: Uma variável de cor armazena uma tupla contendo três inteiros entre 0 e 255 que representam intensidades de vermelho, verde e azul respectivamente. Uma variável de posição armazena uma string contendo duas palavras separadas por um espaço: * a primeira representa o alinhamento vertical e pode ser top, middle ou bottom; * a segunda representa o alinhamento horizontal e pode ser left, center ou right. Carregando grafos Vamos carregar dois grafos no formato GML: End of explanation """ sn.graph_width = 320 sn.graph_height = 180 sn.show_graph(ug) """ Explanation: Abra esses arquivos em um editor de texto e note como o formato é auto-explicativo. Visualizando grafos Vamos visualizar o primeiro grafo, que é não-dirigido: End of explanation """ sn.graph_width = 320 sn.graph_height = 180 sn.show_graph(dg) """ Explanation: Essa é a representação mais comum de grafos não-dirigidos: círculos como nós e retas como arestas. Se uma reta conecta o círculo que representa $n$ ao círculo que representa $m$, ela representa a aresta ${n, m}$. Vamos agora visualizar o segundo grafo, que é dirigido: End of explanation """ ug.node[0]['color'] = (0, 0, 255) print(ug.node[0]['color']) """ Explanation: Essa é a representação mais comum de grafos dirigidos: círculos como nós e setas como arestas. Se uma seta sai do círculo que representa $n$ e entra no círculo que representa $m$, ela representa a aresta $(n, m)$. 
Note que as duas primeiras linhas não são necessárias se você rodou a célula anterior, pois os valores atribuídos a graph_width e graph_height são exatamente iguais. Atributos de nós e arestas Na estrutura de dados usada pela socnet, os nós são inteiros e cada nó é asssociado a um dicionário que armazena seus atributos. Vamos modificar e imprimir o atributo color do nó $0$ do grafo ug. Esse atributo existe por padrão. End of explanation """ ug.edge[1][2]['color'] = (0, 255, 0) print(ug.edge[1][2]['color']) """ Explanation: Cada aresta também é asssociada a um dicionário que armazena seus atributos. Vamos modificar e imprimir o atributo color da aresta ${1, 2}$ do grafo ug. Esse atributo existe por padrão. End of explanation """ ug.edge[2][1]['color'] = (255, 0, 255) print(ug.edge[1][2]['color']) """ Explanation: Note que a ordem dos nós não importa, pois ug é um grafo não-dirigido. End of explanation """ sn.show_graph(ug) """ Explanation: Os atributos color são exibidos na visualização. End of explanation """ sn.reset_node_colors(ug) sn.reset_edge_colors(ug) sn.show_graph(ug) """ Explanation: Podemos usar funções de conveniência para reinicializar as cores. End of explanation """ for n in ug.nodes(): ug.node[n]['label'] = str(n) for n, m in ug.edges(): ug.edge[n][m]['label'] = '?' for n in dg.nodes(): dg.node[n]['label'] = str(n) for n, m in dg.edges(): dg.edge[n][m]['label'] = '?' """ Explanation: Os atributos label também podem ser exibidos na visualização, mas não existem por padrão. Primeiramente, precisamos criá-los. End of explanation """ sn.show_graph(ug, nlab=True, elab=True) sn.show_graph(dg, nlab=True, elab=True) """ Explanation: Depois, precisamos usar os argumentos nlab e elab para indicar que queremos exibi-los. Esses argumentos são False por padrão. End of explanation """ print(ug.neighbors(0)) """ Explanation: Vizinhos, predecessores e sucessores Considere um grafo $(N, E)$ e um nó $n$. Suponha que esse grafo é não-dirigido. Nesse caso, dizemos que $n$ é vizinho (neighbor) de $m$ se ${n, m} \in E$. Denotamos por $\mathcal{N}(n)$ o conjunto dos vizinhos de $n$. End of explanation """ print(dg.successors(0)) print(dg.predecessors(1)) """ Explanation: Suponha agora que o grafo $(N, E)$ é dirigido. Nesse caso, dizemos que $n$ é predecessor de $m$ se $(n, m) \in E$ e dizemos que $n$ é sucessor de $m$ se $(m, n) \in E$. Denotamos por $\mathcal{P}(n)$ o conjunto dos predecessores de $n$ e denotamos por $\mathcal{S}(n)$ o conjunto dos sucessores de $n$. End of explanation """ ug.node[0]['color'] = (0, 0, 255) ug.node[1]['color'] = (0, 0, 255) ug.node[2]['color'] = (0, 0, 255) ug.node[3]['color'] = (0, 0, 255) ug.node[4]['color'] = (0, 0, 255) ug.node[5]['color'] = (0, 0, 255) ug.edge[0][1]['color'] = (0, 255, 0) ug.edge[1][2]['color'] = (0, 255, 0) ug.edge[2][3]['color'] = (0, 255, 0) ug.edge[3][4]['color'] = (0, 255, 0) ug.edge[4][5]['color'] = (0, 255, 0) sn.show_graph(ug) """ Explanation: Passeios, trilhas e caminhos Se $(N, E)$ é um grafo não-dirigido: um passeio (walk) é uma sequência de nós $\langle n_0, n_1, \ldots, n_{k-1} \rangle$ tal que, para todo $i$ entre $0$ e $k-2$, temos que ${n_i, n_{i + 1}} \in E$; uma trilha (trail) é um passeio $\langle n_0, n_1, \ldots, n_{k-1} \rangle$ no qual não existem índices $i$ e $j$ entre $0$ e $k-2$ tais que $i \neq j$ e ${n_i, n_{i+1}} = {n_j, n_{j+1}}$; um caminho (path) é um passeio $\langle n_0, n_1, \ldots, n_{k-1} \rangle$ no qual não existem índices $i$ e $j$ entre $0$ e $k-1$ tais que $i \neq j$ e $n_i = n_j$. 
Se $(N, E)$ é um grafo dirigido: um passeio (walk) é uma sequência de nós $\langle n_0, n_1, \ldots, n_{k-1} \rangle$ tal que, para todo $i$ entre $0$ e $k-2$, temos que $(n_i, n_{i + 1}) \in E$; uma trilha (trail) é um passeio $\langle n_0, n_1, \ldots, n_{k-1} \rangle$ no qual não existem índices $i$ e $j$ entre $0$ e $k-2$ tais que $i \neq j$ e $(n_i, n_{i+1}) = (n_j, n_{j+1})$; um caminho (path) é um passeio $\langle n_0, n_1, \ldots, n_{k-1} \rangle$ no qual não existem índices $i$ e $j$ entre $0$ e $k-1$ tais que $i \neq j$ e $n_i = n_j$. Pode-se dizer que uma trilha é um passeio que não repete arestas e um caminho é um passeio que não repete nós. Exercício 1 Dê um exemplo de passeio que não é trilha no grafo ug. Um passeio que não é trilha é o seguinte: - 0, 1, 7, 8, 6, 7, 1, 0 Exercício 2 Dê um exemplo de passeio que não é trilha no grafo dg. Um exemplo de passeio que não é trilha é o seguinte: - 0, 1, 3, 4, 0, 1, 2 Exercício 3 Dê um exemplo de trilha que não é caminho no grafo ug. Um exemplo de trilha que não é caminho é o seguinte: - 0, 1, 2, 5, 6, 8, 2, 3, 4, 5, 3 Exercício 4 Dê um exemplo de trilha que não é caminho no grafo dg. Um exemplo de trilha que não é caminho é o seguinte: - 0, 1, 3, 2, 4, 2 Exercício 5 Use cores para dar um exemplo de caminho no grafo ug. End of explanation """ dg.node[0]['color'] = (0, 0, 255) dg.edge[0][1]['color'] = (0, 255, 0) dg.node[1]['color'] = (0, 0, 255) dg.edge[1][3]['color'] = (0, 255, 0) dg.node[3]['color'] = (0, 0, 255) dg.edge[3][2]['color'] = (0, 255, 0) dg.node[2]['color'] = (0, 0, 255) dg.edge[2][4]['color'] = (0, 255, 0) dg.node[4]['color'] = (0, 0, 255) sn.show_graph(dg) """ Explanation: Exercício 6 Use cores para dar um exemplo de caminho no grafo dg. End of explanation """ sn.graph_width = 450 sn.graph_height = 450 sn.node_label_position = 'hover' # easter egg! g = sn.load_graph('1-introducao.gml', has_pos=True) sn.show_graph(g, nlab=True) """ Explanation: Posicionamento dos nós Para encerrar, vamos carregar o grafo do encontro anterior. O próprio arquivo atribui label aos nós, portanto não é necessário criá-los. End of explanation """ g = sn.load_graph('1-introducao.gml') sn.show_graph(g, nlab=True) """ Explanation: Usamos o argumento has_pos para indicar que os atributos x e y devem ser usados para posicionar os nós. Esse argumento é False por padrão, pois nem todo arquivo atribui essas coordenadas. Se elas não forem usadas, a visualização usa um tipo de force-directed graph drawing. End of explanation """
kit-cel/wt
sigNT/tutorial/taxi_problem.ipynb
gpl-2.0
# importing import numpy as np import matplotlib.pyplot as plt import matplotlib # showing figures inline %matplotlib inline # plotting options font = {'size' : 30} plt.rc('font', **font) plt.rc('text', usetex=True) matplotlib.rc('figure', figsize=(30, 15) ) """ Explanation: Content and Objective Show result of "taxi problem": Given $\Omega={1,...,N}$ Observe $K$ samples $X_1=x_1, ..., X_K=x_K$ (assumed to be different) Estimate for $N$ based on the samples: $$\hat{N}=S(x_1,...,x_K)$$ Method: Sample groups and get estimator End of explanation """ # define (unknown) group size N = 1000#np.random.randint( 1000 ) taxis = [ t for t in range( N ) ] # size of subgroup to be observed M = N // 10 # number of observations N_obser = int( 1e3 ) # allowing for multiple observations (Modus Operandi: same number twice)? MO = 0 # Sample distances used in order statistics for doing quantils Q = 10 """ Explanation: Parameters End of explanation """ # initialize array for collecting several estimations to evaluate bias of estimator estimators = np.zeros( N_obser ) for _k in range( N_obser ): sample = np.random.choice( taxis, replace=MO ) estimators[ _k ] = np.max( sample ) # get sample average and compare to true value print('True value: {}'.format( N ) ) print('Average estimate: {}'.format( np.average( estimators ) ) ) """ Explanation: Observe and Estimate Using Max-Estimator First using $\hat{N}=\max{x_1,..., x_k}$, equalling ML estimation since: $$ P(X_1=x_1,\ldots, X_K=x_K|N) = \prod\limits_{i=1}^K P(X_i=x_i|N) = \begin{cases} \left( \frac{1}{N}\right)^K , & x_1\leq N, \ldots, x_K\leq N \ 0, & \text{ otherwise} \end{cases} . $$ Thus, prob. is maximized if $N$ is chose to be as small as possible, corresponding to the largest $x_i$ Show true group size and sample mean of estimation End of explanation """ # define range of group size to be analyzed group_sizes = range( 1, N//10, 5 ) # initialize arrays for collecting estimator values est_max = np.zeros_like( group_sizes ) est_maxmin = np.zeros_like( group_sizes ) #est_quantiles = np.zeros_like( group_sizes ) est_avg = np.zeros_like( group_sizes ) # loop for group sizes for ind_gs, val_gs in enumerate( group_sizes ): # initialize array for collecting several estimations to evaluate bias of estimator estimators_max = np.zeros( N_obser ) estimators_maxmin = np.zeros( N_obser ) estimators_avg = np.zeros( N_obser ) #estimators_quantiles = np.zeros( N_obser ) # loop for realizations for _k in range( N_obser ): # sample and get estimator sample = np.random.choice( taxis, size = val_gs, replace = MO ) estimators_max[ _k ] = np.max( sample ) estimators_maxmin[ _k ] = np.max( sample ) + np.min( sample ) #estimators_quantiles[ _k ] = np.sort( sample )[ -Q ] + np.sort( sample )[ Q ] estimators_avg[ _k ] = 2 * np.average( sample ) # find average value of estimation for given group size est_max[ ind_gs ] = np.average( estimators_max ) est_maxmin[ ind_gs ] = np.average( estimators_maxmin ) est_avg[ ind_gs ] = np.average( estimators_avg ) #est_quantiles[ ind_gs ] = np.average( estimators_quantiles ) plt.figure() plt.plot( group_sizes, est_max, label='$\hat{E}( \\max(X_1,\ldots,X_K) )$' ) plt.plot( group_sizes, est_maxmin, label='$\hat{E}( \\max(X_1,\ldots,X_K) + \\min(X_1,\ldots,X_K) )$' ) plt.plot( group_sizes, est_avg, label='$\hat{E}( \\frac{2}{K}\sum_{i=1}^K X_i)$' ) #plt.plot( group_sizes, est_quantiles, label='$\hat{E}( X_{(Q)} + X_{(K-Q)} )$' ) plt.grid(True) plt.legend( loc='best' ) plt.title('N = {}'.format(N) ) plt.xlabel('$K$') """ Explanation: Show Results for 
Additional Estimators Additional estimators: $\hat{N}=\max\{x_1,\ldots,x_K\}+\min\{x_1,\ldots,x_K\}$ $\hat{N}=2\cdot \frac{1}{K}( x_1+ \cdots+ x_K)$ End of explanation """
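For completeness, a sketch of the classical bias-corrected maximum estimator, often quoted for this problem as $\hat{N}=\max\{x_1,\ldots,x_K\}\cdot\frac{K+1}{K}-1$ for samples drawn without replacement. The code below reuses N, N_obser, taxis and MO from above; since the taxis here are numbered 0 to N-1, one is added to the observed maximum before applying the correction.
K = 10
estimators_corrected = np.zeros( N_obser )
for _k in range( N_obser ):
    sample = np.random.choice( taxis, size = K, replace = MO )
    estimators_corrected[ _k ] = ( np.max( sample ) + 1 ) * ( K + 1 ) / K - 1
print('True value: {}'.format( N ) )
print('Average bias-corrected estimate: {}'.format( np.average( estimators_corrected ) ) )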
GoogleCloudPlatform/training-data-analyst
courses/machine_learning/deepdive2/introduction_to_tensorflow/labs/imbalanced_data.ipynb
apache-2.0
# Import necessary libraries. import tensorflow as tf from tensorflow import keras import os import tempfile import matplotlib as mpl import matplotlib.pyplot as plt import numpy as np import pandas as pd import seaborn as sns import sklearn from sklearn.metrics import confusion_matrix from sklearn.model_selection import train_test_split from sklearn.preprocessing import StandardScaler mpl.rcParams['figure.figsize'] = (12, 10) colors = plt.rcParams['axes.prop_cycle'].by_key()['color'] """ Explanation: Classifying the imbalanced data Learning Objectives Examine the class label imbalance. Clean, split and normalize the data. Define the model and metrics. Build the model. Train the model. Evaluate metrics. Calculate class weights and train a model with class weights. Train on the oversampled data. Introduction In this tutorial, you will demonstrates how to classify a highly imbalanced dataset in which the number of examples in one class greatly outnumbers the examples in another. You will work with the Credit Card Fraud Detection dataset hosted on Kaggle. The aim is to detect a mere 492 fraudulent transactions from 284,807 transactions in total. You will use Keras to define the model and class weights to help the model learn from the imbalanced data. . This tutorial contains complete code to: Load a CSV file using Pandas. Create train, validation, and test sets. Define and train a model using Keras (including setting class weights). Evaluate the model using various metrics (including precision and recall). Try common techniques for dealing with imbalanced data like: Class weighting Oversampling Each learning objective will correspond to a #TODO in this student lab notebook -- try to complete this notebook first and then review the solution notebook. Setup End of explanation """ file = tf.keras.utils raw_df = pd.read_csv('https://storage.googleapis.com/download.tensorflow.org/data/creditcard.csv') raw_df.head() raw_df[['Time', 'V1', 'V2', 'V3', 'V4', 'V5', 'V26', 'V27', 'V28', 'Amount', 'Class']].describe() """ Explanation: Data processing and exploration Download the Kaggle Credit Card Fraud data set Pandas is a Python library with many helpful utilities for loading and working with structured data. It can be used to download CSVs into a Pandas DataFrame. Note: This dataset has been collected and analysed during a research collaboration of Worldline and the Machine Learning Group of ULB (Université Libre de Bruxelles) on big data mining and fraud detection. More details on current and past projects on related topics are available here and the page of the DefeatFraud project End of explanation """ neg, pos = np.bincount(raw_df['Class']) total = neg + pos print('Examples:\n Total: {}\n Positive: {} ({:.2f}% of total)\n'.format( total, pos, 100 * pos / total)) """ Explanation: Examine the class label imbalance Let's look at the dataset imbalance: End of explanation """ # TODO 1 cleaned_df = raw_df.copy() # You don't want the `Time` column. # TODO: Your code goes here # The `Amount` column covers a huge range. Convert to log-space. eps = 0.001 # 0 => 0.1¢ # TODO: Your code goes here """ Explanation: This shows the small fraction of positive samples. Clean, split and normalize the data The raw data has a few issues. First the Time and Amount columns are too variable to use directly. Drop the Time column (since it's not clear what it means) and take the log of the Amount column to reduce its range. End of explanation """ # Use a utility from sklearn to split and shuffle your dataset. 
train_df, test_df = train_test_split(cleaned_df, test_size=0.2) train_df, val_df = train_test_split(train_df, test_size=0.2) # Form np arrays of labels and features. train_labels = np.array(train_df.pop('Class')) bool_train_labels = train_labels != 0 val_labels = np.array(val_df.pop('Class')) test_labels = np.array(test_df.pop('Class')) train_features = np.array(train_df) val_features = np.array(val_df) test_features = np.array(test_df) """ Explanation: Split the dataset into train, validation, and test sets. The validation set is used during the model fitting to evaluate the loss and any metrics, however the model is not fit with this data. The test set is completely unused during the training phase and is only used at the end to evaluate how well the model generalizes to new data. This is especially important with imbalanced datasets where overfitting is a significant concern from the lack of training data. End of explanation """ scaler = StandardScaler() train_features = scaler.fit_transform(train_features) val_features = scaler.transform(val_features) test_features = scaler.transform(test_features) train_features = np.clip(train_features, -5, 5) val_features = np.clip(val_features, -5, 5) test_features = np.clip(test_features, -5, 5) print('Training labels shape:', train_labels.shape) print('Validation labels shape:', val_labels.shape) print('Test labels shape:', test_labels.shape) print('Training features shape:', train_features.shape) print('Validation features shape:', val_features.shape) print('Test features shape:', test_features.shape) """ Explanation: Normalize the input features using the sklearn StandardScaler. This will set the mean to 0 and standard deviation to 1. Note: The StandardScaler is only fit using the train_features to be sure the model is not peeking at the validation or test sets. End of explanation """ pos_df = pd.DataFrame(train_features[ bool_train_labels], columns=train_df.columns) neg_df = pd.DataFrame(train_features[~bool_train_labels], columns=train_df.columns) sns.jointplot(pos_df['V5'], pos_df['V6'], kind='hex', xlim=(-5,5), ylim=(-5,5)) plt.suptitle("Positive distribution") sns.jointplot(neg_df['V5'], neg_df['V6'], kind='hex', xlim=(-5,5), ylim=(-5,5)) _ = plt.suptitle("Negative distribution") """ Explanation: Caution: If you want to deploy a model, it's critical that you preserve the preprocessing calculations. The easiest way to implement them as layers, and attach them to your model before export. Look at the data distribution Next compare the distributions of the positive and negative examples over a few features. Good questions to ask yourself at this point are: Do these distributions make sense? Yes. You've normalized the input and these are mostly concentrated in the +/- 2 range. Can you see the difference between the distributions? Yes the positive examples contain a much higher rate of extreme values. 
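Because the split above is random and not stratified, it can also be worth confirming that each split still contains a reasonable number of positive examples; a short sketch using the label arrays created above:
for name, labels in [('train', train_labels), ('val', val_labels), ('test', test_labels)]:
    print('{}: {} positives out of {} ({:.3f}%)'.format(
        name, int(labels.sum()), len(labels), 100 * labels.mean()))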
End of explanation """ METRICS = [ keras.metrics.TruePositives(name='tp'), keras.metrics.FalsePositives(name='fp'), keras.metrics.TrueNegatives(name='tn'), keras.metrics.FalseNegatives(name='fn'), keras.metrics.BinaryAccuracy(name='accuracy'), keras.metrics.Precision(name='precision'), keras.metrics.Recall(name='recall'), keras.metrics.AUC(name='auc'), keras.metrics.AUC(name='prc', curve='PR'), # precision-recall curve ] def make_model(metrics=METRICS, output_bias=None): if output_bias is not None: output_bias = tf.keras.initializers.Constant(output_bias) model = keras.Sequential([ keras.layers.Dense( 16, activation='relu', input_shape=(train_features.shape[-1],)), keras.layers.Dropout(0.5), keras.layers.Dense(1, activation='sigmoid', bias_initializer=output_bias), ]) model.compile( optimizer=keras.optimizers.Adam(learning_rate=1e-3), loss=keras.losses.BinaryCrossentropy(), metrics=metrics) return model """ Explanation: Define the model and metrics Define a function that creates a simple neural network with a densly connected hidden layer, a dropout layer to reduce overfitting, and an output sigmoid layer that returns the probability of a transaction being fraudulent: End of explanation """ EPOCHS = 100 BATCH_SIZE = 2048 early_stopping = tf.keras.callbacks.EarlyStopping( monitor='val_prc', verbose=1, patience=10, mode='max', restore_best_weights=True) # TODO 2 # Create and train the model by calling the `make_model()` function. model = # TODO: Your code goes here model.summary() """ Explanation: Understanding useful metrics Notice that there are a few metrics defined above that can be computed by the model that will be helpful when evaluating the performance. False negatives and false positives are samples that were incorrectly classified True negatives and true positives are samples that were correctly classified Accuracy is the percentage of examples correctly classified $\frac{\text{true samples}}{\text{total samples}}$ Precision is the percentage of predicted positives that were correctly classified $\frac{\text{true positives}}{\text{true positives + false positives}}$ Recall is the percentage of actual positives that were correctly classified $\frac{\text{true positives}}{\text{true positives + false negatives}}$ AUC refers to the Area Under the Curve of a Receiver Operating Characteristic curve (ROC-AUC). This metric is equal to the probability that a classifier will rank a random positive sample higher than a random negative sample. AUPRC refers to Area Under the Curve of the Precision-Recall Curve. This metric computes precision-recall pairs for different probability thresholds. Note: Accuracy is not a helpful metric for this task. You can 99.8%+ accuracy on this task by predicting False all the time. Read more: * True vs. False and Positive vs. Negative * Accuracy * Precision and Recall * ROC-AUC * Relationship between Precision-Recall and ROC Curves Baseline model Build the model Now create and train your model using the function that was defined earlier. Notice that the model is fit using a larger than default batch size of 2048, this is important to ensure that each batch has a decent chance of containing a few positive samples. If the batch size was too small, they would likely have no fraudulent transactions to learn from. Note: this model will not handle the class imbalance well. You will improve it later in this tutorial. 
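To make the earlier note about accuracy concrete, a one-line sketch using the neg and total counts computed above shows what a trivial classifier that always predicts the negative class would score:
print('All-negative accuracy: {:.4f}%'.format(100 * neg / total))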
End of explanation """ model.predict(train_features[:10]) """ Explanation: Test run the model: End of explanation """ results = model.evaluate(train_features, train_labels, batch_size=BATCH_SIZE, verbose=0) print("Loss: {:0.4f}".format(results[0])) """ Explanation: Optional: Set the correct initial bias. These initial guesses are not great. You know the dataset is imbalanced. Set the output layer's bias to reflect that (See: A Recipe for Training Neural Networks: "init well"). This can help with initial convergence. With the default bias initialization the loss should be about math.log(2) = 0.69314 End of explanation """ initial_bias = np.log([pos/neg]) initial_bias """ Explanation: The correct bias to set can be derived from: $$ p_0 = pos/(pos + neg) = 1/(1+e^{-b_0}) $$ $$ b_0 = -log_e(1/p_0 - 1) $$ $$ b_0 = log_e(pos/neg)$$ End of explanation """ model = make_model(output_bias=initial_bias) model.predict(train_features[:10]) """ Explanation: Set that as the initial bias, and the model will give much more reasonable initial guesses. It should be near: pos/total = 0.0018 End of explanation """ results = model.evaluate(train_features, train_labels, batch_size=BATCH_SIZE, verbose=0) print("Loss: {:0.4f}".format(results[0])) """ Explanation: With this initialization the initial loss should be approximately: $$-p_0log(p_0)-(1-p_0)log(1-p_0) = 0.01317$$ End of explanation """ initial_weights = os.path.join(tempfile.mkdtemp(), 'initial_weights') model.save_weights(initial_weights) """ Explanation: This initial loss is about 50 times less than if would have been with naive initialization. This way the model doesn't need to spend the first few epochs just learning that positive examples are unlikely. This also makes it easier to read plots of the loss during training. Checkpoint the initial weights To make the various training runs more comparable, keep this initial model's weights in a checkpoint file, and load them into each model before training: End of explanation """ model = make_model() model.load_weights(initial_weights) model.layers[-1].bias.assign([0.0]) zero_bias_history = model.fit( train_features, train_labels, batch_size=BATCH_SIZE, epochs=20, validation_data=(val_features, val_labels), verbose=0) model = make_model() model.load_weights(initial_weights) careful_bias_history = model.fit( train_features, train_labels, batch_size=BATCH_SIZE, epochs=20, validation_data=(val_features, val_labels), verbose=0) def plot_loss(history, label, n): # Use a log scale on y-axis to show the wide range of values. plt.semilogy(history.epoch, history.history['loss'], color=colors[n], label='Train ' + label) plt.semilogy(history.epoch, history.history['val_loss'], color=colors[n], label='Val ' + label, linestyle="--") plt.xlabel('Epoch') plt.ylabel('Loss') plot_loss(zero_bias_history, "Zero Bias", 0) plot_loss(careful_bias_history, "Careful Bias", 1) """ Explanation: Confirm that the bias fix helps Before moving on, confirm quick that the careful bias initialization actually helped. Train the model for 20 epochs, with and without this careful initialization, and compare the losses: End of explanation """ # TODO 3 # Train the model. model = make_model() model.load_weights(initial_weights) baseline_history = # TODO: Your code goes here """ Explanation: The above figure makes it clear: In terms of validation loss, on this problem, this careful initialization gives a clear advantage. 
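A quick numerical check of the bias formula above (a sketch, assuming initial_bias, pos, neg and numpy from the earlier cells): passing $b_0$ through the sigmoid should reproduce $p_0$.
p0 = pos / (pos + neg)
print(p0, 1 / (1 + np.exp(-initial_bias[0])))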
Train the model End of explanation """ def plot_metrics(history): metrics = ['loss', 'prc', 'precision', 'recall'] for n, metric in enumerate(metrics): name = metric.replace("_"," ").capitalize() plt.subplot(2,2,n+1) plt.plot(history.epoch, history.history[metric], color=colors[0], label='Train') plt.plot(history.epoch, history.history['val_'+metric], color=colors[0], linestyle="--", label='Val') plt.xlabel('Epoch') plt.ylabel(name) if metric == 'loss': plt.ylim([0, plt.ylim()[1]]) elif metric == 'auc': plt.ylim([0.8,1]) else: plt.ylim([0,1]) plt.legend() plot_metrics(baseline_history) """ Explanation: Check training history In this section, you will produce plots of your model's accuracy and loss on the training and validation set. These are useful to check for overfitting, which you can learn more about in the Overfit and underfit tutorial. Additionally, you can produce these plots for any of the metrics you created above. False negatives are included as an example. End of explanation """ train_predictions_baseline = model.predict(train_features, batch_size=BATCH_SIZE) test_predictions_baseline = model.predict(test_features, batch_size=BATCH_SIZE) def plot_cm(labels, predictions, p=0.5): cm = confusion_matrix(labels, predictions > p) plt.figure(figsize=(5,5)) sns.heatmap(cm, annot=True, fmt="d") plt.title('Confusion matrix @{:.2f}'.format(p)) plt.ylabel('Actual label') plt.xlabel('Predicted label') print('Legitimate Transactions Detected (True Negatives): ', cm[0][0]) print('Legitimate Transactions Incorrectly Detected (False Positives): ', cm[0][1]) print('Fraudulent Transactions Missed (False Negatives): ', cm[1][0]) print('Fraudulent Transactions Detected (True Positives): ', cm[1][1]) print('Total Fraudulent Transactions: ', np.sum(cm[1])) """ Explanation: Note: That the validation curve generally performs better than the training curve. This is mainly caused by the fact that the dropout layer is not active when evaluating the model. Evaluate metrics You can use a confusion matrix to summarize the actual vs. predicted labels, where the X axis is the predicted label and the Y axis is the actual label: End of explanation """ # TODO 4 # Evaluate your model on test dataset. baseline_results = # TODO: Your code goes here for name, value in zip(model.metrics_names, baseline_results): print(name, ': ', value) print() plot_cm(test_labels, test_predictions_baseline) """ Explanation: Evaluate your model on the test dataset and display the results for the metrics you created above: End of explanation """ def plot_roc(name, labels, predictions, **kwargs): fp, tp, _ = sklearn.metrics.roc_curve(labels, predictions) plt.plot(100*fp, 100*tp, label=name, linewidth=2, **kwargs) plt.xlabel('False positives [%]') plt.ylabel('True positives [%]') plt.xlim([-0.5,20]) plt.ylim([80,100.5]) plt.grid(True) ax = plt.gca() ax.set_aspect('equal') plot_roc("Train Baseline", train_labels, train_predictions_baseline, color=colors[0]) plot_roc("Test Baseline", test_labels, test_predictions_baseline, color=colors[0], linestyle='--') plt.legend(loc='lower right') """ Explanation: If the model had predicted everything perfectly, this would be a diagonal matrix where values off the main diagonal, indicating incorrect predictions, would be zero. In this case the matrix shows that you have relatively few false positives, meaning that there were relatively few legitimate transactions that were incorrectly flagged. 
However, you would likely want to have even fewer false negatives despite the cost of increasing the number of false positives. This trade off may be preferable because false negatives would allow fraudulent transactions to go through, whereas false positives may cause an email to be sent to a customer to ask them to verify their card activity. Plot the ROC Now plot the ROC. This plot is useful because it shows, at a glance, the range of performance the model can reach just by tuning the output threshold. End of explanation """ def plot_prc(name, labels, predictions, **kwargs): precision, recall, _ = sklearn.metrics.precision_recall_curve(labels, predictions) plt.plot(precision, recall, label=name, linewidth=2, **kwargs) plt.xlabel('Recall') plt.ylabel('Precision') plt.grid(True) ax = plt.gca() ax.set_aspect('equal') plot_prc("Train Baseline", train_labels, train_predictions_baseline, color=colors[0]) plot_prc("Test Baseline", test_labels, test_predictions_baseline, color=colors[0], linestyle='--') plt.legend(loc='lower right') """ Explanation: Plot the AUPRC Now plot the AUPRC. Area under the interpolated precision-recall curve, obtained by plotting (recall, precision) points for different values of the classification threshold. Depending on how it's calculated, PR AUC may be equivalent to the average precision of the model. End of explanation """ # Scaling by total/2 helps keep the loss to a similar magnitude. # The sum of the weights of all examples stays the same. weight_for_0 = (1 / neg) * (total / 2.0) weight_for_1 = (1 / pos) * (total / 2.0) class_weight = {0: weight_for_0, 1: weight_for_1} print('Weight for class 0: {:.2f}'.format(weight_for_0)) print('Weight for class 1: {:.2f}'.format(weight_for_1)) """ Explanation: It looks like the precision is relatively high, but the recall and the area under the ROC curve (AUC) aren't as high as you might like. Classifiers often face challenges when trying to maximize both precision and recall, which is especially true when working with imbalanced datasets. It is important to consider the costs of different types of errors in the context of the problem you care about. In this example, a false negative (a fraudulent transaction is missed) may have a financial cost, while a false positive (a transaction is incorrectly flagged as fraudulent) may decrease user happiness. Class weights Calculate class weights The goal is to identify fraudulent transactions, but you don't have very many of those positive samples to work with, so you would want to have the classifier heavily weight the few examples that are available. You can do this by passing Keras weights for each class through a parameter. These will cause the model to "pay more attention" to examples from an under-represented class. End of explanation """ # TODO 5 # Train the model with class weights. weighted_model = make_model() weighted_model.load_weights(initial_weights) weighted_history = # TODO: Your code goes here """ Explanation: Train a model with class weights Now try re-training and evaluating the model with class weights to see how that affects the predictions. Note: Using class_weights changes the range of the loss. This may affect the stability of the training depending on the optimizer. Optimizers whose step size is dependent on the magnitude of the gradient, like tf.keras.optimizers.SGD, may fail. The optimizer used here, tf.keras.optimizers.Adam, is unaffected by the scaling change. 
Also note that because of the weighting, the total losses are not comparable between the two models. End of explanation """ plot_metrics(weighted_history) """ Explanation: Check training history End of explanation """ train_predictions_weighted = weighted_model.predict(train_features, batch_size=BATCH_SIZE) test_predictions_weighted = weighted_model.predict(test_features, batch_size=BATCH_SIZE) weighted_results = weighted_model.evaluate(test_features, test_labels, batch_size=BATCH_SIZE, verbose=0) for name, value in zip(weighted_model.metrics_names, weighted_results): print(name, ': ', value) print() plot_cm(test_labels, test_predictions_weighted) """ Explanation: Evaluate metrics End of explanation """ plot_roc("Train Baseline", train_labels, train_predictions_baseline, color=colors[0]) plot_roc("Test Baseline", test_labels, test_predictions_baseline, color=colors[0], linestyle='--') plot_roc("Train Weighted", train_labels, train_predictions_weighted, color=colors[1]) plot_roc("Test Weighted", test_labels, test_predictions_weighted, color=colors[1], linestyle='--') plt.legend(loc='lower right') """ Explanation: Here you can see that with class weights the accuracy and precision are lower because there are more false positives, but conversely the recall and AUC are higher because the model also found more true positives. Despite having lower accuracy, this model has higher recall (and identifies more fraudulent transactions). Of course, there is a cost to both types of error (you wouldn't want to bug users by flagging too many legitimate transactions as fraudulent, either). Carefully consider the trade-offs between these different types of errors for your application. Plot the ROC End of explanation """ plot_prc("Train Baseline", train_labels, train_predictions_baseline, color=colors[0]) plot_prc("Test Baseline", test_labels, test_predictions_baseline, color=colors[0], linestyle='--') plot_prc("Train Weighted", train_labels, train_predictions_weighted, color=colors[1]) plot_prc("Test Weighted", test_labels, test_predictions_weighted, color=colors[1], linestyle='--') plt.legend(loc='lower right') """ Explanation: Plot the AUPRC End of explanation """ pos_features = train_features[bool_train_labels] neg_features = train_features[~bool_train_labels] pos_labels = train_labels[bool_train_labels] neg_labels = train_labels[~bool_train_labels] """ Explanation: Oversampling Oversample the minority class A related approach would be to resample the dataset by oversampling the minority class. 
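The mirror-image option, undersampling the majority class, is not covered in this tutorial but follows the same NumPy pattern; note that it throws training data away. A sketch using the arrays created above:
under_ids = np.random.choice(np.arange(len(neg_features)), len(pos_features), replace=False)
under_features = np.concatenate([pos_features, neg_features[under_ids]], axis=0)
under_labels = np.concatenate([pos_labels, neg_labels[under_ids]], axis=0)
print(under_features.shape)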
End of explanation """ ids = np.arange(len(pos_features)) choices = np.random.choice(ids, len(neg_features)) res_pos_features = pos_features[choices] res_pos_labels = pos_labels[choices] res_pos_features.shape resampled_features = np.concatenate([res_pos_features, neg_features], axis=0) resampled_labels = np.concatenate([res_pos_labels, neg_labels], axis=0) order = np.arange(len(resampled_labels)) np.random.shuffle(order) resampled_features = resampled_features[order] resampled_labels = resampled_labels[order] resampled_features.shape """ Explanation: Using NumPy You can balance the dataset manually by choosing the right number of random indices from the positive examples: End of explanation """ BUFFER_SIZE = 100000 def make_ds(features, labels): ds = tf.data.Dataset.from_tensor_slices((features, labels))#.cache() ds = ds.shuffle(BUFFER_SIZE).repeat() return ds pos_ds = make_ds(pos_features, pos_labels) neg_ds = make_ds(neg_features, neg_labels) """ Explanation: Using tf.data If you're using tf.data the easiest way to produce balanced examples is to start with a positive and a negative dataset, and merge them. See the tf.data guide for more examples. End of explanation """ for features, label in pos_ds.take(1): print("Features:\n", features.numpy()) print() print("Label: ", label.numpy()) """ Explanation: Each dataset provides (feature, label) pairs: End of explanation """ resampled_ds = tf.data.experimental.sample_from_datasets([pos_ds, neg_ds], weights=[0.5, 0.5]) resampled_ds = resampled_ds.batch(BATCH_SIZE).prefetch(2) for features, label in resampled_ds.take(1): print(label.numpy().mean()) """ Explanation: Merge the two together using experimental.sample_from_datasets: End of explanation """ resampled_steps_per_epoch = np.ceil(2.0*neg/BATCH_SIZE) resampled_steps_per_epoch """ Explanation: To use this dataset, you'll need the number of steps per epoch. The definition of "epoch" in this case is less clear. Say it's the number of batches required to see each negative example once: End of explanation """ resampled_model = make_model() resampled_model.load_weights(initial_weights) # Reset the bias to zero, since this dataset is balanced. output_layer = resampled_model.layers[-1] output_layer.bias.assign([0]) val_ds = tf.data.Dataset.from_tensor_slices((val_features, val_labels)).cache() val_ds = val_ds.batch(BATCH_SIZE).prefetch(2) resampled_history = resampled_model.fit( resampled_ds, epochs=EPOCHS, steps_per_epoch=resampled_steps_per_epoch, callbacks=[early_stopping], validation_data=val_ds) """ Explanation: Train on the oversampled data Now try training the model with the resampled data set instead of using class weights to see how these methods compare. Note: Because the data was balanced by replicating the positive examples, the total dataset size is larger, and each epoch runs for more training steps. End of explanation """ plot_metrics(resampled_history) """ Explanation: If the training process were considering the whole dataset on each gradient update, this oversampling would be basically identical to the class weighting. But when training the model batch-wise, as you did here, the oversampled data provides a smoother gradient signal: Instead of each positive example being shown in one batch with a large weight, they're shown in many different batches each time with a small weight. This smoother gradient signal makes it easier to train the model. 
Check training history Note that the distributions of metrics will be different here, because the training data has a totally different distribution from the validation and test data. End of explanation """ resampled_model = make_model() resampled_model.load_weights(initial_weights) # Reset the bias to zero, since this dataset is balanced. output_layer = resampled_model.layers[-1] output_layer.bias.assign([0]) resampled_history = resampled_model.fit( resampled_ds, # These are not real epochs steps_per_epoch=20, epochs=10*EPOCHS, callbacks=[early_stopping], validation_data=(val_ds)) """ Explanation: Re-train Because training is easier on the balanced data, the above training procedure may overfit quickly. So break up the epochs to give the tf.keras.callbacks.EarlyStopping finer control over when to stop training. End of explanation """ plot_metrics(resampled_history) """ Explanation: Re-check training history End of explanation """ train_predictions_resampled = resampled_model.predict(train_features, batch_size=BATCH_SIZE) test_predictions_resampled = resampled_model.predict(test_features, batch_size=BATCH_SIZE) resampled_results = resampled_model.evaluate(test_features, test_labels, batch_size=BATCH_SIZE, verbose=0) for name, value in zip(resampled_model.metrics_names, resampled_results): print(name, ': ', value) print() plot_cm(test_labels, test_predictions_resampled) """ Explanation: Evaluate metrics End of explanation """ plot_roc("Train Baseline", train_labels, train_predictions_baseline, color=colors[0]) plot_roc("Test Baseline", test_labels, test_predictions_baseline, color=colors[0], linestyle='--') plot_roc("Train Weighted", train_labels, train_predictions_weighted, color=colors[1]) plot_roc("Test Weighted", test_labels, test_predictions_weighted, color=colors[1], linestyle='--') plot_roc("Train Resampled", train_labels, train_predictions_resampled, color=colors[2]) plot_roc("Test Resampled", test_labels, test_predictions_resampled, color=colors[2], linestyle='--') plt.legend(loc='lower right') """ Explanation: Plot the ROC End of explanation """ plot_prc("Train Baseline", train_labels, train_predictions_baseline, color=colors[0]) plot_prc("Test Baseline", test_labels, test_predictions_baseline, color=colors[0], linestyle='--') plot_prc("Train Weighted", train_labels, train_predictions_weighted, color=colors[1]) plot_prc("Test Weighted", test_labels, test_predictions_weighted, color=colors[1], linestyle='--') plot_prc("Train Resampled", train_labels, train_predictions_resampled, color=colors[2]) plot_prc("Test Resampled", test_labels, test_predictions_resampled, color=colors[2], linestyle='--') plt.legend(loc='lower right') """ Explanation: Plot the AUPRC End of explanation """
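# Added sketch (not from the original tutorial): gather the three test evaluations into
# one table for a side-by-side comparison. This assumes `baseline_results` was kept from
# the baseline model's evaluation earlier in the notebook and that pandas was imported
# as `pd` at the top (neither is shown in this excerpt).
comparison = pd.DataFrame(
    [baseline_results, weighted_results, resampled_results],
    columns=resampled_model.metrics_names,
    index=['baseline', 'weighted', 'resampled'])
print(comparison)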
Kaggle/learntools
notebooks/pandas/raw/tut_4.ipynb
apache-2.0
#$HIDE_INPUT$ import pandas as pd reviews = pd.read_csv("../input/wine-reviews/winemag-data-130k-v2.csv", index_col=0) pd.set_option('max_rows', 5) reviews.price.dtype """ Explanation: Introduction In this tutorial, you'll learn how to investigate data types within a DataFrame or Series. You'll also learn how to find and replace entries. To start the exercise for this topic, please click here. Dtypes The data type for a column in a DataFrame or a Series is known as the dtype. You can use the dtype property to grab the type of a specific column. For instance, we can get the dtype of the price column in the reviews DataFrame: End of explanation """ reviews.dtypes """ Explanation: Alternatively, the dtypes property returns the dtype of every column in the DataFrame: End of explanation """ reviews.points.astype('float64') """ Explanation: Data types tell us something about how pandas is storing the data internally. float64 means that it's using a 64-bit floating point number; int64 means a similarly sized integer instead, and so on. One peculiarity to keep in mind (and on display very clearly here) is that columns consisting entirely of strings do not get their own type; they are instead given the object type. It's possible to convert a column of one type into another wherever such a conversion makes sense by using the astype() function. For example, we may transform the points column from its existing int64 data type into a float64 data type: End of explanation """ reviews.index.dtype """ Explanation: A DataFrame or Series index has its own dtype, too: End of explanation """ reviews[pd.isnull(reviews.country)] """ Explanation: Pandas also supports more exotic data types, such as categorical data and timeseries data. Because these data types are more rarely used, we will omit them until a much later section of this tutorial. Missing data Entries missing values are given the value NaN, short for "Not a Number". For technical reasons these NaN values are always of the float64 dtype. Pandas provides some methods specific to missing data. To select NaN entries you can use pd.isnull() (or its companion pd.notnull()). This is meant to be used thusly: End of explanation """ reviews.region_2.fillna("Unknown") """ Explanation: Replacing missing values is a common operation. Pandas provides a really handy method for this problem: fillna(). fillna() provides a few different strategies for mitigating such data. For example, we can simply replace each NaN with an "Unknown": End of explanation """ reviews.taster_twitter_handle.replace("@kerinokeefe", "@kerino") """ Explanation: Or we could fill each missing value with the first non-null value that appears sometime after the given record in the database. This is known as the backfill strategy. Alternatively, we may have a non-null value that we would like to replace. For example, suppose that since this dataset was published, reviewer Kerin O'Keefe has changed her Twitter handle from @kerinokeefe to @kerino. One way to reflect this in the dataset is using the replace() method: End of explanation """
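# The backfill strategy mentioned above, sketched for completeness (this cell is an
# addition, not part of the original tutorial): fill each missing region_2 value with
# the next non-null value that appears after it.
reviews.region_2.fillna(method='bfill')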
pagutierrez/tutorial-sklearn
notebooks-spanish/12-caso_estudio_deteccion_spam_SMS.ipynb
cc0-1.0
import os

with open(os.path.join("datasets", "smsspam", "SMSSpamCollection")) as f:
    lines = [line.strip().split("\t") for line in f.readlines()]
text = [x[1] for x in lines]
y = [int(x[0] == "spam") for x in lines]
text[:10]
y[:10]
print('Number of ham/spam messages:', np.bincount(y))
type(text)
type(y)
"""
Explanation: Case study - Text classification for SMS spam detection
First we load the text data from the dataset directory, which should be in our notebooks directory. This directory was created by running the fetch_data.py script from the top-level folder of the github repository.
We also apply some simple preprocessing and split the data array into two parts:
1. text: a list of lists, where each sublist holds the content of one of our SMS messages.
2. y: the SPAM vs HAM label in binary form, where 1 marks spam messages and 0 marks ham (non-spam) messages.
End of explanation
"""
from sklearn.model_selection import train_test_split

text_train, text_test, y_train, y_test = train_test_split(text, y,
                                                           random_state=42,
                                                           test_size=0.25,
                                                           stratify=y)
"""
Explanation: Now we split our dataset into two parts, one for training and one for testing:
End of explanation
"""
from sklearn.feature_extraction.text import CountVectorizer

print('CountVectorizer default parameters')
CountVectorizer()
vectorizer = CountVectorizer()
vectorizer.fit(text_train)  # Note: fit is applied to the training data only

X_train = vectorizer.transform(text_train)
X_test = vectorizer.transform(text_test)
print(len(vectorizer.vocabulary_))
X_train.shape
print(vectorizer.get_feature_names()[:20])
print(vectorizer.get_feature_names()[2000:2020])
print(X_train.shape)
print(X_test.shape)
"""
Explanation: We now use CountVectorizer to convert the text into a bag-of-words model:
End of explanation
"""
from sklearn.linear_model import LogisticRegression

clf = LogisticRegression()
clf
clf.fit(X_train, y_train)
"""
Explanation: Training a text classifier
Now we train a classifier, logistic regression, which works very well as a baseline for text classification tasks:
End of explanation
"""
clf.score(X_test, y_test)
"""
Explanation: We evaluate the classifier's performance on the test set.
We will use the default score function, which is the percentage of correctly classified patterns:
End of explanation
"""
clf.score(X_train, y_train)
"""
Explanation: We can also compute the score on the training set:
End of explanation
"""
def visualize_coefficients(classifier, feature_names, n_top_features=25):
    # Get the most important coefficients (negative or positive)
    coef = classifier.coef_.ravel()
    positive_coefficients = np.argsort(coef)[-n_top_features:]
    negative_coefficients = np.argsort(coef)[:n_top_features]
    interesting_coefficients = np.hstack([negative_coefficients, positive_coefficients])
    # plot them
    plt.figure(figsize=(15, 5))
    colors = ["red" if c < 0 else "blue" for c in coef[interesting_coefficients]]
    plt.bar(np.arange(2 * n_top_features), coef[interesting_coefficients], color=colors)
    feature_names = np.array(feature_names)
    plt.xticks(np.arange(1, 2 * n_top_features+1), feature_names[interesting_coefficients], rotation=60, ha="right");

visualize_coefficients(clf, vectorizer.get_feature_names())
vectorizer = CountVectorizer(min_df=2)
vectorizer.fit(text_train)

X_train = vectorizer.transform(text_train)
X_test = vectorizer.transform(text_test)
clf = LogisticRegression()
clf.fit(X_train, y_train)

print(clf.score(X_train, y_train))
print(clf.score(X_test, y_test))
len(vectorizer.get_feature_names())
print(vectorizer.get_feature_names()[:20])
visualize_coefficients(clf, vectorizer.get_feature_names())
"""
Explanation: Visualize the most important features
End of explanation
"""
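# A quick sanity check, added here as a sketch (not part of the original notebook):
# score a couple of made-up messages with the trained vectorizer + classifier
# (1 = spam, 0 = ham).
examples = ["Congratulations! You have won a free prize, call now!",
            "Hey, are we still meeting for lunch today?"]
print(clf.predict(vectorizer.transform(examples)))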
ntoll/poem-o-matic
poem-o-matic.ipynb
mit
from os import listdir from os.path import isfile, join mypath = 'sources' filenames = [join(mypath, f) for f in listdir(mypath) if isfile(join(mypath, f))] print(filenames) """ Explanation: Poem-O-Matic This is a description, in both code and prose, of how to generate original poetry on demand using a computer and the Python programming language. It's based upon work done in the London Python Code Dojo with Dan Pope and Hans Bolang. I've taken some of our original ideas and run with them, specifically: If you re-assemble unrelated lines from different poems into a new poetic structure, you get a pretty convincing __new__ poem. This is an exercise in doing the simplest possible thing with a program to fool humans into thinking the computer can write poetry. There are two reasons for this: Simple solutions are easy to understand and think about. Simple solutions work well in an educational context. To be blunt: we're going to use software to automate a sneaky way to create poems. The basic process is simple: Take a huge number of existing source poems (written by humans) and chop them up into their constituent lines. These thousands of lines will be our source material. Work out how the source lines rhyme and group them together into "buckets" containing lines that rhyme with each other. Further categorise the rhymes in each bucket by word ending. For example, sub-categorise the bucket that rhymes with "uck" into slots for "look", "book", "suck" etc. Specify a rhyming scheme. For example "aabba" means lines one, two and five (the "a"s) rhyme with each other, as do lines three and four (the "b"s). Use the rhyming scheme to randomly select a bucket for each letter (for example, one bucket for the "a"s and yet another bucket for the "b"s) and randomly select a line from different word endings for each line in the rhyming scheme. Here's a practical example of this process in plain English: Consider the following three poems I just made up: ``` Poem 1 This is a special poem, The words, they are a flowing. It almost seems quite pointless, Since this poem is meaningless. Poem 2 Oh, my keyboard is on fire, Causing consternation and ire. Since words are cheap and cheerful, It's going to be quite an earful. Poem 3 Words are relentless, They light up minds like fire. Causing us to express, Ideas that flow like quagmire. ``` The rhyming schemes for each poem are as follows: Poem 1: aabb Poem 2: aabb Poem 3: abab If we cut up the poems into their constituent lines we get: This is a special poem, It almost seems quite pointless, They light up minds like fire. Since this poem is meaningless. Causing consternation and ire. It's going to be quite an earful. Oh, my keyboard is on fire, Words are relentless, Since words are cheap and cheerful, Causing us to express, The words, they are a flowing. Ideas that flow like quagmire. If we bucket them by rhymes we get the following four groups: ``` The words, they are a flowing. This is a special poem, It almost seems quite pointless, Words are relentless, Since this poem is meaningless. Causing us to express, They light up minds like fire. Oh, my keyboard is on fire, Causing consternation and ire. Ideas that flow like quagmire. It's going to be quite an earful. Since words are cheap and cheerful, ``` We can further refine the buckets into sub-categories based on word endings: ``` FLOWING: The words, they are a flowing. 
POEM: This is a special poem, POINTLESS: It almost seems quite pointless, RELENTLESS: Words are relentless, MEANINGLESS: Since this poem is meaningless. EXPRESS: Causing us to express, FIRE: They light up minds like fire. Oh, my keyboard is on fire, IRE: Causing consternation and ire. QUAGMIRE: Ideas that flow like quagmire. EARFUL: It's going to be quite an earful. CHEERFUL: Since words are cheap and cheerful, ``` Notice how all but one of the subcategories contain a single line. This is simply because our source poems are limited in number and length. In the programmed example below we'll be working with tens of thousands of lines of poetry. To generate a new poem we specify a rhyming scheme for the new poem, for example: aabba. This tells us we need three "a" lines that rhyme with each other and two "b" lines that rhyme with each other. In other words we need two buckets of rhyming lines - from one we'll select three lines, from the other two lines. Given the list above I'll randomly pick the second and third buckets. Given that I don't want to repeat word endings I'll make sure I randomly choose lines from each bucket from a different word-ending subcategory. In the end I get the following lines: ``` Oh, my keyboard is on fire, Causing consternation and ire. Ideas that flow like quagmire. It almost seems quite pointless, Since this poem is meaningless. ``` If I arrange the lines into the aabba rhyming scheme I end up with the finished poem: Oh, my keyboard is on fire, Causing consternation and ire. It almost seems quite pointless, Since this poem is meaningless. Ideas that flow like quagmire. Given such a simple technique, the result is both interesting, meaningful and (almost) poetic. As already mentioned, the important "poetic sounding" langauge is created by real poets - we're just going to use a Python program to reassemble lines from these poems to make new poetry. Where can we get such free source material..? Easy, the wonderful resource that is Project Gutenberg. I've selected the following anthologies as the source material for this project: The Sonnets by Shakespeare The World's Best Poetry, Volume 04: The Higher Life by Gladden and Carman Leaves of Grass by Walt Whitman A Book of Nonsense by Edward Lear The Golden Treasury by Francis Turner Palgrave and Alfred Pearse A Child's Garden of Verses by Robert Louis Stevenson The Peter Patter Book of Nursery Rhymes by Leroy F. Jackson The Aeneid by Virgil Songs of Childhood by Walter De la Mare Poems Chiefly from Manuscript by John Clare A Treasury of War Poetry: British and American Poems of the World War 1914-1917 I've put plain text versions of these works in the sources directory, and manually removed the prose elements of these files (introductions, titles, author's names etc). Consuming Source Poetry First, we need to get a list of all the source files: End of explanation """ LINES_OF_POETRY = set() # All our lines will be added to this set. for source_file in filenames: # For each source file... 
with open(source_file) as source: # Open it as the object 'source' for line in source.readlines(): # Now, for each line in the new 'source' object, clean_line = line.strip() # remove all the leading and trailing whitespace from the line, clean_line += '\n' # re-add a newline character, LINES_OF_POETRY.add(clean_line) # and add it to the set of all lines of poetry print('We have {} unique lines of poetry.'.format(len(LINES_OF_POETRY))) """ Explanation: Next, we need to load each file and extract the lines of poetry into a set of all known lines of poetry: End of explanation """ # Load the phoneme table with open('cmudict.0.7a.phones') as phoneme_definitions: PHONEMES = dict(line.split() for line in phoneme_definitions.readlines()) print(PHONEMES) """ Explanation: Cleaning and Transforming the Data In order to re-combine these lines into new poems we need to work out how the lines relate to each other in terms of rhyming. To do this we need to know about phonemes - the sounds that make up speech. The cmudict.0.7a.phones file contains definitions and categorisations (vowel, frictive, etc) of the phonemes used in English: End of explanation """ def is_vowel(phoneme): """ A utility function to determine if a phoneme is a vowel. """ return PHONEMES.get(phoneme) == 'vowel' """ Explanation: Next, we create a simple function to determine if a phoneme is a vowel: End of explanation """ # Create a rhyming definition dictionary with open('cmudict.0.7a') as pronunciation_definitions: # Load the CMU phoneme definitions of pronunciation. PRONUNCIATIONS = pronunciation_definitions.readlines() print(PRONUNCIATIONS[80:90]) """ Explanation: The cmudict.0.7a file contains a mapping of spelled words to pronunciations expressed as phonemes: End of explanation """ import re RHYME_DICTIONARY = {} for pronunciation in PRONUNCIATIONS: # For each pronunciation in the list of pronunciations, pronunciation = re.sub(r'\d', '', pronunciation) # strip phomeme stresses in the definition (not interesting to us), tokens = pronunciation.strip().split() # get the tokens that define the pronunciation, word = tokens[0] # the word whose pronunciation is defined is always in position zero of the listed tokens, phonemes = tokens[:0:-1] # the phonemes that define the pronunciation are the rest of the tokens. We reverse these! phonemes_to_rhyme = [] # This will hold the phonemes we use to rhyme words. for phoneme in phonemes: phonemes_to_rhyme.append(phoneme) if is_vowel(phoneme): break # We only need to rhyme from the last phoneme to the final vowel. Remember the phonemes are reversed! RHYME_DICTIONARY[word] = tuple(phonemes_to_rhyme) print('There are {} items in the rhyme dictionary.'.format(len(RHYME_DICTIONARY))) """ Explanation: We're in a position to create a rhyme dictionary we can use to look up words and discover rhymes. End of explanation """ def last_word(line): """ Return the last word in a line (stripping punctuation). Raise ValueError if the last word cannot be identified. 
""" match_for_last_word = re.search(r"([\w']+)\W*$", line) if match_for_last_word: word = match_for_last_word.group(1) word = re.sub(r"'d$", 'ed', word) # expand old english contraction of -ed return word.upper() raise ValueError("No word in line.") """ Explanation: Given that we're rhyming the last word of each line, we need a function to identify what the last word of any given line actually is: End of explanation """ from collections import defaultdict lines_by_rhyme = defaultdict(list) for line in LINES_OF_POETRY: try: rhyme = RHYME_DICTIONARY[last_word(line)] except (KeyError, ValueError): continue lines_by_rhyme[rhyme].append(line) LINES_THAT_RHYME = [l for l in lines_by_rhyme.values() if len(l) > 1] print("Number of rhymes found is: {}".format(len(LINES_THAT_RHYME))) """ Explanation: The next step is to collect all the lines from our source poems into lines that all rhyme. End of explanation """ RHYME_DATA = [] for lines in LINES_THAT_RHYME: lines_by_word = defaultdict(list) for line in lines: end_word = last_word(line) lines_by_word[end_word].append(line) RHYME_DATA.append(dict(lines_by_word)) print(RHYME_DATA[1:3]) """ Explanation: The final transformation of the data is to group the individual rhymes into ending words (so all the lines that end in "look", "nook" and "book" are collected together, for example). This well help us avoid rhyming lines with the same word. End of explanation """ def terminate_poem(poem): """ Given a list of poem lines, fix the punctuation of the last line. Removes any non-word characters and substitutes a random sentence terminator - ., ! or ?. """ last = re.sub(r'\W*$', '', poem[-1]) punc = random.choice(['!', '.', '.', '.', '.', '?', '...']) return poem[:-1] + [last + punc] """ Explanation: Generating Poetry Given the data found in RHYME_DATA we're finally in a position to reassemble rhyming lines from our source poems to make new poetry. It's important to make sure that, no matter the content of the final line, we ensure it ends with the correct punctuation. So we make a function to do this for us: End of explanation """ import random from collections import Counter def build_poem(rhyme_scheme="aabba", rhymes=RHYME_DATA): """ Build a poem given a rhyme scheme. """ groups = Counter(rhyme_scheme) # Work out how many lines of each sort of rhyming group are needed lines = {} # Will hold lines for given rhyming groups. for name, number in groups.items(): candidate = random.choice([r for r in rhymes if len(r) >= number]) # Select candidate rhymes with enough lines. word_ends = list(candidate.keys()) # Get the candidate rhyming words. random.shuffle(word_ends) # Randomly shuffle them. lines_to_use = [] # Will hold the lines selected to use in the final poem for this given rhyming group. for i in range(number): # For all the needed number of lines for this rhyming group, lines_to_use.append(random.choice(candidate[word_ends.pop()])) # Randomly select a line for a new word end. lines[name] = lines_to_use # Add the lines for the rhyming group to the available lines. # Given a selection of lines, we need to order them into the specified rhyming scheme. poem = [] # To hold the ordered list of lines for the new poem. for k in rhyme_scheme: # For each rhyming group name specification for a line... poem.append(lines[k].pop()) # Simply take a line from the specified rhyming group. return terminate_poem(poem) # Return the result as a list with the final line appropriately punctuated. """ Explanation: We also need to be able to define a rhyme scheme. 
For example, "aabba" means the first, second and fifth lines all rhyme (a) and the third and fourth lines rhyme (b). We could, of course write other schemes such as: "aabbaaccaa". Nevertheless, the "aabba" scheme is a safe default. End of explanation """ my_poem = build_poem() # Get an ordered list of the lines encompassing my new poem. poem = ''.join(my_poem) # Turn them into a single printable string. print(poem) """ Explanation: Finally, we can call the build_poem function to get a list of the lines for our new poem. End of explanation """ my_poem = build_poem('aabbaaccaa') poem = ''.join(my_poem) print(poem) """ Explanation: Example output: Breake ill eggs ere they be hatched: The flower in ripen'd bloom unmatch'd Sparrows fighting on the thatch. And where hens lay, and when the duck will hatch. Though by no hand untimely snatch'd... You could also change the rhyming scheme too: End of explanation """
TomAugspurger/engarde
examples/Basics.ipynb
mit
# This will take a few minutes
r = requests.get("http://www.transtats.bts.gov/Download/On_Time_On_Time_Performance_2015_1.zip", stream=True)
with open("otp-1.zip", "wb") as f:
    for chunk in r.iter_content(chunk_size=1024):
        f.write(chunk)
        f.flush()
r.close()
z = zipfile.ZipFile("otp-1.zip")
fp = z.extract('On_Time_On_Time_Performance_2015_1.csv')
columns = ['FlightDate', 'Carrier', 'TailNum', 'FlightNum', 'Origin', 'OriginCityName', 'OriginStateName',
           'Dest', 'DestCityName', 'DestStateName', 'DepTime', 'DepDelay', 'TaxiOut', 'WheelsOn', 'WheelsOn',
           'TaxiIn', 'ArrTime', 'ArrDelay', 'Cancelled', 'Diverted', 'ActualElapsedTime', 'AirTime', 'Distance',
           'CarrierDelay', 'WeatherDelay', 'NASDelay', 'SecurityDelay', 'LateAircraftDelay']
df = pd.read_csv('On_Time_On_Time_Performance_2015_1.csv', usecols=columns, dtype={'DepTime': str})
dep_time = df.DepTime.fillna('').str.pad(4, side='left', fillchar='0')
df['ts'] = pd.to_datetime(df.FlightDate + 'T' + dep_time, format='%Y-%m-%dT%H%M%S')
df = df.drop(['FlightDate', 'DepTime'], axis=1)
"""
Explanation: Just a couple quick examples.
End of explanation
"""
carriers = ['AA', 'AS', 'B6', 'DL', 'US', 'VX', 'WN', 'UA', 'NK', 'MQ', 'OO', 'EV', 'HA', 'F9']
df.pipe(ck.within_set, items={'Carrier': carriers}).Carrier.value_counts().head()
"""
Explanation: Let's suppose that down the road our program can only handle certain carriers; an update to the data adding a new carrier would violate an assumption we hold. We'll use the within_set method to check our assumption.
End of explanation
"""
df.pipe(ck.none_missing, columns=['Carrier', 'TailNum', 'FlightNum'])
"""
Explanation: Great, our assumption was true (at least for now). Surely, we can't count on each flight having a Carrier, TailNum and FlightNum, right?
End of explanation
"""
import engarde.decorators as ed

@ed.within_range({'Counts!': (4000, 110000)})
@ed.within_n_std(3)
def pretty_counts(df):
    return df.Carrier.value_counts().to_frame(name='Counts!')

pretty_counts(df)
"""
Explanation: Note: this isn't too user-friendly yet. I'm planning to make the error messages more informative. Just haven't gotten there yet.
That said, you wouldn't really use engarde to determine whether or not those columns are always not null. Instead, you might find that for January every flight has all of those fields, assume that holds generally, only to be surprised when next month you get a flight without a tail number.
Decorator Interface
Each of your checks can also be written as a decorator on a function that returns a DataFrame. I really like how slick this is. Let's do a nonsense example.
Suppose we want to show the counts of each Carrier, but our UI designer worries that if things are too spread out the bar graph will look weird (again, silly example). We'll assert that the counts are within a comfortable range and that each count is within 3 standard deviations of the mean.
End of explanation
"""
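# Added sketch (not in the original post): the same checks compose nicely in a single
# pipeline, using only the engarde checks demonstrated above (`ck` is the checks module
# imported earlier in the notebook).
(df.pipe(ck.none_missing, columns=['Carrier', 'TailNum', 'FlightNum'])
   .pipe(ck.within_set, items={'Carrier': carriers}))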
Stargator/gregreda-jekylified
content/notebooks/cohort-analysis.ipynb
mit
df['OrderPeriod'] = df.OrderDate.apply(lambda x: x.strftime('%Y-%m')) df.head() """ Explanation: 1. Create a period column based on the OrderDate Since we're doing monthly cohorts, we'll be looking at the total monthly behavior of our users. Therefore, we don't want granular OrderDate data (right now). End of explanation """ df.set_index('UserId', inplace=True) df['CohortGroup'] = df.groupby(level=0)['OrderDate'].min().apply(lambda x: x.strftime('%Y-%m')) df.reset_index(inplace=True) df.head() """ Explanation: 2. Determine the user's cohort group (based on their first order) Create a new column called CohortGroup, which is the year and month in which the user's first purchase occurred. End of explanation """ grouped = df.groupby(['CohortGroup', 'OrderPeriod']) # count the unique users, orders, and total revenue per Group + Period cohorts = grouped.agg({'UserId': pd.Series.nunique, 'OrderId': pd.Series.nunique, 'TotalCharges': np.sum}) # make the column names more meaningful cohorts.rename(columns={'UserId': 'TotalUsers', 'OrderId': 'TotalOrders'}, inplace=True) cohorts.head() """ Explanation: 3. Rollup data by CohortGroup & OrderPeriod Since we're looking at monthly cohorts, we need to aggregate users, orders, and amount spent by the CohortGroup within the month (OrderPeriod). End of explanation """ def cohort_period(df): """ Creates a `CohortPeriod` column, which is the Nth period based on the user's first purchase. Example ------- Say you want to get the 3rd month for every user: df.sort(['UserId', 'OrderTime', inplace=True) df = df.groupby('UserId').apply(cohort_period) df[df.CohortPeriod == 3] """ df['CohortPeriod'] = np.arange(len(df)) + 1 return df cohorts = cohorts.groupby(level=0).apply(cohort_period) cohorts.head() """ Explanation: 4. Label the CohortPeriod for each CohortGroup We want to look at how each cohort has behaved in the months following their first purchase, so we'll need to index each cohort to their first purchase month. For example, CohortPeriod = 1 will be the cohort's first month, CohortPeriod = 2 is their second, and so on. This allows us to compare cohorts across various stages of their lifetime. End of explanation """ x = df[(df.CohortGroup == '2009-01') & (df.OrderPeriod == '2009-01')] y = cohorts.ix[('2009-01', '2009-01')] assert(x['UserId'].nunique() == y['TotalUsers']) assert(x['TotalCharges'].sum().round(2) == y['TotalCharges'].round(2)) assert(x['OrderId'].nunique() == y['TotalOrders']) x = df[(df.CohortGroup == '2009-01') & (df.OrderPeriod == '2009-09')] y = cohorts.ix[('2009-01', '2009-09')] assert(x['UserId'].nunique() == y['TotalUsers']) assert(x['TotalCharges'].sum().round(2) == y['TotalCharges'].round(2)) assert(x['OrderId'].nunique() == y['TotalOrders']) x = df[(df.CohortGroup == '2009-05') & (df.OrderPeriod == '2009-09')] y = cohorts.ix[('2009-05', '2009-09')] assert(x['UserId'].nunique() == y['TotalUsers']) assert(x['TotalCharges'].sum().round(2) == y['TotalCharges'].round(2)) assert(x['OrderId'].nunique() == y['TotalOrders']) """ Explanation: 5. Make sure we did all that right Let's test data points from the original DataFrame with their corresponding values in the new cohorts DataFrame to make sure all our data transformations worked as expected. As long as none of these raise an exception, we're good. 
End of explanation """ # reindex the DataFrame cohorts.reset_index(inplace=True) cohorts.set_index(['CohortGroup', 'CohortPeriod'], inplace=True) # create a Series holding the total size of each CohortGroup cohort_group_size = cohorts['TotalUsers'].groupby(level=0).first() cohort_group_size.head() """ Explanation: User Retention by Cohort Group We want to look at the percentage change of each CohortGroup over time -- not the absolute change. To do this, we'll first need to create a pandas Series containing each CohortGroup and its size. End of explanation """ cohorts['TotalUsers'].head() """ Explanation: Now, we'll need to divide the TotalUsers values in cohorts by cohort_group_size. Since DataFrame operations are performed based on the indices of the objects, we'll use unstack on our cohorts DataFrame to create a matrix where each column represents a CohortGroup and each row is the CohortPeriod corresponding to that group. To illustrate what unstack does, recall the first five TotalUsers values: End of explanation """ cohorts['TotalUsers'].unstack(0).head() """ Explanation: And here's what they look like when we unstack the CohortGroup level from the index: End of explanation """ user_retention = cohorts['TotalUsers'].unstack(0).divide(cohort_group_size, axis=1) user_retention.head(10) """ Explanation: Now, we can utilize broadcasting to divide each column by the corresponding cohort_group_size. The resulting DataFrame, user_retention, contains the percentage of users from the cohort purchasing within the given period. For instance, 38.4% of users in the 2009-03 purchased again in month 3 (which would be May 2009). End of explanation """ user_retention[['2009-06', '2009-07', '2009-08']].plot(figsize=(10,5)) plt.title('Cohorts: User Retention') plt.xticks(np.arange(1, 12.1, 1)) plt.xlim(1, 12) plt.ylabel('% of Cohort Purchasing'); # Creating heatmaps in matplotlib is more difficult than it should be. # Thankfully, Seaborn makes them easy for us. # http://stanford.edu/~mwaskom/software/seaborn/ import seaborn as sns sns.set(style='white') plt.figure(figsize=(12, 8)) plt.title('Cohorts: User Retention') sns.heatmap(user_retention.T, mask=user_retention.T.isnull(), annot=True, fmt='.0%'); """ Explanation: Finally, we can plot the cohorts over time in an effort to spot behavioral differences or similarities. Two common cohort charts are line graphs and heatmaps, both of which are shown below. Notice that the first period of each cohort is 100% -- this is because our cohorts are based on each user's first purchase, meaning everyone in the cohort purchased in month 1. End of explanation """
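# A compact follow-up (added sketch, not in the original post): average retention per
# CohortPeriod across all cohort groups, which summarizes the heatmap above in a single curve.
user_retention.mean(axis=1)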
guruucsd/EigenfaceDemo
python/Neural Network Tricks.ipynb
mit
%pycat neural_network.py from sklearn.decomposition import PCA from sklearn.cross_validation import train_test_split, ShuffleSplit from sklearn.preprocessing import OneHotEncoder from neural_network import NeuralNetwork # The classifier network class ClassifierNetwork(NeuralNetwork): """Neural network with classification error plots.""" def errors_for(self, t, x): x, t = self.preprocessed(x, t) y = self.predictions_for(x) mse = multiply(y-t,y-t).mean() mce = (y.argmax(axis=1) != t.argmax(axis=1)).mean() return mse, mce def train_classifier(self, dataset, fig=None, ax=None, epochs=1000): """Perform the classification task for the data using the given network, without using train-test split.""" X, T = dataset.data, dataset.target _X, _T = self.preprocessed(X, T) errors=[] for epoch in range(epochs): self.update_weights(_T, _X) errors.append(self.errors_for(T, X)) if fig is not None and mod(epoch+1, 100) == 0: aerrors=array(errors).T self.plot_errors(ax, aerrors.T, epoch, epochs, ylabel='Errors', ylim=3.0) ax.legend(['RMSE', 'RMCE'], loc='ba') clear_output(wait=True) display(fig) ax.cla() if errors[-1][1] == 0: # Perfect classification break plt.close() return errors[-1] def plot_errors(self, ax, errors, epoch, epochs, ylabel, ylim=1.0): """Plots the error graph.""" ax.plot(arange(epoch), errors[:epoch]) ax.set_xlim([0, epochs]) ax.set_ylim([0, ylim]) ax.set_xlabel("Training epoch") ax.set_ylabel(ylabel) ax.set_title(ylabel) ax.grid() ax.legend(['Training', 'Test'], loc="best") class ClassifierNetworkWithOneHot(ClassifierNetwork): """Encodes target values using one-hot encoding.""" def preprocessed(self, X, T=None): if T is not None: if not hasattr(self, 'encoder'): self.encoder = OneHotEncoder(sparse=False).fit(T[:,newaxis]) T = self.encoder.transform(T[:,newaxis])*2 - 1 return super(ClassifierNetworkWithOneHot, self).preprocessed(X, T) # Classifier with PCA preprocessing class ClassifierNetworkForImages(ClassifierNetworkWithOneHot): """Applies PCA to the input data.""" def preprocessed(self, X, T=None): if not hasattr(self, 'pca'): self.pca = PCA(n_components = self.num_nodes[0], whiten=True, copy=True).fit(X) return super(ClassifierNetworkForImages, self).preprocessed(self.pca.transform(X),T) def train_classifier(self, dataset, fig=None, axs=None, epochs=1000, batch_size=0.1, test_size=0.2): """Perform the classification task for the data using the given network.""" # Split to training and test X_train, X_test, T_train, T_test = train_test_split(dataset.data, dataset.target, test_size=test_size) errors=[] for epoch, epochs in self.train(X_train, T_train, epochs=epochs, batch_size=batch_size): errors.append(self.errors_for(T_train, X_train) + self.errors_for(T_test, X_test)) if fig is not None and mod(epoch+1, 100) == 0: aerrors=array(errors).T self.plot_errors(axs[0], aerrors[::2].T, epoch, epochs, ylabel='RMSE', ylim=3.0) self.plot_errors(axs[1], aerrors[1::2].T,epoch, epochs, ylabel='Classification Error', ylim=1.0) clear_output(wait=True) display(fig) [ax.cla() for ax in axs] plt.close() train_rmse, train_rce, test_rmse, test_rce = errors[-1] return train_rmse, test_rmse, train_rce, test_rce """ Explanation: <!--bibtex @incollection{LeCun:2012vf, author = {LeCun, Yann A and Bottou, Leon and Orr, Genevieve B and Muller, Klaus-Robert}, title = {{Efficient backprop}}, booktitle = {Neural Networks: tricks of the trade}, year = {2012}, pages = {9--48}, publisher = {Springer} } --> Neural Network Tricks In this notebook, I'll show the effects of various techniques ("tricks") used to improve 
the performance of neural networks. Most of them come from the <a name="ref-1"/>(LeCun, Bottou, Orr and Muller, 2012) paper. Previously, we built a basic neural network in the "Backprop Exercise" notebook. Here, I'll use a slightly refactored version of the NeuralNetwork class: End of explanation """ from sklearn.datasets.base import Bunch from sklearn.datasets import load_digits # The XOR dataset dataset_xor = Bunch() dataset_xor['data'] = array([ [ 1,-1], [-1, 1], [ 1, 1], [-1,-1]], dtype=float) dataset_xor['target'] = array([ 1, 1, 0, 0], dtype=float) dataset_digits=load_digits() """ Explanation: Here are some datasets we'll be using: End of explanation """ base_xor_net = ClassifierNetworkWithOneHot(num_nodes=[2, 2, 2]) print(base_xor_net.train_classifier(dataset_xor, *plt.subplots(figsize=(5,5)), epochs=500)) """ Explanation: Now let's see how the "basic" network does for these tasks. End of explanation """ base_digits_net = ClassifierNetworkForImages(num_nodes=[20, 20, 10]) print(base_digits_net.train_classifier(dataset_digits, *plt.subplots(1, 2, figsize=(10,5)), epochs=1000)) """ Explanation: Note that the network often gets stuck in a local minimum. End of explanation """ from neural_network import ActivationFunction class FunnyTanh(ActivationFunction): def apply(self, x): return 1.7159 * tanh(x*2/3) + 0.001 * x funnytanh_xor_net = ClassifierNetworkWithOneHot(num_nodes=[2, 2, 2], activation_function=FunnyTanh()) # Train 10 times and see how many times it gets stuck results=zeros((10,2)) epochs=300 for result in results: Ws = base_xor_net.initial_weights() # keep the same initial weights base_xor_net.Ws = Ws funnytanh_xor_net.Ws = Ws result[0]=base_xor_net.train_classifier(dataset_xor, epochs=epochs)[1] result[1]=funnytanh_xor_net.train_classifier(dataset_xor, epochs=epochs)[1] plt.bar(arange(2), (results<1e-8).mean(axis=0) * 100.0) plt.xticks(arange(2)+.5, ['base', 'funnytanh']) plt.xlim([-.25, 2.25]) plt.ylim([0, 100.0]) plt.grid() plt.title('Pct successful training by %d epochs (out of %d trials)'% (epochs, results.shape[0])) None """ Explanation: Activation function Let's try the "funny tanh" as the activation function. For the XOR dataset, this ameliorates the problem of the network getting stuck in the local minimum. End of explanation """ def better_initial_weights(self): return [standard_normal((n + 1, m)) / sqrt(n + 1) for n, m in zip(self.num_nodes[:-1], self.num_nodes[1:])] better_weight_xor_net = ClassifierNetworkWithOneHot(num_nodes=[2, 2, 2]) # Train 10 times and see how many times it gets stuck results=zeros((10,2)) epochs=300 for result in results: base_xor_net.Ws = base_xor_net.initial_weights() better_weight_xor_net.Ws = better_initial_weights(better_weight_xor_net) result[0]=base_xor_net.train_classifier(dataset_xor, epochs=epochs)[1] result[1]=better_weight_xor_net.train_classifier(dataset_xor, epochs=epochs)[1] plt.bar(arange(2), (results<1e-8).mean(axis=0) * 100.0) plt.xticks(arange(2)+.5, ['base', 'better_weight']) plt.xlim([-.25, 2.25]) plt.ylim([0, 100.0]) plt.grid() plt.title('Pct successful training by %d epochs (out of %d trials)'% (epochs, results.shape[0])) None """ Explanation: Better initial weights Let's change the initial random weights to have standard deviation of $1/\sqrt m$, where $m$ is the number of connection feeding into the node. 
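A rough justification (my sketch of the standard argument, not from the original notebook): if the $m$ inputs to a node are roughly zero-mean with unit variance and the weights are drawn independently with $\mathrm{Var}(w) = 1/m$, then the pre-activation $a = \sum_{i=1}^{m} w_i x_i$ has $\mathrm{Var}(a) = m \cdot \mathrm{Var}(w) \cdot \mathrm{Var}(x) = 1$, which keeps the tanh-style units out of their saturated regions at the start of training.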
End of explanation """ from neural_network import _with_bias class ClassifierNetworkWithMomentum(ClassifierNetworkWithOneHot): def __init__(self, *args, **kwargs): super(ClassifierNetworkWithMomentum, self).__init__(*args, **kwargs) self.momentum = kwargs['momentum'] if kwargs.has_key('momentum') else 0.9 self.Vs = [zeros(W.shape) for W in self.Ws] def gradient_descent(self, deltas, zs): N = zs[0].shape[0] Js= [self.eta * dot(_with_bias(z).T, delta) / N for W, z, delta in zip(self.Ws, zs[:-1], deltas)] self.Vs = [self.momentum * V - J for V, J in zip(self.Vs, Js)] return [W + V for W, V in zip(self.Vs, self.Ws)] momentum_xor_net = ClassifierNetworkWithMomentum(num_nodes=[2, 2, 2], eta=0.05) #print(momentum_xor_net.train_classifier(dataset_xor, *plt.subplots(figsize=(5,5)), epochs=500)) # Train 10 times and see how many times it gets stuck results=zeros((10,2)) epochs=600 for result in results: Ws = base_xor_net.initial_weights() base_xor_net.Ws = Ws momentum_xor_net.Ws = Ws result[0]=base_xor_net.train_classifier(dataset_xor, epochs=epochs)[1] result[1]=momentum_xor_net.train_classifier(dataset_xor, epochs=epochs)[1] plt.bar(arange(2), (results<1e-8).mean(axis=0) * 100.0) plt.xticks(arange(2)+.5, ['base', 'momentum']) plt.xlim([-.25, 2.25]) plt.ylim([0, 100.0]) plt.grid() plt.title('Pct successful training by %d epochs (out of %d trials)'% (epochs, results.shape[0])) None """ Explanation: Momentum Let's add the momentum to the stochastic gradient update. Now, instead of updating the weights as $$ W \leftarrow W - \eta \frac{\partial E}{\partial W} $$ We will keep the previous weight update $V$ and momentum $\mu$ so that: $$ \begin{align} V &\leftarrow \mu V - \eta \frac{\partial E}{\partial W} \ W &\leftarrow W + V \end{align} $$ The momentum $\mu$ modulates how much of the previous weight update is reflected in the current update. End of explanation """ class AutoEncoderNetwork(ClassifierNetwork): def train_unsupervised(self, dataset, fig=None, ax=None, epochs=1000): """Perform unsupervised learning from the data.""" X = self.preprocessed(dataset.data) T = X.copy() errors=[] for epoch in range(epochs): self.update_weights(T, X) errors.append(self.errors_for(T, X)) if fig is not None and mod(epoch+1, 100) == 0: aerrors=array(errors).T self.plot_errors(ax, aerrors.T, epoch, epochs, ylabel='Errors', ylim=3.0) ax.legend(['RMSE', '(Ignore this)'], loc='ba') clear_output(wait=True) display(fig) ax.cla() plt.close() return errors[-1] ae_xor_net = AutoEncoderNetwork(num_nodes=[2, 2, 2]) print(ae_xor_net.train_unsupervised(dataset_xor, *plt.subplots(figsize=(5,5)), epochs=500)) """ Explanation: Pre-train using autoencoder We'll first perform an "unsupervised learning" using an auto-encoder: instead of predicting the target values, we'll train it to predict the input values. 
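A useful way to think about the scale (a hedged side note, not from the original notebook): if the gradient were constant, the update would settle at the geometric-series limit $V = -\frac{\eta}{1-\mu}\frac{\partial E}{\partial W}$, so $\mu = 0.9$ effectively multiplies the step size by about $10$ — which is presumably why a smaller learning rate ($\eta = 0.05$) is used in the cell above.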
End of explanation """ ae_xor_net = AutoEncoderNetwork(num_nodes=[2, 2, 2]) aeweight_xor_net = ClassifierNetworkWithOneHot(num_nodes=[2, 2, 2], activation_function=FunnyTanh()) # Train 10 times and see how many times it gets stuck results=zeros((10,2)) epochs=300 for result in results: Ws = base_xor_net.initial_weights() base_xor_net.Ws = Ws ae_xor_net.Ws = ae_xor_net.initial_weights() ae_xor_net.train_unsupervised(dataset_xor, epochs=100) # Only train for a short amount Wh = ae_xor_net.Ws[0] aeweight_xor_net.Ws = [W.copy() for W in Ws] aeweight_xor_net.Ws[0] = Wh result[0]=base_xor_net.train_classifier(dataset_xor, epochs=epochs)[1] result[1]=aeweight_xor_net.train_classifier(dataset_xor, epochs=epochs)[1] plt.bar(arange(2), (results<1e-8).mean(axis=0) * 100.0) plt.xticks(arange(2)+.5, ['base', 'autoencoder']) plt.xlim([-.25, 2.25]) plt.ylim([0, 100.0]) plt.grid() plt.title('Pct successful training by %d epochs (out of %d trials)'% (epochs, results.shape[0])) None """ Explanation: Then, we'd train the classifier network starting from the hidden weights that was learned. End of explanation """ class ClassifierNetwork2(ClassifierNetworkWithMomentum): def __init__(self, *args, **kwargs): if not kwargs.has_key('activation_function'): kwargs['activation_function'] = FunnyTanh() super(ClassifierNetwork2, self).__init__(*args, **kwargs) def initial_weights(self): Ws0 = [standard_normal((n + 1, m)) / sqrt(n + 1) for n, m in zip(self.num_nodes[:-1], self.num_nodes[1:])] ae_network = AutoEncoderNetwork(num_nodes=[self.num_nodes[0], self.num_nodes[1], self.num_nodes[0]],Ws = Ws0) ae_network.train_unsupervised(dataset_xor, epochs=100) # Only train for a short amount Ws0[0] = ae_network.Ws[0] return Ws0 improved_xor_net = ClassifierNetwork2(num_nodes=[2, 2, 2], eta=0.05) # Train 10 times and see how many times it gets stuck results=zeros((10,2)) epochs=600 for result in results: base_xor_net.Ws = base_xor_net.initial_weights() improved_xor_net.Ws = improved_xor_net.initial_weights() result[0]=base_xor_net.train_classifier(dataset_xor, epochs=epochs)[1] result[1]=improved_xor_net.train_classifier(dataset_xor, epochs=epochs)[1] plt.bar(arange(2), (results<1e-8).mean(axis=0) * 100.0) plt.xticks(arange(2)+.5, ['base', 'improved']) plt.xlim([-.25, 2.25]) plt.ylim([0, 100.0]) plt.grid() plt.title('Pct successful training by %d epochs (out of %d trials)'% (epochs, results.shape[0])) None """ Explanation: Putting everything together Now we'll combine all of the techniques above. We'll also run it longer (1000 epochs) to see if End of explanation """
taylorwood/Kaggle.HomeDepot
ProjectSearchRelevance.Python/Home Depot Product Search Relevance Features.ipynb
mit
import graphlab as gl
"""
Explanation: Home Depot Product Search Relevance
The challenge is to predict a relevance score for the provided combinations of search terms and products. To create the ground truth labels, Home Depot has crowdsourced the search/product pairs to multiple human raters.
GraphLab Create
This notebook uses the GraphLab Create machine learning IPython module. You need a personal licence to run this code.
End of explanation
"""
train = gl.SFrame.read_csv("../data/train.csv")
test = gl.SFrame.read_csv("../data/test.csv")
desc = gl.SFrame.read_csv("../data/product_descriptions.csv")
attr = gl.SFrame.read_csv("../data/attributes.csv")
"""
Explanation: Load data from CSV files
End of explanation
"""
# merge train with description
train = train.join(desc, on = 'product_uid', how = 'left')
# merge test with description
test = test.join(desc, on = 'product_uid', how = 'left')
# if an attribute's value is "No" we don't need it
print len(attr)
attr = attr[attr['value'] != "No"]
print len(attr)
# if an attribute's value is "Yes" we copy the attribute name into the value so we can search for it
attr['value'] = attr.apply(lambda x: x['name'] if x['value'] == "Yes" else x['value'])
"""
Explanation: Data merging and feature engineering
End of explanation
"""
brands = attr[attr['name'] == "MFG Brand Name"]
brands.head()
"""
Explanation: Let's select brands
End of explanation
"""
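# Quick look (an added sketch, not in the original notebook): how many products carry an
# "MFG Brand Name" attribute, and how many distinct brand names appear.
print len(brands)
print len(brands['value'].unique())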
featuresDistance) #let's take a look at the weights before we plot model1.get("coefficients") ''' predictions_test = model1.predict(test) test_errors = predictions_test - test['relevance'] RSS_test = sum(test_errors * test_errors) print RSS_test ''' predictions_test = model1.predict(test) predictions_test submission = gl.SFrame(test['id']) submission.add_column(predictions_test) submission.rename({'X1': 'id', 'X2':'relevance'}) submission['relevance'] = submission.apply(lambda x: 3.0 if x['relevance'] > 3.0 else x['relevance']) submission['relevance'] = submission.apply(lambda x: 1.0 if x['relevance'] < 1.0 else x['relevance']) submission['relevance'] = submission.apply(lambda x: str(x['relevance'])) submission.export_csv('../data/submission.csv', quote_level = 3) #gl.canvas.set_target('ipynb') """ Explanation: TF-IDF with linear regression End of explanation """
karlstroetmann/Artificial-Intelligence
Python/Set.ipynb
gpl-2.0
class Set: def __init__(self): self.mKey = None self.mLeft = None self.mRight = None self.mHeight = 0 """ Explanation: Sets implemented as AVL Trees This notebook implements <em style="color:blue;">sets</em> as <a href="https://en.wikipedia.org/wiki/AVL_tree">AVL trees</a>. The set $\mathcal{A}$ of <em style="color:blue;">AVL trees</em> is defined inductively: $\texttt{Nil} \in \mathcal{A}$. $\texttt{Node}(k,l,r) \in \mathcal{A}\quad$ iff $\texttt{Node}(k,l,r) \in \mathcal{B}_<$, $l, r \in \mathcal{A}$, and $|l.\texttt{height}() - r.\texttt{height}()| \leq 1$. According to this definition, an AVL tree is an <em style="color:blue;">ordered binary tree</em> such that for every node $\texttt{Node}(k,l,r)$ in this tree the height of the left subtree $l$ and the right subtree $r$ differ at most by one. The class Set represents the nodes of an AVL tree. This class has the following member variables: mKey is the key stored at the root of the tree, mLeft is the left subtree, mRight is the right subtree, and mHeight is the height. The constructor __init__ creates the empty tree. End of explanation """ def isEmpty(self): return self.mKey is None Set.isEmpty = isEmpty Set.__bool__ = isEmpty def __bool__(self): return self.mKey is not None Set.__bool__ = __bool__ """ Explanation: Given an ordered binary tree $t$, the expression $t.\texttt{isEmpty}()$ checks whether $t$ is the empty tree. End of explanation """ def member(self, key): if self.isEmpty(): return elif self.mKey == key: return True elif key < self.mKey: return self.mLeft.member(key) else: return self.mRight.member(key) Set.member = member Set.__contains__ = member """ Explanation: Given an ordered binary tree $t$ and a key $k$, the expression $t.\texttt{member}(k)$ returns True if the key $k$ is stored in the tree $t$. The method member is defined inductively as follows: - $\texttt{Nil}.\texttt{member}(k) = \Omega$, because the empty tree is interpreted as the empty map. $\texttt{Node}(k, l, r).\texttt{member}(k) = v$, because the node $\texttt{Node}(k,l,r)$ stores the assignment $k \mapsto v$. - $k_1 < k_2 \rightarrow \texttt{Node}(k_2, l, r).\texttt{member}(k_1) = l.\texttt{member}(k_1)$, because if $k_1$ is less than $k_2$, then any mapping for $k_1$ has to be stored in the left subtree $l$. - $k_1 > k_2 \rightarrow \texttt{Node}(k_2, l, r).\texttt{member}(k_1) = r.\texttt{member}(k_1)$, because if $k_1$ is greater than $k_2$, then any mapping for $k_1$ has to be stored in the right subtree $r$. End of explanation """ def insert(self, key): if self.isEmpty(): self.mKey = key self.mLeft = Set() self.mRight = Set() self.mHeight = 1 elif self.mKey == key: pass elif key < self.mKey: self.mLeft.insert(key) self._restore() else: self.mRight.insert(key) self._restore() Set.insert = insert """ Explanation: The method $\texttt{insert}()$ is specified via recursive equations. - $\texttt{Nil}.\texttt{insert}(k) = \texttt{Node}(k, \texttt{Nil}, \texttt{Nil})$, - $\texttt{Node}(k, l, r).\texttt{insert}(k) = \texttt{Node}(k, l, r)$, - $k_1 < k_2 \rightarrow \texttt{Node}(k_2, l, r).\texttt{insert}(k_1) = \texttt{Node}\bigl(k_2, l.\texttt{insert}(k_1), r\bigr).\texttt{restore}()$, - $k_1 > k_2 \rightarrow \texttt{Node}(k_2, l, r).\texttt{insert}\bigl(k_1\bigr) = \texttt{Node}\bigl(k_2, l, r.\texttt{insert}(k_1)\bigr).\texttt{restore}()$. The function $\texttt{restore}$ is an auxiliary function that is defined below. This function restores the balancing condition if it is violated after an insertion. 
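As a tiny illustration (an addition of mine, not part of the original text), inserting ascending keys already exercises this rebalancing:
```python
s = Set()
for k in [1, 2, 3]:
    s.insert(k)     # inserting 3 unbalances the root, and restore() rotates 2 up
```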
End of explanation """ def delete(self, key): if self.isEmpty(): return if key == self.mKey: if self.mLeft.isEmpty(): self._update(self.mRight) elif self.mRight.isEmpty(): self._update(self.mLeft) else: self.mRight, self.mKey = self.mRight._delMin() elif key < self.mKey: self.mLeft.delete(key) else: self.mRight.delete(key) Set.delete = delete """ Explanation: The method $\texttt{self}.\texttt{delete}(k)$ removes the key $k$ from the tree $\texttt{self}$. It is defined as follows: $\texttt{Nil}.\texttt{delete}(k) = \texttt{Nil}$, $\texttt{Node}(k,\texttt{Nil},r).\texttt{delete}(k) = r$, $\texttt{Node}(k,l,\texttt{Nil}).\texttt{delete}(k) = l$, $l \not= \texttt{Nil} \,\wedge\, r \not= \texttt{Nil} \,\wedge\, \langle r',k_{min} \rangle := r.\texttt{delMin}() \;\rightarrow\; \texttt{Node}(k,l,r).\texttt{delete}(k) = \texttt{Node}(k_{min},l,r')$ $k_1 < k_2 \rightarrow \texttt{Node}(k_2,l,r).\texttt{delete}(k_1) = \texttt{Node}\bigl(k_2,l.\texttt{delete}(k_1),r\bigr)$, $k_1 > k_2 \rightarrow \texttt{Node}(k_2,l,r).\texttt{delete}(k_1) = \texttt{Node}\bigl(k_2,l,r.\texttt{delete}(k_1)\bigr)$. End of explanation """ def _delMin(self): if self.mLeft.isEmpty(): return self.mRight, self.mKey else: ls, km = self.mLeft._delMin() self.mLeft = ls self._restore() return self, km Set._delMin = _delMin """ Explanation: The method $\texttt{self}.\texttt{delMin}()$ removes the smallest key from the given tree $\texttt{self}$ and returns a pair of the form $$ (\texttt{self}, k_m) $$ where $\texttt{self}$ is the tree that remains after removing the smallest key, while $k_m$ is the smallest key that has been found. The function is defined as follows: $\texttt{Node}(k, \texttt{Nil}, r).\texttt{delMin}() = \langle r, k \rangle$, $l\not= \texttt{Nil} \wedge \langle l',k_{min}\rangle := l.\texttt{delMin}() \;\rightarrow\; \texttt{Node}(k, l, r).\texttt{delMin}() = \langle \texttt{Node}(k, l', r).\texttt{restore}(), k_{min} \rangle $ End of explanation """ def _update(self, t): self.mKey = t.mKey self.mLeft = t.mLeft self.mRight = t.mRight self.mHeight = t.mHeight Set._update = _update """ Explanation: Given two ordered binary trees $s$ and $t$, the expression $s.\texttt{update}(t)$ overwrites the attributes of $s$ with the corresponding attributes of $t$. End of explanation """ def _restore(self): if abs(self.mLeft.mHeight - self.mRight.mHeight) <= 1: self._restoreHeight() return if self.mLeft.mHeight > self.mRight.mHeight: k1, l1, r1 = self.mKey, self.mLeft, self.mRight k2, l2, r2 = l1.mKey, l1.mLeft, l1.mRight if l2.mHeight >= r2.mHeight: self._setValues(k2, l2, createNode(k1, r2, r1)) else: k3, l3, r3 = r2.mKey, r2.mLeft, r2.mRight self._setValues(k3, createNode(k2, l2, l3), createNode(k1, r3, r1)) elif self.mRight.mHeight > self.mLeft.mHeight: k1, l1, r1 = self.mKey, self.mLeft, self.mRight k2, l2, r2 = r1.mKey, r1.mLeft, r1.mRight if r2.mHeight >= l2.mHeight: self._setValues(k2, createNode(k1, l1, l2), r2) else: k3, l3, r3 = l2.mKey, l2.mLeft, l2.mRight self._setValues(k3, createNode(k1, l1, l3), createNode(k2, r3, r2)) self._restoreHeight() Set._restore = _restore """ Explanation: The function $\texttt{restore}(\texttt{self})$ restores the balancing condition of the given binary tree at the root node and recompute the variable $\texttt{mHeight}$. The method $\texttt{restore}$ is specified via conditional equations. $\texttt{Nil}.\texttt{restore}() = \texttt{Nil}$, because the empty tree already is an AVL tree. 
- $|l.\texttt{height}() - r.\texttt{height}()| \leq 1 \rightarrow \texttt{Node}(k,l,r).\texttt{restore}() = \texttt{Node}(k,l,r)$. If the balancing condition is satisfied, then nothing needs to be done. - $\begin{array}[t]{cl} & l_1.\texttt{height}() = r_1.\texttt{height}() + 2 \ \wedge & l_1 = \texttt{Node}(k_2,l_2,r_2) \ \wedge & l_2.\texttt{height}() \geq r_2.\texttt{height}() \[0.2cm] \rightarrow & \texttt{Node}(k_1,l_1,r_1).\texttt{restore}() = \texttt{Node}\bigl(k_2,l_2,\texttt{Node}(k_1,r_2,r_1)\bigr) \end{array} $ - $\begin{array}[t]{cl} & l_1.\texttt{height}() = r_1.\texttt{height}() + 2 \ \wedge & l_1 = \texttt{Node}(k_2,l_2,r_2) \ \wedge & l_2.\texttt{height}() < r_2.\texttt{height}() \ \wedge & r_2 = \texttt{Node}(k_3,l_3,r_3) \ \rightarrow & \texttt{Node}(k_1,l_1,r_1).\texttt{restore}() = \texttt{Node}\bigl(k_3,\texttt{Node}(k_2,l_2,l_3),\texttt{Node}(k_1,r_3,r_1) \bigr) \end{array} $ - $\begin{array}[t]{cl} & r_1.\texttt{height}() = l_1.\texttt{height}() + 2 \ \wedge & r_1 = \texttt{Node}(k_2,l_2,r_2) \ \wedge & r_2.\texttt{height}() \geq l_2.\texttt{height}() \[0.2cm] \rightarrow & \texttt{Node}(k_1,l_1,r_1).\texttt{restore}() = \texttt{Node}\bigl(k_2,\texttt{Node}(k_1,l_1,l_2),r_2\bigr) \end{array} $ - $\begin{array}[t]{cl} & r_1.\texttt{height}() = l_1.\texttt{height}() + 2 \ \wedge & r_1 = \texttt{Node}(k_2,l_2,r_2) \ \wedge & r_2.\texttt{height}() < l_2.\texttt{height}() \ \wedge & l_2 = \texttt{Node}(k_3,l_3,r_3) \ \rightarrow & \texttt{Node}(k_1,l_1,r_1).\texttt{restore}() = \texttt{Node}\bigl(k_3,\texttt{Node}(k_1,l_1,l_3),\texttt{Node}(k_2,r_3,r_2) \bigr) \end{array} $ End of explanation """ def _setValues(self, k, l, r): self.mKey = k self.mLeft = l self.mRight = r Set._setValues = _setValues def _restoreHeight(self): self.mHeight = max(self.mLeft.mHeight, self.mRight.mHeight) + 1 Set._restoreHeight = _restoreHeight """ Explanation: The function $\texttt{self}.\texttt{_setValues}(k, l, r)$ overwrites the member variables of the node $\texttt{self}$ with the given values. End of explanation """ def createNode(key, left, right): node = Set() node.mKey = key node.mLeft = left node.mRight = right node.mHeight = max(left.mHeight, right.mHeight) + 1 return node """ Explanation: The function $\texttt{createNode}(k, l, r)$ creates an AVL-tree of that has the key $k$ stored at its root, left subtree $l$ and right subtree $r$. End of explanation """ def pop(self): if self.mKey == None: raise KeyError if self.mLeft.mKey == None: key = self.mKey self._update(self.mRight) return key return self.mLeft.pop() Set.pop = pop """ Explanation: The method $t.\texttt{pop}()$ take an AVL tree $t$ and removes and returns the smallest key that is present in $t$. 
It is specified as follows: - $\texttt{Nil}.\texttt{pop}() = \Omega$ - $\texttt{Node}(k,\texttt{Nil}, r).\texttt{pop}() = \langle k, r\rangle$ - $l \not=\texttt{Nil} \wedge \langle k',l'\rangle := l.\texttt{pop}() \rightarrow \texttt{Node}(k, l, r).\texttt{pop}() = \langle k', \texttt{Node}(k, l', r)\rangle$ End of explanation """ import graphviz as gv """ Explanation: Display Code End of explanation """ def toDot(self): Set.sNodeCount = 0 # this is a static variable of the class Set dot = gv.Digraph(node_attr={'shape': 'record', 'style': 'rounded'}) NodeDict = {} self._assignIDs(NodeDict) for n, t in NodeDict.items(): if t.mKey != None: dot.node(str(n), label=str(t.mKey)) else: dot.node(str(n), label='', shape='point') for n, t in NodeDict.items(): if not t.mLeft == None: dot.edge(str(n), str(t.mLeft.mID)) if not t.mRight == None: dot.edge(str(n), str(t.mRight.mID)) return dot Set.toDot = toDot """ Explanation: Given an ordered binary tree, this function renders the tree graphically using graphviz. End of explanation """ def _assignIDs(self, NodeDict): Set.sNodeCount += 1 self.mID = Set.sNodeCount NodeDict[self.mID] = self if self.isEmpty(): return self.mLeft ._assignIDs(NodeDict) self.mRight._assignIDs(NodeDict) Set._assignIDs = _assignIDs """ Explanation: This method assigns a unique identifier with each node. The dictionary NodeDict maps these identifiers to the nodes where they occur. End of explanation """ def __len__(self): if self.isEmpty(): return 0 return 1 + len(self.mLeft) + len(self.mRight) Set.__len__ = __len__ """ Explanation: This method counts all nodes in the tree. End of explanation """ def demo(): m = Set() m.insert("anton") m.insert("hugo") m.insert("gustav") m.insert("jens") m.insert("hubert") m.insert("andre") m.insert("philipp") m.insert("rene") return m t = demo() t.toDot() while not t.isEmpty(): print(t.pop()) display(t.toDot()) """ Explanation: Testing The function $\texttt{demo}()$ creates a small ordered binary tree. End of explanation """ import random as rnd t = Set() for k in range(30): k = rnd.randrange(100) t.insert(k) display(t.toDot()) while not t.isEmpty(): print(t.pop(), end=' ') display(t.toDot()) """ Explanation: Let's generate an ordered binary tree with random keys. End of explanation """ t = Set() for k in range(30): t.insert(k) display(t.toDot()) while not t.isEmpty(): print(t.pop(), end=' ') display(t.toDot()) """ Explanation: This tree looks more or less balanced. Lets us try to create a tree by inserting sorted numbers because that resulted in linear complexity for ordered binary trees. End of explanation """ S = Set() for k in range(2, 101): S.insert(k) display(S.toDot()) for i in range(2, 101): for j in range(2, 101): S.delete(i * j) display(S.toDot()) while not S.isEmpty(): print(S.pop(), end=' ') display(S.toDot()) """ Explanation: Next, we compute the set of prime numbers $\leq 100$. Mathematically, this set is given as follows: $$ \bigl{2, \cdots, 100 \bigr} - \bigl{ i \cdot j \bigm| i, j \in {2, \cdots, 100 }\bigr}$$ End of explanation """
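As an optional sanity check (a sketch that is not part of the original notebook), the AVL balancing condition can be verified explicitly after a batch of random insertions. The helper below relies only on the attributes mLeft, mRight, and mHeight and on the methods isEmpty and insert defined above.

import random as rnd

def is_balanced(t):
    # Verify |height(left) - height(right)| <= 1 at every node of t.
    if t.isEmpty():
        return True
    if abs(t.mLeft.mHeight - t.mRight.mHeight) > 1:
        return False
    return is_balanced(t.mLeft) and is_balanced(t.mRight)

t = Set()
for _ in range(1000):
    t.insert(rnd.randrange(10000))
print(is_balanced(t))  # expected output: True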
VVard0g/ThreatHunter-Playbook
docs/notebooks/windows/08_lateral_movement/WIN-200902020333.ipynb
mit
from openhunt.mordorutils import * spark = get_spark() """ Explanation: Remote WMI ActiveScriptEventConsumers Metadata | Metadata | Value | |:------------------|:---| | collaborators | ['@Cyb3rWard0g', '@Cyb3rPandaH'] | | creation date | 2020/09/02 | | modification date | 2020/09/20 | | playbook related | [] | Hypothesis Adversaries might be leveraging WMI ActiveScriptEventConsumers remotely to move laterally in my network. Technical Context One of the components of an Event subscription is the event consumer. It is basically the main action that gets executed when a filter triggers (i.e. monitor for authentication events. if one occurs. trigger the consumer). According to MS Documentation, there are several WMI consumer classes available ActiveScriptEventConsumer -> Executes a predefined script in an arbitrary scripting language when an event is delivered to it. Example -> Running a Script Based on an Event CommandLineEventConsumer -> Launches an arbitrary process in the local system context when an event is delivered to it. Example -> Running a Program from the Command Line Based on an Event LogFileEventConsumer -> Writes customized strings to a text log file when events are delivered to it. Example -> Writing to a Log File Based on an Event NTEventLogEventConsumer -> Logs a specific Message to the Windows event log when an event is delivered to it. Example -> Logging to NT Event Log Based on an Event ScriptingStandardConsumerSetting Provides registration data common to all instances of the ActiveScriptEventConsumer class. SMTPEventConsumer Sends an email Message using SMTP each time an event is delivered to it. Example -> Sending Email Based on an Event The ActiveScriptEventConsumer class allows for the execution of scripting code from either JScript or VBScript engines. Finally, the WMI script host process is %SystemRoot%\system32\wbem\scrcons.exe. Offensive Tradecraft Threat actors can achieve remote code execution by using WMI event subscriptions. Normally, a permanent WMI event subscription is designed to persist and respond to certain events. According to Matt Graeber, if an attacker wanted to execute a single payload however, the respective event consumer would just need to delete its corresponding event filter, consumer, and filter to consumer binding. The advantage of this technique is that the payload runs as SYSTEM, and it avoids having a payload be displayed in plaintext in the presence of command line auditing. Security Datasets | Metadata | Value | |:----------|:----------| | docs | https://securitydatasets.com/notebooks/atomic/windows/lateral_movement/SDWIN-200724174200.html | | link | https://raw.githubusercontent.com/OTRF/Security-Datasets/master/datasets/atomic/windows/lateral_movement/host/covenant_wmi_remote_event_subscription_ActiveScriptEventConsumers.zip | Analytics Initialize Analytics Engine End of explanation """ sd_file = "https://raw.githubusercontent.com/OTRF/Security-Datasets/master/datasets/atomic/windows/lateral_movement/host/covenant_wmi_remote_event_subscription_ActiveScriptEventConsumers.zip" registerMordorSQLTable(spark, sd_file, "sdTable") """ Explanation: Download & Process Security Dataset End of explanation """ df = spark.sql( ''' SELECT EventID, EventType FROM sdTable WHERE Channel = 'Microsoft-Windows-Sysmon/Operational' AND EventID = 20 AND LOWER(Message) Like '%type: script%' ''' ) df.show(10,False) """ Explanation: Analytic I Look for the creation of Event consumers of script type. 
| Data source | Event Provider | Relationship | Event | |:------------|:---------------|--------------|-------| | WMI object | Microsoft-Windows-Sysmon/Operational | User created Wmi consumer | 20 | End of explanation """ df = spark.sql( ''' SELECT EventID, SourceName FROM sdTable WHERE Channel = 'Microsoft-Windows-WMI-Activity/Operational' AND EventID = 5861 AND LOWER(Message) LIKE '%scriptingengine = "vbscript"%' ''' ) df.show(10,False) """ Explanation: Analytic II Look for the creation of Event consumers of script type (i.e vbscript). | Data source | Event Provider | Relationship | Event | |:------------|:---------------|--------------|-------| | WMI object | Microsoft-Windows-WMI-Activity/Operational | Wmi subscription created | 5861 | End of explanation """ df = spark.sql( ''' SELECT ParentImage, Image, CommandLine, ProcessId, ProcessGuid FROM sdTable WHERE Channel = "Microsoft-Windows-Sysmon/Operational" AND EventID = 1 AND Image LIKE '%scrcons%' ''' ) df.show(10,False) """ Explanation: Analytic III Look for any indicators that the WMI script host process %SystemRoot%\system32\wbem\scrcons.exe is created. This is created by svchost.exe. | Data source | Event Provider | Relationship | Event | |:------------|:---------------|--------------|-------| | Process | Microsoft-Windows-Sysmon/Operational | Process created Process | 1 | End of explanation """ df = spark.sql( ''' SELECT ParentProcessName, NewProcessName, CommandLine, NewProcessId FROM sdTable WHERE LOWER(Channel) = "security" AND EventID = 4688 AND NewProcessName LIKE '%scrcons%' ''' ) df.show(10,False) """ Explanation: Analytic IV Look for any indicators that the WMI script host process %SystemRoot%\system32\wbem\scrcons.exe is created. This is created by svchost.exe. | Data source | Event Provider | Relationship | Event | |:------------|:---------------|--------------|-------| | Process | Microsoft-Windows-Security-Auditing | Process created Process | 4688 | End of explanation """ df = spark.sql( ''' SELECT Image, ImageLoaded, Description, ProcessGuid FROM sdTable WHERE Channel = "Microsoft-Windows-Sysmon/Operational" AND EventID = 7 AND LOWER(ImageLoaded) IN ( 'c:\\\windows\\\system32\\\wbem\\\scrcons.exe', 'c:\\\windows\\\system32\\\\vbscript.dll', 'c:\\\windows\\\system32\\\wbem\\\wbemdisp.dll', 'c:\\\windows\\\system32\\\wshom.ocx', 'c:\\\windows\\\system32\\\scrrun.dll' ) ''' ) df.show(10,False) """ Explanation: Analytic V Look for any indicators that the WMI script host process %SystemRoot%\system32\wbem\scrcons.exe is being used. You can do this by looking for a few modules being loaded by a process. 
| Data source | Event Provider | Relationship | Event | |:------------|:---------------|--------------|-------| | Module | Microsoft-Windows-Sysmon/Operational | Process loaded Dll | 7 | End of explanation """ df = spark.sql( ''' SELECT d.`@timestamp`, c.Image, d.DestinationIp, d.ProcessId FROM sdTable d INNER JOIN ( SELECT b.ImageLoaded, a.CommandLine, b.ProcessGuid, a.Image FROM sdTable b INNER JOIN ( SELECT ProcessGuid, CommandLine, Image FROM sdTable WHERE Channel = "Microsoft-Windows-Sysmon/Operational" AND EventID = 1 AND Image LIKE '%scrcons.exe' ) a ON b.ProcessGuid = a.ProcessGuid WHERE b.Channel = "Microsoft-Windows-Sysmon/Operational" AND b.EventID = 7 AND LOWER(b.ImageLoaded) IN ( 'c:\\\windows\\\system32\\\wbem\\\scrcons.exe', 'c:\\\windows\\\system32\\\\vbscript.dll', 'c:\\\windows\\\system32\\\wbem\\\wbemdisp.dll', 'c:\\\windows\\\system32\\\wshom.ocx', 'c:\\\windows\\\system32\\\scrrun.dll' ) ) c ON d.ProcessGuid = c.ProcessGuid WHERE d.Channel = "Microsoft-Windows-Sysmon/Operational" AND d.EventID = 3 ''' ) df.show(10,False) """ Explanation: Analytic VI Look for any indicators that the WMI script host process %SystemRoot%\system32\wbem\scrcons.exe is being used and add some context to it that might not be normal in your environment. You can add network connections context to look for any scrcons.exe reaching out to external hosts over the network. | Data source | Event Provider | Relationship | Event | |:------------|:---------------|--------------|-------| | Process | Microsoft-Windows-Sysmon/Operational | Process created Process | 1 | | Process | Microsoft-Windows-Sysmon/Operational | Process connected to Ip | 3 | | Module | Microsoft-Windows-Sysmon/Operational | Process loaded Dll | 7 | End of explanation """ df = spark.sql( ''' SELECT d.`@timestamp`, d.TargetUserName, c.Image, c.ProcessId FROM sdTable d INNER JOIN ( SELECT b.ImageLoaded, a.CommandLine, b.ProcessGuid, a.Image, b.ProcessId FROM sdTable b INNER JOIN ( SELECT ProcessGuid, CommandLine, Image FROM sdTable WHERE Channel = "Microsoft-Windows-Sysmon/Operational" AND EventID = 1 AND Image LIKE '%scrcons.exe' ) a ON b.ProcessGuid = a.ProcessGuid WHERE b.Channel = "Microsoft-Windows-Sysmon/Operational" AND b.EventID = 7 AND LOWER(b.ImageLoaded) IN ( 'c:\\\windows\\\system32\\\wbem\\\scrcons.exe', 'c:\\\windows\\\system32\\\\vbscript.dll', 'c:\\\windows\\\system32\\\wbem\\\wbemdisp.dll', 'c:\\\windows\\\system32\\\wshom.ocx', 'c:\\\windows\\\system32\\\scrrun.dll' ) ) c ON split(d.ProcessId, '0x')[1] = LOWER(hex(CAST(c.ProcessId as INT))) WHERE LOWER(d.Channel) = "security" AND d.EventID = 4624 AND d.LogonType = 3 ''' ) df.show(10,False) """ Explanation: Analytic VII One of the main goals is to find context that could tell us that scrcons.exe was used over the network (Lateral Movement). One way would be to add a network logon session as context to some of the previous events. 
| Data source | Event Provider | Relationship | Event | |:------------|:---------------|--------------|-------| | Process | Microsoft-Windows-Sysmon/Operational | Process created Process | 1 | | Module | Microsoft-Windows-Sysmon/Operational | Process loaded Dll | 7 | | Authentication log | Microsoft-Windows-Security-Auditing | User authenticated Host | 4624 | End of explanation """ df = spark.sql( ''' SELECT `@timestamp`, TargetUserName,ImpersonationLevel, LogonType, ProcessName FROM sdTable WHERE LOWER(Channel) = "security" AND EventID = 4624 AND LogonType = 3 AND ProcessName LIKE '%scrcons.exe' ''' ) df.show(10,False) """ Explanation: Analytic VIII One of the main goals is to find context that could tell us that scrcons.exe was used over the network (Lateral Movement). One way would be to add a network logon session as context to some of the previous events. | Data source | Event Provider | Relationship | Event | |:------------|:---------------|--------------|-------| | Authentication log | Microsoft-Windows-Security-Auditing | User authenticated Host | 4624 | End of explanation """
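As an optional follow-up (a sketch, not part of the original analytic set), hits from any of the queries above can be pulled into pandas for quick triage, for example to count matching events per host. The Hostname column name is an assumption about the dataset schema; adjust it to the fields available in your data.

hits = spark.sql(
    '''
    SELECT Hostname, EventID
    FROM sdTable
    WHERE Channel = "Microsoft-Windows-Sysmon/Operational"
        AND EventID = 20
    '''
)
# Analytic hits are usually small result sets, so toPandas() is safe here.
summary = hits.toPandas().groupby(['Hostname', 'EventID']).size()
print(summary)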
mne-tools/mne-tools.github.io
0.12/_downloads/plot_sensor_connectivity.ipynb
bsd-3-clause
# Author: Martin Luessi <mluessi@nmr.mgh.harvard.edu> # # License: BSD (3-clause) import numpy as np from scipy import linalg import mne from mne import io from mne.connectivity import spectral_connectivity from mne.datasets import sample print(__doc__) """ Explanation: Compute all-to-all connectivity in sensor space Computes the Phase Lag Index (PLI) between all gradiometers and shows the connectivity in 3D using the helmet geometry. The left visual stimulation data are used which produces strong connectvitiy in the right occipital sensors. End of explanation """ data_path = sample.data_path() raw_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw.fif' event_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw-eve.fif' # Setup for reading the raw data raw = io.read_raw_fif(raw_fname) events = mne.read_events(event_fname) # Add a bad channel raw.info['bads'] += ['MEG 2443'] # Pick MEG gradiometers picks = mne.pick_types(raw.info, meg='grad', eeg=False, stim=False, eog=True, exclude='bads') # Create epochs for the visual condition event_id, tmin, tmax = 3, -0.2, 0.5 epochs = mne.Epochs(raw, events, event_id, tmin, tmax, picks=picks, baseline=(None, 0), reject=dict(grad=4000e-13, eog=150e-6)) # Compute connectivity for band containing the evoked response. # We exclude the baseline period fmin, fmax = 3., 9. sfreq = raw.info['sfreq'] # the sampling frequency tmin = 0.0 # exclude the baseline period con, freqs, times, n_epochs, n_tapers = spectral_connectivity( epochs, method='pli', mode='multitaper', sfreq=sfreq, fmin=fmin, fmax=fmax, faverage=True, tmin=tmin, mt_adaptive=False, n_jobs=1) # the epochs contain an EOG channel, which we remove now ch_names = epochs.ch_names idx = [ch_names.index(name) for name in ch_names if name.startswith('MEG')] con = con[idx][:, idx] # con is a 3D array where the last dimension is size one since we averaged # over frequencies in a single band. 
Here we make it 2D con = con[:, :, 0] # Now, visualize the connectivity in 3D from mayavi import mlab # noqa mlab.figure(size=(600, 600), bgcolor=(0.5, 0.5, 0.5)) # Plot the sensor locations sens_loc = [raw.info['chs'][picks[i]]['loc'][:3] for i in idx] sens_loc = np.array(sens_loc) pts = mlab.points3d(sens_loc[:, 0], sens_loc[:, 1], sens_loc[:, 2], color=(1, 1, 1), opacity=1, scale_factor=0.005) # Get the strongest connections n_con = 20 # show up to 20 connections min_dist = 0.05 # exclude sensors that are less than 5cm apart threshold = np.sort(con, axis=None)[-n_con] ii, jj = np.where(con >= threshold) # Remove close connections con_nodes = list() con_val = list() for i, j in zip(ii, jj): if linalg.norm(sens_loc[i] - sens_loc[j]) > min_dist: con_nodes.append((i, j)) con_val.append(con[i, j]) con_val = np.array(con_val) # Show the connections as tubes between sensors vmax = np.max(con_val) vmin = np.min(con_val) for val, nodes in zip(con_val, con_nodes): x1, y1, z1 = sens_loc[nodes[0]] x2, y2, z2 = sens_loc[nodes[1]] points = mlab.plot3d([x1, x2], [y1, y2], [z1, z2], [val, val], vmin=vmin, vmax=vmax, tube_radius=0.001, colormap='RdBu') points.module_manager.scalar_lut_manager.reverse_lut = True mlab.scalarbar(title='Phase Lag Index (PLI)', nb_labels=4) # Add the sensor names for the connections shown nodes_shown = list(set([n[0] for n in con_nodes] + [n[1] for n in con_nodes])) for node in nodes_shown: x, y, z = sens_loc[node] mlab.text3d(x, y, z, raw.ch_names[picks[node]], scale=0.005, color=(0, 0, 0)) view = (-88.7, 40.8, 0.76, np.array([-3.9e-4, -8.5e-3, -1e-2])) mlab.view(*view) """ Explanation: Set parameters End of explanation """
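As a small optional addition (not part of the original example), the strongest connections can also be listed as channel-name pairs, reusing the con_val and con_nodes lists built above and the same picks-based channel lookup used for the 3D labels.

strongest = sorted(zip(con_val, con_nodes), reverse=True)
for val, (i, j) in strongest[:10]:
    name_i = raw.ch_names[picks[i]]
    name_j = raw.ch_names[picks[j]]
    print('%s - %s : PLI = %.2f' % (name_i, name_j, val))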
SamLau95/nbinteract
docs/notebooks/examples/examples_probability_distribution_plots.ipynb
bsd-3-clause
import numpy as np
import nbinteract
from scipy import stats

# Although this function doesn't appear necessary, the scipy stats functions
# don't explicitly require n and p as args which causes issues with interaction
def binom_pmf(xs, n, p):
    return stats.binom.pmf(xs, n, p)

options = {
    'xlabel': 'X',
    'ylabel': 'probability',
    'ylim': (0, 1),
}

nbinteract.bar(np.arange(21), binom_pmf, options=options, n=(0, 20), p=(0.0, 1.0))
"""
Explanation: Probability Distribution Plots
This example shows how to create interactive probability distribution demos using the nbinteract.bar method.
Binomial Distribution
End of explanation
"""
def geom_pmf(xs, p):
    return stats.geom.pmf(xs, p)

nbinteract.bar(np.arange(20), geom_pmf, options=options, p=(0.0,1.0))
"""
Explanation: Geometric Distribution
End of explanation
"""
def poisson_pmf(xs, mu):
    return stats.poisson.pmf(xs, mu)

nbinteract.bar(np.arange(20), poisson_pmf, options=options, mu=(0, 10))
"""
Explanation: Poisson Distribution
End of explanation
"""
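The same pattern extends to other discrete distributions. As one optional extra (not part of the original set), here is the negative binomial PMF, reusing the options dictionary and the stats, np, and nbinteract names from above.

def nbinom_pmf(xs, n, p):
    return stats.nbinom.pmf(xs, n, p)

nbinteract.bar(np.arange(30), nbinom_pmf, options=options, n=(1, 10), p=(0.0, 1.0))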
coryandrewtaylor/conll10
CoNLL10 output with SpaCy.ipynb
gpl-3.0
import spacy """ Explanation: Dependency parsing with spaCy This script takes Unicode plain text and outputs its dependencies in CoNLL10 format. It was originally written to prepare input files for named/non-named entity extraction with xrenner. For installation instructions for spaCy, see https://spacy.io/docs#getting-started. End of explanation """ nlp = spacy.load('en') """ Explanation: Load the English tagger Note: Loading the tagger is expensive. The documentation says it can take 10-20 seconds and 2-3 GB of RAM. End of explanation """ text = u'''1. Cato's family got its first lustre and fame from his great-grandfather Cato (a man whose virtue gained him the greatest reputation and influence among the Romans, as has been written in his Life), but the death of both parents left him an orphan, together with his brother Caepio and his sister Porcia. Cato had also a half-sister, Servilia, the daughter of his mother.1 All these children were brought up in the home of Livius Drusus, their uncle on the mother's side, who at that time was a leader in the conduct of public affairs; for he was a most powerful speaker, in general a man of the greatest discretion, and yielded to no Roman in dignity of purpose. [2] We are told that from his very childhood Cato displayed, in speech, in countenance, and in his childish sports, a nature that was inflexible, imperturbable, and altogether steadfast. He set out to accomplish his purposes with a vigour beyond his years, and while he was harsh and repellent to those who would flatter him, he was still more masterful towards those who tried to frighten him. It was altogether difficult to make him laugh, although once in a while he relaxed his features so far as to smile; and he was not quickly nor easily moved to anger, though once angered he was inexorable.''' doc = nlp(text) """ Explanation: Give spaCy some input text, then process it. Note: SpaCy input has to be in Unicode. End of explanation """ for sent in doc.sents: # Create lookup dict for token IDs. ids = {} for i, token in enumerate(sent): ids[token.idx] = i+1 for token in sent: # Clean up token attributes token_id = str(ids[token.idx]).strip() token_text = str(token).strip() lemma = str(token.lemma_).strip() pos_tag = str(token.tag_).strip() depend = str(token.dep_).strip() # Set head ID correctly for root of sentence. if token.dep_ == 'ROOT': head_id = str(0) else: head_id = str(ids[token.head.idx]).strip() # CoNLL10 output # Comments below are modified from https://corpling.uis.georgetown.edu/xrenner/doc/using.html#input-format print(token_id + '\t' + # token ID w/in sentence token_text + '\t' + # token text lemma + '\t' + # lemmatized token pos_tag + '\t' + # part of speech tag for token pos_tag + '\t' + # part of speech tag for token '_' + '\t' + # placeholder for morphological information head_id + '\t' + # ID of head token depend + '\t' + # dependency function '_' + '\t' + '_') # two unused columns """ Explanation: CoNLL10 output SpaCy's output--in particular, its token IDs--takes some massaging in order to produce a well-formed CoNLL10 document. The column layout is described here. End of explanation """
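As an optional variation (a sketch, not part of the original script, and assuming Python 3), the same rows can be collected and written to a file instead of printed, with a blank line between sentences as is conventional for CoNLL-style output. The output filename is just an example.

rows = []
for sent in doc.sents:
    ids = {token.idx: i + 1 for i, token in enumerate(sent)}
    for token in sent:
        head_id = 0 if token.dep_ == 'ROOT' else ids[token.head.idx]
        rows.append('\t'.join([str(ids[token.idx]), str(token).strip(), token.lemma_,
                               token.tag_, token.tag_, '_', str(head_id),
                               token.dep_, '_', '_']))
    rows.append('')  # blank line between sentences

with open('cato.conll10', 'w') as f:  # example output path
    f.write('\n'.join(rows))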
statsmodels/statsmodels.github.io
v0.13.0/examples/notebooks/generated/statespace_varmax.ipynb
bsd-3-clause
%matplotlib inline import numpy as np import pandas as pd import statsmodels.api as sm import matplotlib.pyplot as plt dta = sm.datasets.webuse('lutkepohl2', 'https://www.stata-press.com/data/r12/') dta.index = dta.qtr dta.index.freq = dta.index.inferred_freq endog = dta.loc['1960-04-01':'1978-10-01', ['dln_inv', 'dln_inc', 'dln_consump']] """ Explanation: VARMAX models This is a brief introduction notebook to VARMAX models in statsmodels. The VARMAX model is generically specified as: $$ y_t = \nu + A_1 y_{t-1} + \dots + A_p y_{t-p} + B x_t + \epsilon_t + M_1 \epsilon_{t-1} + \dots M_q \epsilon_{t-q} $$ where $y_t$ is a $\text{k_endog} \times 1$ vector. End of explanation """ exog = endog['dln_consump'] mod = sm.tsa.VARMAX(endog[['dln_inv', 'dln_inc']], order=(2,0), trend='n', exog=exog) res = mod.fit(maxiter=1000, disp=False) print(res.summary()) """ Explanation: Model specification The VARMAX class in statsmodels allows estimation of VAR, VMA, and VARMA models (through the order argument), optionally with a constant term (via the trend argument). Exogenous regressors may also be included (as usual in statsmodels, by the exog argument), and in this way a time trend may be added. Finally, the class allows measurement error (via the measurement_error argument) and allows specifying either a diagonal or unstructured innovation covariance matrix (via the error_cov_type argument). Example 1: VAR Below is a simple VARX(2) model in two endogenous variables and an exogenous series, but no constant term. Notice that we needed to allow for more iterations than the default (which is maxiter=50) in order for the likelihood estimation to converge. This is not unusual in VAR models which have to estimate a large number of parameters, often on a relatively small number of time series: this model, for example, estimates 27 parameters off of 75 observations of 3 variables. End of explanation """ ax = res.impulse_responses(10, orthogonalized=True, impulse=[1, 0]).plot(figsize=(13,3)) ax.set(xlabel='t', title='Responses to a shock to `dln_inv`'); """ Explanation: From the estimated VAR model, we can plot the impulse response functions of the endogenous variables. End of explanation """ mod = sm.tsa.VARMAX(endog[['dln_inv', 'dln_inc']], order=(0,2), error_cov_type='diagonal') res = mod.fit(maxiter=1000, disp=False) print(res.summary()) """ Explanation: Example 2: VMA A vector moving average model can also be formulated. Below we show a VMA(2) on the same data, but where the innovations to the process are uncorrelated. In this example we leave out the exogenous regressor but now include the constant term. End of explanation """ mod = sm.tsa.VARMAX(endog[['dln_inv', 'dln_inc']], order=(1,1)) res = mod.fit(maxiter=1000, disp=False) print(res.summary()) """ Explanation: Caution: VARMA(p,q) specifications Although the model allows estimating VARMA(p,q) specifications, these models are not identified without additional restrictions on the representation matrices, which are not built-in. For this reason, it is recommended that the user proceed with error (and indeed a warning is issued when these models are specified). Nonetheless, they may in some circumstances provide useful information. End of explanation """
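As an optional follow-up (not shown in the original example), every fitted results object supports out-of-sample forecasting through the standard state space results API. For instance, a four-quarter-ahead forecast from the last fit above (the VARMA(1,1) model, which has no exogenous regressors):

# Point forecasts for the next 4 quarters from the last fitted model.
print(res.forecast(steps=4))

# Confidence intervals for the same horizon.
print(res.get_forecast(steps=4).conf_int())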
google/applied-machine-learning-intensive
content/03_regression/04_polynomial_regression/colab.ipynb
apache-2.0
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Explanation: <a href="https://colab.research.google.com/github/google/applied-machine-learning-intensive/blob/master/content/03_regression/04_polynomial_regression/colab.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
Copyright 2020 Google LLC.
End of explanation
"""
import numpy as np
import matplotlib.pyplot as plt

num_items = 100

np.random.seed(seed=420)

X = np.random.randn(num_items, 1)

# These coefficients are chosen arbitrarily.
y = 0.6*(X**2) - 0.4*X + 1.3

plt.plot(X, y, 'b.')
plt.show()
"""
Explanation: Polynomial Regression and Overfitting
So far in this course, we have dealt exclusively with linear models. These have all been "straight-line" models where we attempt to draw a straight line that fits a regression. Today we will start building curved-line models based on polynomial equations.
Generating Sample Data
Let's start by generating some data based on a second-degree polynomial.
End of explanation
"""
import numpy as np
import matplotlib.pyplot as plt

num_items = 100

np.random.seed(seed=420)

X = np.random.randn(num_items, 1)

# Create some randomness.
randomness = np.random.randn(num_items, 1) / 2

# This is the same equation as the plot above, with added randomness.
y = 0.6*(X**2) - 0.4*X + 1.3 + randomness

X_line = np.linspace(X.min(), X.max(), num=num_items)
y_line = 0.6*(X_line**2) - 0.4*X_line + 1.3

plt.plot(X, y, 'b.')
plt.plot(X_line, y_line, 'r-')
plt.show()
"""
Explanation: Let's add some randomness to create a more realistic dataset and re-plot the randomized data points and the fit line.
End of explanation
"""
from sklearn.preprocessing import PolynomialFeatures

pf = PolynomialFeatures(degree=2, include_bias=False)
X_poly = pf.fit_transform(X)

X.shape, X_poly.shape
"""
Explanation: That looks much better! Now we can see that a 2-degree polynomial function fits this data reasonably well.
Polynomial Fitting
We can now see a pretty obvious 2-degree polynomial that fits the scatter plot. Scikit-learn offers a PolynomialFeatures class that handles polynomial combinations for a linear model. In this case, we know that a 2-degree polynomial is a good fit since the data was generated from a polynomial curve. Let's see if the model works.
We begin by creating a PolynomialFeatures instance of degree 2 with include_bias=False.
End of explanation
"""
from sklearn.linear_model import LinearRegression

lin_reg = LinearRegression()
lin_reg.fit(X_poly, y)

lin_reg.intercept_, lin_reg.coef_
"""
Explanation: You might be wondering what the include_bias parameter is. By default, it is True, in which case it forces the first exponent to be 0. This adds a constant bias term to the equation. When we ask for no bias, we start our exponents at 1 instead of 0. This preprocessor generates a new feature matrix consisting of all polynomial combinations of the features. Notice that the input shape of (100, 1) becomes (100, 2) after transformation.
In this simple case, we doubled the number of features since we asked for a 2-degree polynomial and had one input feature. The number of generated features grows exponentially as the number of features and polynomial degrees increases. Model Fitting We can now fit the model by passing our polynomial preprocessing data to the linear regressor. How close did the intercept and coefficient match the values in the function we used to generate our data? End of explanation """ np.random.seed(seed=420) # Create 100 even-spaced x-values. X_line_fitted = np.linspace(X.min(), X.max(), num=100) # Start our equation with the intercept. y_line_fitted = lin_reg.intercept_ # For each exponent, raise the X value to that exponent and multiply it by the # appropriate coefficient for i in range(len(pf.powers_)): exponent = pf.powers_[i][0] y_line_fitted = y_line_fitted + \ lin_reg.coef_[0][i] * (X_line_fitted**exponent) plt.plot(X_line_fitted, y_line_fitted, 'g-') plt.plot(X_line, y_line, 'r-') plt.plot(X, y, 'b.') plt.show() """ Explanation: Visualization We can plot our fitted line against the equation we used to generate the data. The fitted line is green, and the actual curve is red. End of explanation """ np.random.seed(seed=420) # Create 50 points from a linear dataset with randomness. num_items = 50 X = 6 * np.random.rand(num_items, 1) y = X + 2 + np.random.randn(num_items, 1) X_line = np.array([X.min(), X.max()]) y_line = X_line + 2 plt.plot(X_line, y_line, 'r-') plt.plot(X, y, 'b.') plt.show() """ Explanation: Overfitting When using polynomial regression, it can be easy to overfit the data so that it performs well on the training data but doesn't perform well in the real world. To understand overfitting we will create a fake dataset generated off of a linear equation, but we will use a polynomial regression as the model. End of explanation """ from sklearn.preprocessing import PolynomialFeatures from sklearn.linear_model import LinearRegression np.random.seed(seed=420) poly_features = PolynomialFeatures(degree=10, include_bias=False) X_poly = poly_features.fit_transform(X) regression = LinearRegression() regression.fit(X_poly, y) """ Explanation: Let's now create a 10 degree polynomial to fit the linear data and fit the model. End of explanation """ poly_features.powers_ """ Explanation: Visualization Let's draw the polynomial line that we fit to the data. To draw the line, we need to execute the 10 degree polynomial equation. $$ y = k_0 + k_1x^1 + k_2x^2 + k_3x^3 + ... + k_9x^9 + k_{10}x^{10} $$ Coding the above equation by hand is tedious and error-prone. It also makes it difficult to change the degree of the polynomial we are fitting. Let's see if there is a way to write the code more dynamically, using the PolynomialFeatures and LinearRegression functions. The PolynomialFeatures class provides us with a list of exponents that we can use for each portion of the polynomial equation. End of explanation """ regression.coef_ """ Explanation: The LinearRegression class provides us with a list of coefficients that correspond to the powers provided by PolynomialFeatures. End of explanation """ regression.intercept_ """ Explanation: It also provides an intercept. End of explanation """ np.random.seed(seed=420) # Create 100 even-spaced x-values. X_line_fitted = np.linspace(X.min(), X.max(), num=100) # Start our equation with the intercept. 
y_line_fitted = regression.intercept_ # For each exponent, raise the X value to that exponent and multiply it by the # appropriate coefficient for i in range(len(poly_features.powers_)): exponent = poly_features.powers_[i][0] y_line_fitted = y_line_fitted + \ regression.coef_[0][i] * (X_line_fitted**exponent) """ Explanation: Having this information, we can take a set of $X$ values (in the code below we use 100), then run our equation on those values. End of explanation """ plt.plot(X_line, y_line, 'r-') plt.plot(X_line_fitted, y_line_fitted, 'g-') plt.plot(X, y, 'b.') plt.show() """ Explanation: We can now plot the data points, the actual line used to generate them, and our fitted model. End of explanation """ from sklearn.preprocessing import PolynomialFeatures from sklearn.linear_model import LinearRegression poly_features = PolynomialFeatures(degree=2, include_bias=False) X_poly = poly_features.fit_transform(X) regression = LinearRegression() regression.fit(X_poly, y) X_line_fitted = np.linspace(X.min(), X.max(), num=100) y_line_fitted = regression.intercept_ for i in range(len(poly_features.powers_)): exponent = poly_features.powers_[i][0] y_line_fitted = y_line_fitted + \ regression.coef_[0][i] * (X_line_fitted**exponent) plt.plot(X_line, y_line, 'r-') plt.plot(X_line_fitted, y_line_fitted, 'g-') plt.plot(X, y, 'b.') plt.show() """ Explanation: Notice how our line is very wavy, and it spikes up and down to pass through specific data points. (This is especially true for the lowest and highest $x$-values, where the curve passes through them exactly.) This is a sign of overfitting. The line fits the training data reasonably well, but it may not be as useful on new data. Using a Simpler Model The most obvious way to prevent overfitting in this example is to simply reduce the degree of the polynomial. The code below uses a 2-degree polynomial and seems to fit the data much better. A linear model would work well too. End of explanation """ from sklearn.linear_model import Lasso poly_features = PolynomialFeatures(degree=10, include_bias=False) X_poly = poly_features.fit_transform(X) lasso_reg = Lasso(alpha=5.0) lasso_reg.fit(X_poly, y) X_line_fitted = np.linspace(X.min(), X.max(), num=100) y_line_fitted = lasso_reg.intercept_ for i in range(len(poly_features.powers_)): exponent = poly_features.powers_[i][0] y_line_fitted = y_line_fitted + lasso_reg.coef_[i] * (X_line_fitted**exponent) plt.plot(X_line, y_line, 'r-') plt.plot(X_line_fitted, y_line_fitted, 'g-') plt.plot(X, y, 'b.') plt.show() """ Explanation: Lasso Regularization It is not always so clear what the "simpler" model choice is. Often, you will have to rely on regularization methods. A regularization is a method that penalizes large coefficients, with the aim of shrinking unnecessary coefficients to zero. Least Absolute Shrinkage and Selection Operator (Lasso) regularization, also called L1 regularization, is a regularization method that adds the sum of the absolute values of the coefficients as a penalty in a cost function. In scikit-learn, we can use the Lasso model, which performs a linear regression with an L1 regression penalty. In the resultant graph, you can see that the regression smooths out our polynomial curve quite a bit despite the polynomial being a degree 10 polynomial. Note that Lasso regression can make the impact of less important features completely disappear. 
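To make the penalty concrete (this is the standard scikit-learn formulation of the objective, stated here for reference), the Lasso model above minimizes
$$ \frac{1}{2n} \sum_{i=1}^{n} \left( y_i - \hat{y}_i \right)^2 + \alpha \sum_{j} \left| \theta_j \right| $$
where $n$ is the number of samples, $\theta_j$ are the model coefficients, and $\alpha$ is the alpha argument passed to Lasso; larger values of $\alpha$ shrink more coefficients all the way to zero.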
End of explanation """ from sklearn.linear_model import Ridge poly_features = PolynomialFeatures(degree=10, include_bias=False) X_poly = poly_features.fit_transform(X) ridge_reg = Ridge(alpha=0.5) ridge_reg.fit(X_poly, y) X_line_fitted = np.linspace(X.min(), X.max(), num=100) y_line_fitted = ridge_reg.intercept_ for i in range(len(poly_features.powers_)): exponent = poly_features.powers_[i][0] y_line_fitted = y_line_fitted + ridge_reg.coef_[0][i] * (X_line_fitted**exponent) plt.plot(X_line, y_line, 'r-') plt.plot(X_line_fitted, y_line_fitted, 'g-') plt.plot(X, y, 'b.') plt.show() """ Explanation: Ridge Regularization Similar to Lasso regularization, Ridge regularization adds a penalty to the cost function of a model. In the case of Ridge, also called L2 regularization, the penalty is the sum of squares of the coefficients. Again, we can see that the regression smooths out the curve of our 10-degree polynomial. End of explanation """ from sklearn.linear_model import ElasticNet poly_features = PolynomialFeatures(degree=10, include_bias=False) X_poly = poly_features.fit_transform(X) elastic_reg = ElasticNet(alpha=2.0, l1_ratio=0.5) elastic_reg.fit(X_poly, y) X_line_fitted = np.linspace(X.min(), X.max(), num=100) y_line_fitted = elastic_reg.intercept_ for i in range(len(poly_features.powers_)): exponent = poly_features.powers_[i][0] y_line_fitted = y_line_fitted + \ elastic_reg.coef_[i] * (X_line_fitted**exponent) plt.plot(X_line, y_line, 'r-') plt.plot(X_line_fitted, y_line_fitted, 'g-') plt.plot(X, y, 'b.') plt.show() """ Explanation: ElasticNet Regularization Another common form of regularization is ElasticNet regularization. This regularization method combines the concepts of L1 and L2 regularization by applying a penalty containing both a squared value and an absolute value. End of explanation """ from sklearn.datasets import load_diabetes import numpy as np import pandas as pd data = load_diabetes() df = pd.DataFrame(data.data, columns=data.feature_names) df['progression'] = data.target df.describe() """ Explanation: Other Strategies Aside from regularization, there are other strategies that can be used to prevent overfitting. These include: Early stopping [Cross-validation](https://en.wikipedia.org/wiki/Cross-validation_(statistics) Ensemble methods Simplifying your model Removing features Exercises For these exercises we will work with the diabetes dataset that comes with scikit-learn. The data contains the following features: age sex body mass index (bmi) average blood pressure (bp) It also contains six measures of blood serum, s1 through s6. The target is a numeric assessment of the progression of the disease over the course of a year. The data has been standardized. End of explanation """ import matplotlib.pyplot as plt plt.plot(df['bmi'], df['bp'], 'b.') plt.show() """ Explanation: Let's plot how body mass index relates to blood pressure. End of explanation """ # Your code goes here """ Explanation: Exercise 1: Polynomial Regression Let's create a model to see if we can map body mass index to blood pressure. Create a 10-degree polynomial preprocessor for our regression Create a linear regression model Fit and transform the bmi values with the polynomial features preprocessor Fit the transformed data using the linear regression Plot the fitted line over a scatter plot of the data points Student Solution End of explanation """ # Your code goes here """ Explanation: Exercise 2: Regularization Your model from exercise one likely looked like it overfit. 
Experiment with the Lasso, Ridge, and/or ElasticNet classes in the place of the LinearRegression. Adjust the parameters for whichever regularization class you use until you create a line that doesn't look to be under- or over-fitted. Student Solution End of explanation """ # Your code goes here. """ Explanation: Exercise 3: Other Models Experiment with the BayesianRidge. Does its fit line look better or worse than your other models? Student Solution End of explanation """
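The cells above repeat the same fit-and-plot pattern several times. As an optional refactor (a sketch, not part of the exercises or of their solutions), that pattern can be wrapped in a single helper that accepts any scikit-learn regressor; the commented usage line is only an illustration and assumes the df loaded above.

def fit_and_plot(regressor, X, y, degree=10, num_points=100):
    # Fit `regressor` on polynomial features of X and plot the fitted curve.
    poly = PolynomialFeatures(degree=degree, include_bias=False)
    regressor.fit(poly.fit_transform(X), y)
    X_line = np.linspace(X.min(), X.max(), num=num_points).reshape(-1, 1)
    plt.plot(X_line, regressor.predict(poly.transform(X_line)), 'g-')
    plt.plot(X, y, 'b.')
    plt.show()

# Example usage on the diabetes data from the exercises:
# fit_and_plot(Ridge(alpha=0.5), df[['bmi']].values, df['bp'].values)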
IIPBC/Material
Notebooks/Python BootCamp 2017 - A example of a notebook.ipynb
mit
import numpy

x = numpy.arange(0, 100, 0.1)
y = numpy.cos(x)
"""
Explanation: Sample Notebook
This Jupyter Notebook is intended to be an example with some references. Jupyter uses Markdown syntax and accepts LaTeX and HTML code. With this, one can easily write bold or italic words.
One can also type some code inline or in a code block like the one below:
Python
import numpy
x = numpy.arange(10)
print(x)
One can write formulas using LaTeX syntax, like $\cos \left( x \right) = \frac{X}{Y}$, or create numbered lists like:
Item 1
Item 2
Item 3
Or unordered lists:
Item 1
Item 2
Item 3
<h1>This is a title in HTML</h1>
<p>So you can also edit your notebook using HTML syntax if you want.</p>
<hr>
Of course, you can run Python code too.
End of explanation
"""
import matplotlib.pyplot as plt

plt.plot(x, y)
"""
Explanation: There is a bug in Matplotlib that prevents it from working with Jupyter under some backends. When one tries to make a plot, an error like the following may appear:
End of explanation
"""
%matplotlib inline
import matplotlib.pyplot as plt

plt.plot(x, y)
"""
Explanation: If something like that happens to you, simply add %matplotlib inline before importing it.
End of explanation
"""
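As a small optional extension of the example above (not part of the original bootcamp notebook), the same figure can be labeled and saved to disk; the filename is arbitrary.

plt.plot(x, y)
plt.xlabel('x')
plt.ylabel('cos(x)')
plt.title('Cosine curve from the example above')
plt.savefig('cosine_example.png')  # example filename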
mne-tools/mne-tools.github.io
0.14/_downloads/plot_ssp_projs_sensitivity_map.ipynb
bsd-3-clause
# Author: Alexandre Gramfort <alexandre.gramfort@telecom-paristech.fr> # # License: BSD (3-clause) import matplotlib.pyplot as plt from mne import read_forward_solution, read_proj, sensitivity_map from mne.datasets import sample print(__doc__) data_path = sample.data_path() subjects_dir = data_path + '/subjects' fname = data_path + '/MEG/sample/sample_audvis-meg-eeg-oct-6-fwd.fif' ecg_fname = data_path + '/MEG/sample/sample_audvis_ecg-proj.fif' fwd = read_forward_solution(fname, surf_ori=True) projs = read_proj(ecg_fname) # take only one projection per channel type projs = projs[::2] # Compute sensitivity map ssp_ecg_map = sensitivity_map(fwd, ch_type='grad', projs=projs, mode='angle') """ Explanation: Sensitivity map of SSP projections This example shows the sources that have a forward field similar to the first SSP vector correcting for ECG. End of explanation """ plt.hist(ssp_ecg_map.data.ravel()) plt.show() args = dict(clim=dict(kind='value', lims=(0.2, 0.6, 1.)), smoothing_steps=7, hemi='rh', subjects_dir=subjects_dir) ssp_ecg_map.plot(subject='sample', time_label='ECG SSP sensitivity', **args) """ Explanation: Show sensitivity map End of explanation """
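As an optional follow-up (not part of the original example), the sensitivity values can also be summarized numerically, which complements the histogram above; the commented save call writes standard .stc files under an example name.

print('ECG SSP sensitivity: min=%0.2f, max=%0.2f, mean=%0.2f'
      % (ssp_ecg_map.data.min(), ssp_ecg_map.data.max(), ssp_ecg_map.data.mean()))
# ssp_ecg_map.save('ssp_ecg_sensitivity')  # example output name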
turi-code/tutorials
webinars/product-reviews/text_demo.ipynb
apache-2.0
import graphlab as gl from graphlab.toolkits.text_analytics import trim_rare_words, split_by_sentence, extract_part_of_speech, stopwords, PartOfSpeech def nlp_pipeline(reviews, title, aspects): print(title) print('1. Get reviews for this product') reviews = reviews.filter_by(title, 'name') print('2. Splitting reviews into sentences') reviews['sentences'] = split_by_sentence(reviews['review']) sentences = reviews.stack('sentences', 'sentence').dropna() print('3. Tagging relevant reviews') tags = gl.SFrame({'tag': aspects}) tagger_model = gl.data_matching.autotagger.create(tags, verbose=False) tagged = tagger_model.tag(sentences, query_name='sentence', similarity_threshold=.3, verbose=False)\ .join(sentences, on='sentence') print('4. Extracting adjectives') tagged['cleaned'] = trim_rare_words(tagged['sentence'], stopwords=list(stopwords())) tagged['adjectives'] = extract_part_of_speech(tagged['cleaned'], [PartOfSpeech.ADJ]) print('5. Predicting sentence-level sentiment') model = gl.sentiment_analysis.create(tagged, features=['review']) tagged['sentiment'] = model.predict(tagged) return tagged reviews = gl.SFrame('amazon_baby.gl') reviews from helper_util import * """ Explanation: Analyzing unstructured text in product review data It's common for companies to have useful data hidden in large volumes of text: online reviews social media posts and tweets interactions with customers, such as emails and call center transcripts For example, when shopping it can be challenging to decide between products with the same star rating. When this happens, shoppers often sift through the raw text of reviews to understand the strengths and weaknesses of each option. <img src="ItemC.png"> <img src="ItemD.png"> In this notebook we seek to automate the task of determining product strengths and weaknesses from review text. splitting Amazon review text into sentences and applying a sentiment analysis model tagging documents that mention aspects of interest extract adjectives from raw text, and compare their use in positive and negative reviews summarizing the use of adjectives for tagged documents GraphLab Create includes feature engineering objects that leverage spaCy, a high performance NLP package. Here we use it for extracting parts of speech and parsing reviews into sentences. 
End of explanation """ aspects = ['audio', 'price', 'signal', 'range', 'battery life'] reviews = search(reviews, 'monitor') reviews """ Explanation: Focus on chosen aspects about baby monitors End of explanation """ item_a = 'Infant Optics DXR-5 2.4 GHz Digital Video Baby Monitor with Night Vision' reviews_a = nlp_pipeline(reviews, item_a, aspects) reviews_a """ Explanation: Process reviews for the most common product End of explanation """ dropdown = get_dropdown(reviews) display(dropdown) item_b = dropdown.value reviews_b = nlp_pipeline(reviews, item_b, aspects) counts, sentiment, adjectives = get_comparisons(reviews_a, reviews_b, item_a, item_b, aspects) """ Explanation: Comparing to another product End of explanation """ counts """ Explanation: Comparing the number of sentences that mention each aspect End of explanation """ sentiment """ Explanation: Comparing the sentence-level sentiment for each aspect of each product End of explanation """ adjectives """ Explanation: Comparing the use of adjectives for each aspect End of explanation """ good, bad = get_extreme_sentences(reviews_a) """ Explanation: Investigating good and bad sentences End of explanation """ print_sentences(good['highlighted']) """ Explanation: Print good sentences for the first item, where adjectives and aspects are highlighted. End of explanation """ print_sentences(bad['highlighted']) """ Explanation: Print bad sentences for the first item, where adjectives and aspects are highlighted. End of explanation """ service = gl.deploy.predictive_service.load("s3://gl-demo-usw2/predictive_service/demolab/ps-1.8.5") service.get_predictive_objects_status() def word_count(text): sa = gl.SArray([text]) sa = gl.text_analytics.count_words(sa) return sa[0] service.update('chris_bow', word_count) service.apply_changes() service.query('chris_bow', text=["It's a beautiful day in the neighborhood. Beautiful day for a neighbor."]) """ Explanation: Deployment End of explanation """
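As an optional summary step (a sketch: gl.aggregate is part of the GraphLab Create API, and the tag and sentiment column names are assumed from the nlp_pipeline output above), the sentence-level results can be aggregated per aspect directly on the SFrame.

summary_a = reviews_a.groupby('tag', {
    'mean_sentiment': gl.aggregate.MEAN('sentiment'),
    'num_sentences': gl.aggregate.COUNT()
})
summary_a.sort('mean_sentiment', ascending=False)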
cgpotts/cs224u
rel_ext_01_task.ipynb
apache-2.0
__author__ = "Bill MacCartney and Christopher Potts" __version__ = "CS224u, Stanford, Spring 2022" """ Explanation: Relation extraction using distant supervision: task definition End of explanation """ import random import os from collections import Counter, defaultdict import rel_ext import utils # Set all the random seeds for reproducibility. Only the # system seed is relevant for this notebook. utils.fix_random_seeds() rel_ext_data_home = os.path.join('data', 'rel_ext_data') """ Explanation: Contents Overview The task of relation extraction Hand-built patterns Supervised learning Distant supervision Set-up The corpus The knowledge base Problem formulation Joining the corpus and the KB Negative instances Multi-label classification Building datasets Evaluation Splitting the data Choosing evaluation metrics Running evaluations Evaluating a random-guessing strategy A simple baseline model Overview This notebook illustrates an approach to relation extraction using distant supervision. It uses a simplified version of the approach taken by Mintz et al. in their 2009 paper, Distant supervision for relation extraction without labeled data. If you haven't yet read that paper, read it now! The rest of the notebook will make a lot more sense after you're familiar with it. The task of relation extraction Relation extraction is the task of extracting from natural language text relational triples such as: (founders, SpaceX, Elon_Musk) (has_spouse, Elon_Musk, Talulah_Riley) (worked_at, Elon_Musk, Tesla_Motors) If we can accumulate a large knowledge base (KB) of relational triples, we can use it to power question answering and other applications. Building a KB manually is slow and expensive, but much of the knowledge we'd like to capture is already expressed in abundant text on the web. The aim of relation extraction, therefore, is to accelerate the construction of new KBs — and facilitate the ongoing curation of existing KBs — by extracting relational triples from natural language text. Hand-built patterns An obvious way to start is to write down a few patterns which express each relation. For example, we can use the pattern "X is the founder of Y" to find new instances of the founders relation. If we search a large corpus, we may find the phrase "Elon Musk is the founder of SpaceX", which we can use as evidence for the relational triple (founders, SpaceX, Elon_Musk). Unfortunately, this approach doesn't get us very far. The central challenge of relation extraction is the fantastic diversity of language, the multitude of possible ways to express a given relation. For example, each of the following sentences expresses the relational triple (founders, SpaceX, Elon_Musk): "You may also be thinking of Elon Musk (founder of SpaceX), who started PayPal." "Interesting Fact: Elon Musk, co-founder of PayPal, went on to establish SpaceX, one of the most promising space travel startups in the world." "If Space Exploration (SpaceX), founded by Paypal pioneer Elon Musk succeeds, commercial advocates will gain credibility and more support in Congress." The patterns which connect "Elon Musk" with "SpaceX" in these examples are not ones we could have easily anticipated. To do relation extraction effectively, we need to go beyond hand-built patterns. Supervised learning Effective relation extraction will require applying machine learning methods. The natural place to start is with supervised learning. This means training an extraction model from a dataset of examples which have been labeled with the target output. 
Sentences like the three examples above would be annotated with the founders relation, but we'd also have sentences which include "Elon Musk" and "SpaceX" but do not express the founders relation, such as: "Billionaire entrepreneur Elon Musk announced the latest addition to the SpaceX arsenal: the 'Big F---ing Rocket' (BFR)". Such "negative examples" would be labeled as such, and the fully-supervised model would then be able to learn from both positive and negative examples the linguistic patterns that indicate each relation. The difficulty with the fully-supervised approach is the cost of generating training data. Because of the great diversity of linguistic expression, our model will need lots and lots of training data: at least tens of thousands of examples, although hundreds of thousands or millions would be much better. But labeling the examples is just as slow and expensive as building the KB by hand would be. Distant supervision The goal of distant supervision is to capture the benefits of supervised learning without paying the cost of labeling training data. Instead of labeling extraction examples by hand, we use existing relational triples to automatically identify extraction examples in a large corpus. For example, if we already have in our KB the relational triple (founders, SpaceX, Elon_Musk), we can search a large corpus for sentences in which "SpaceX" and "Elon Musk" co-occur, make the (unreliable!) assumption that all the sentences express the founder relation, and then use them as training data for a learned model to identify new instances of the founder relation — all without doing any manual labeling. This is a powerful idea, but it has two limitations. The first is that, inevitably, some of the sentences in which "SpaceX" and "Elon Musk" co-occur will not express the founder relation — like the BFR example above. By making the blind assumption that all such sentences do express the founder relation, we are essentially injecting noise into our training data, and making it harder for our learning algorithms to learn good models. Distant supervision is effective in spite of this problem because it makes it possible to leverage vastly greater quantities of training data, and the benefit of more data outweighs the harm of noisier data. The second limitation is that we need an existing KB to start from. We can only train a model to extract new instances of the founders relation if we already have many instances of the founders relation. Thus, while distant supervision is a great way to extend an existing KB, it's not useful for creating a KB containing new relations from scratch. [ top ] Set-up Make sure your environment includes all the requirements for the cs224u repository. If you haven't already, download the course data, unpack it, and place it in the directory containing the course repository – the same directory as this notebook. (If you want to put it somewhere else, change rel_ext_data_home below.) End of explanation """ corpus = rel_ext.Corpus(os.path.join(rel_ext_data_home, 'corpus.tsv.gz')) print('Read {0:,} examples'.format(len(corpus))) """ Explanation: [ top ] The corpus As usual when we're doing NLP, we need to start with a corpus — a large sample of natural language text. And because our goal is to do relation extraction with distant supervision, we need to be able to identify entities in the text and connect them to a knowledge base of relations between entities. 
So, we need a corpus in which entity mentions are annotated with entity resolutions which map them to unique, unambiguous identifiers. Entity resolution serves two purposes: It ensures that if an entity mention could refer to two different entities, it is properly disambiguated. For example, "New York" could refer to the city or the state. It ensures that if two different entity mentions refer to the same entity, they are properly identified. For example, both "New York City" and "The Big Apple" refer to New York City. The corpus we'll use for this project is derived from the Wikilinks dataset announced by Google in 2013. This dataset contains over 40M mentions of 3M distinct entities spanning 10M webpages. It provides entity resolutions by mapping each entity mention to a Wikipedia URL. Now, in order to do relation extraction, we actually need pairs of entity mentions, and it's important to have the context around and between the two mentions. Fortunately, UMass has provided an expanded version of Wikilinks which includes the context around each entity mention. We've written code to stitch together pairs of entity mentions along with their contexts, and we've filtered the examples extensively. The result is a compact corpus suitable for our purposes. Because we're frequently going to want to retrieve corpus examples containing specific entities, we've created a Corpus class which holds not only the examples themselves, but also a precomputed index. Let's take a closer look. End of explanation """ print(corpus.examples[1]) """ Explanation: Great, that's a lot of examples! Let's take a closer look at one. End of explanation """ ex = corpus.examples[1] ' '.join((ex.left, ex.mention_1, ex.middle, ex.mention_2, ex.right)) """ Explanation: Every example represents a fragment of webpage text containing two entity mentions. The first two fields, entity_1 and entity_2, contain unique identifiers for the two entities mentioned. We name entities using Wiki IDs, which you can think of as the last portion of a Wikipedia URL. Thus the Wiki ID Barack_Obama designates the entity described by https://en.wikipedia.org/wiki/Barack_Obama. The next five fields represent the text surrounding the two mentions, divided into five chunks: left contains the text before the first mention, mention_1 is the first mention itself, middle contains the text between the two mentions, mention_2 is the second mention, and the field right contains the text after the second mention. Thus, we can reconstruct the context as a single string like this: End of explanation """ counter = Counter() for example in corpus.examples: counter[example.entity_1] += 1 counter[example.entity_2] += 1 print('The corpus contains {} entities'.format(len(counter))) counts = sorted([(count, key) for key, count in counter.items()], reverse=True) print('The most common entities are:') for count, key in counts[:20]: print('{:10d} {}'.format(count, key)) """ Explanation: The last five fields contain the same five chunks of text, but this time annotated with part-of-speech (POS) tags, which may turn out to be useful when we start building models for relation extraction. Let's look at the distribution of entities over the corpus. How many entities are there, and what are the most common ones? End of explanation """ corpus.show_examples_for_pair('Elon_Musk', 'Tesla_Motors') """ Explanation: The main benefit we gain from the Corpus class is the ability to retrieve examples containing specific entities. 
Let's find examples containing Elon_Musk and Tesla_Motors. End of explanation """ corpus.show_examples_for_pair('Tesla_Motors', 'Elon_Musk') """ Explanation: Actually, this might not be all of the examples containing Elon_Musk and Tesla_Motors. It's only the examples where Elon_Musk was mentioned first and Tesla_Motors second. There may be additional examples that have them in the reverse order. Let's check. End of explanation """ kb = rel_ext.KB(os.path.join(rel_ext_data_home, 'kb.tsv.gz')) print('Read {0:,} KB triples'.format(len(kb))) """ Explanation: Sure enough. Going forward, we'll have to remember to check both "directions" when we're looking for examples contains a specific pair of entities. This corpus is not without flaws. As you get more familiar with it, you will likely discover that it contains many examples that are nearly — but not exactly — duplicates. This seems to be a consequence of the web document sampling methodology that was used in the construction of the Wikilinks dataset. However, despite a few warts, it will serve our purposes. One thing this corpus does not include is any annotation about relations. Thus, it could not be used for the fully-supervised approach to relation extraction, because the fully-supervised approach requires that each pair of entity mentions be annotated with the relation (if any) that holds between the two entities. In order to make any headway, we'll need to connect the corpus with an external source of knowledge about relations. We need a knowledge base. [ top ] The knowledge base The data distribution for this unit includes a knowledge base (KB) ultimately derived from Freebase. Unfortunately, Freebase was shut down in 2016, but the Freebase data is still available from various sources and in various forms. The KB included here was extracted from the Freebase Easy data dump. The KB is a collection of relational triples, each consisting of a relation, a subject, and an object. For example, here are three triples from the KB: (place_of_birth, Barack_Obama, Honolulu) (has_spouse, Barack_Obama, Michelle_Obama) (author, The_Audacity_of_Hope, Barack_Obama) As you might guess: The relation is one of a handful of predefined constants, such as place_of_birth or has_spouse. The subject and object are entities represented by Wiki IDs (that is, suffixes of Wikipedia URLs). Now, just as we did for the corpus, we've created a KB class to store the KB triples and some associated indexes. This class makes it easy and efficient to look up KB triples both by relation and by entities. End of explanation """ len(kb.all_relations) """ Explanation: Let's get a sense of the high-level characteristics of this KB. Some questions we'd like to answer: How many relations are there? How big is each relation? Examples of each relation. How many unique entities does the KB include? End of explanation """ for rel in kb.all_relations: print('{:12d} {}'.format(len(kb.get_triples_for_relation(rel)), rel)) """ Explanation: How big is each relation? That is, how many triples does each relation contain? End of explanation """ for rel in kb.all_relations: print(tuple(kb.get_triples_for_relation(rel)[0])) """ Explanation: Let's look at one example from each relation, so that we can get a sense of what they mean. End of explanation """ kb.get_triples_for_entities('France', 'Germany') """ Explanation: The kb.get_triples_for_entities() method allows us to look up triples by the entities they contain. Let's use it to see what relation(s) hold between France and Germany. 
End of explanation """ kb.get_triples_for_entities('Germany', 'France') """ Explanation: Relations like adjoins and has_sibling are intuitively symmetric — if the relation holds between X and Y, then we expect it to hold between Y and X as well. End of explanation """ kb.get_triples_for_entities('Tesla_Motors', 'Elon_Musk') """ Explanation: However, there's no guarantee that all such inverse triples actually appear in the KB. (You could write some code to check.) Most relations, however, are intuitively asymmetric. Let's see what relation holds between Tesla_Motors and Elon_Musk. End of explanation """ kb.get_triples_for_entities('Elon_Musk', 'Tesla_Motors') """ Explanation: It's a bit arbitrary that the KB includes a given asymmetric relation rather than its inverse. For example, instead of the founders relation with triple (founders, Tesla_Motors, Elon_Musk), we might have had a founder_of relation with triple (founder_of, Elon_Musk, Tesla_Motors). It doesn't really matter. Although we don't have a founder_of relation, there might still be a relation between Elon_Musk and Tesla_Motors. Let's check. End of explanation """ kb.get_triples_for_entities('Cleopatra', 'Ptolemy_XIII_Theos_Philopator') """ Explanation: Aha, yes, that makes sense. So it can be the case that one relation holds between X and Y, and a different relation holds between Y and X. One more observation: there may be more than one relation that holds between a given pair of entities, even in one direction. End of explanation """ counter = Counter() for kbt in kb.kb_triples: counter[kbt.sbj] += 1 counter[kbt.obj] += 1 print('The KB contains {:,} entities'.format(len(counter))) counts = sorted([(count, key) for key, count in counter.items()], reverse=True) print('The most common entities are:') for count, key in counts[:20]: print('{:10d} {}'.format(count, key)) """ Explanation: No! What? Yup, it's true — Cleopatra married her younger brother, Ptolemy XIII. Wait, it gets worse — she also married her even younger brother, Ptolemy XIV. Apparently this was normal behavior in ancient Egypt. Moving on ... Let's look at the distribution of entities in the KB. How many entities are there, and what are the most common ones? End of explanation """ dataset = rel_ext.Dataset(corpus, kb) """ Explanation: The number of entities in the KB is less than half the number of entities in the corpus! Evidently the corpus has much broader coverage than the KB. Note that there is no promise or expectation that this KB is complete. Not only does the KB contain no mention of many entities from the corpus — even for the entities it does include, there may be possible triples which are true in the world but are missing from the KB. As an example, these triples are in the KB: (founders, SpaceX, Elon_Musk) (founders, Tesla_Motors, Elon_Musk) (worked_at, Elon_Musk, Tesla_Motors) but this one is not: (worked_at, Elon_Musk, SpaceX) In fact, the whole point of developing methods for automatic relation extraction is to extend existing KBs (and build new ones) by identifying new relational triples from natural language text. If our KBs were complete, we wouldn't have anything to do. [ top ] Problem formulation With our data assets in hand, it's time to provide a precise formulation of the prediction problem we aim to solve. We need to specify: What is the input to the prediction? Is it a specific pair of entity mentions in a specific context? Or is it a pair of entities, apart from any specific mentions? What is the output of the prediction? 
Do we need to predict at most one relation label? (This is multi-class classification.) Or can we predict multiple relation labels? (This is multi-label classification.) Joining the corpus and the KB In order to leverage the distant supervision paradigm, we'll need to connect information in the corpus with information in the KB. There are two possibilities, depending on how we formulate our prediction problem: Use the KB to generate labels for the corpus. If our problem is to classify a pair of entity mentions in a specific example in the corpus, then we can use the KB to provide labels for training examples. Labeling specific examples is how the fully supervised paradigm works, so it's the obvious way to think about leveraging distant supervision as well. Although it can be made to work, it's not actually the preferred approach. Use the corpus to generate features for entity pairs. If instead our problem is to classify a pair of entities, then we can use all the examples from the corpus where those two entities co-occur to generate a feature representation describing the entity pair. This is the approach taken by Mintz et al. 2009, and it's the approach we'll pursue here. So we'll formulate our prediction problem such that the input is a pair of entities, and the goal is to predict what relation(s) the pair belongs to. The KB will provide the labels, and the corpus will provide the features. We've created a Dataset class which combines a corpus and a KB, and provides a variety of convenience methods for the dataset. End of explanation """ dataset.count_examples() """ Explanation: Let's determine how many examples we have for each triple in the KB. We'll compute averages per relation. End of explanation """ unrelated_pairs = dataset.find_unrelated_pairs() print('Found {0:,} unrelated pairs, including:'.format(len(unrelated_pairs))) for pair in list(unrelated_pairs)[:10]: print(' ', pair) """ Explanation: For most relations, the total number of examples is fairly large, so we can be optimistic about learning what linguistic patterns express a given relation. However, for individual entity pairs, the number of examples is often quite low. Of course, more data would be better — much better! But more data could quickly become unwieldy to work with in a notebook like this. Negative instances By joining the corpus to the KB, we can obtain abundant positive instances for each relation. But a classifier cannot be trained on positive instances alone. In order to apply the distant supervision paradigm, we will also need some negative instances — that is, entity pairs which do not belong to any known relation. If you like, you can think of these entity pairs as being assigned to a special relation called NO_RELATION. We can find plenty of such pairs by searching for examples in the corpus which contain two entities which do not belong to any relation in the KB. End of explanation """ dataset.count_relation_combinations() """ Explanation: That's a lot of negative instances! In fact, because these negative instances far outnumber our positive instances (that is, the triples in our KB), when we train models we'll wind up downsampling the negative instances substantially. Remember, though, that some of these supposedly negative instances may be false negatives. Our KB is not complete. A pair of entities might be related in real life even if they don't appear together in the KB. Multi-label classification A given pair of entities can belong to more than one relation. 
In fact, this is quite common in our KB. End of explanation """ kbts_by_rel, labels_by_rel = dataset.build_dataset( include_positive=True, sampling_rate=0.1, seed=1) print(kbts_by_rel['adjoins'][0], labels_by_rel['adjoins'][0]) print(kbts_by_rel['capital'][637], labels_by_rel['capital'][637]) """ Explanation: While a few of those combinations look like data errors, most look natural and intuitive. Multiple relations per entity pair is a commonplace phenomenon. This observation strongly suggests formulating our prediction problem as multi-label classification. We could instead treat it as multi-class classification — and indeed, Mintz et al. 2009 did so — but if we do, we'll be faced with the problem of assigning a single relation label to entity pairs which actually belong to multiple relations. It's not obvious how best to do this (and Mintz et al. 2009 did not make their method clear). There are a number of ways to approach multi-label classification, but the most obvious is the binary relevance method, which just factors multi-label classification over n labels into n independent binary classification problems, one for each label. A disadvantage of this approach is that, by treating the binary classification problems as independent, it fails to exploit correlations between labels. But it has the great virtue of simplicity, and it will suffice for our purposes. So our problem will be to take as input an entity pair and a candidate relation (label), and to return a binary prediction as to whether the entity pair belongs to the relation. Since a KB triple is precisely a relation and a pair of entities, we could say equivalently that our prediction problem amounts to binary classification of KB triples. Given a candidate KB triple, do we predict that it is valid? Building datasets We're now in a position to write a function to build datasets suitable for training and evaluating predictive models. These datasets will have the following characteristics: Because we've formulated our problem as multi-label classification, and we'll be training separate models for each relation, we won't build a single dataset. Instead, we'll build a dataset for each relation, and our return value will be a map from relation names to datasets. The dataset for each relation will consist of two parallel lists: A list of candidate KBTriples which combine the given relation with a pair of entities. A corresponding list of boolean labels indicating whether the given KBTriple belongs to the KB. The dataset for each relation will include KBTriples derived from two sources: Positive instances will be drawn from the KB. Negative instances will be sampled from unrelated entity pairs, as described above. End of explanation """ splits = dataset.build_splits( split_names=['tiny', 'train', 'dev'], split_fracs=[0.01, 0.74, 0.25], seed=1) splits """ Explanation: [ top ] Evaluation Before we start building models, let's set up a test harness that allows us to measure a model's performance. This may seem backwards, but it's analogous to the software engineering paradigm of test-driven development: first, define success; then, pursue it. Splitting the data Whenever building a model from data, it's good practice to partition the data into a multiple splits — minimally, a training split on which to train the model, and a test split on which to evaluate it. In fact, we'll go a bit further, and define three splits: The tiny split (1%). 
It's often useful to carve out a tiny chunk of data to use in place of training or test data during development. Of course, any quantitative results obtained by evaluating on the tiny split are nearly meaningless, but because evaluations run extremely fast, using this split is a good way to flush out bugs during iterative cycles of code development. The train split (74%). We'll use the majority of our data for training models, both during development and at final evaluation. Experiments with the train split may take longer to run, but they'll have much greater statistical power. The dev split (25%). We'll use the dev split as test data for intermediate (formative) evaluations during development. During routine experiments, all evaluations should use the dev split. You could also carve out a test split for a final (summative) evaluation at the conclusion of your work. The bake-off will have its own test set, so you needn't do this, but this is an important step for projects without pre-defined test splits. Splitting our data assets is somewhat more complicated than in many other NLP problems, because we have both a corpus and KB. In order to minimize leakage of information from training data into test data, we'd like to split both the corpus and the KB. And in order to maximize the value of a finite quantity of data, we'd like to align the corpus splits and KB splits as closely as possible. In an ideal world, each split would have its own hermetically-sealed universe of entities, the corpus for that split would contain only examples mentioning those entities, and the KB for that split would contain only triples involving those entities. However, that ideal is not quite achievable in practice. In order to get as close as possible, we'll follow this plan: First, we'll split the set of entities which appear as the subject in some KB triple. Then, we'll split the set of KB triples based on their subject entity. Finally, we'll split the set of corpus examples. If the first entity in the example has already been assigned to a split, we'll assign the example to the same split. Alternatively, if the second entity has already been assigned to a split, we'll assign the example to the same split. Otherwise, we'll assign the example to a split randomly. <!-- \[ TODO: figure out whether we actually need to split the _corpus_ -- any lift from testing on train corpus? \] --> The Dataset method build_splits handles all of this: End of explanation """ def lift(f): return lambda xs: [f(x) for x in xs] def make_random_classifier(p=0.50): def random_classify(kb_triple): return random.random() < p return lift(random_classify) rel_ext.evaluate(splits, make_random_classifier()) """ Explanation: So now we can use splits['train'].corpus to refer to the training corpus, or splits['dev'].kb to refer to the dev KB. Choosing evaluation metrics Because we've formulated our prediction problem as a family of binary classification problems, one for each relation (label), choosing evaluation metrics is pretty straightforward. The standard metrics for evaluating binary classification are precision and recall, which are more meaningful than simple accuracy, particularly in problems with a highly biased label distribution (like ours). We'll compute and report precision and recall separately for each relation (label). There are only two wrinkles: How best to combine precision and recall into a single metric. Having two evaluation metrics is often inconvenient. 
If we're considering a change to our model which improves precision but degrades recall, should we take it? To drive an iterative development process, it's useful to have a single metric on which to hill-climb. For binary classification, the standard answer is the F<sub>1</sub>-score, which is the harmonic mean of precision and recall. However, the F<sub>1</sub>-score gives equal weight to precision and recall. For our purposes, precision is probably more important than recall. If we're extracting new relation triples from (massively abundant) text on the web in order to augment a knowledge base, it's probably more important that the triples we extract are correct (precision) than that we extract all the triples we could (recall). Accordingly, instead of the F<sub>1</sub>-score, we'll use the F<sub>0.5</sub>-score, which gives precision twice as much weight as recall. How to aggregate metrics across relations (labels). Reporting metrics separately for each relation is great, but in order to drive iterative development, we'd also like to have summary metrics which aggregate across all relations. There are two possible ways to do it: micro-averaging will give equal weight to all problem instances, and thus give greater weight to relations with more instances, while macro-averaging will give equal weight to all relations, and thus give lesser weight to problem instances in relations with more instances. Because the number of problem instances per relation is, to some degree, an accident of our data collection methodology, we'll choose macro-averaging. Thus, while every evaluation will report lots of metrics, when we need a single metric on which to hill-climb, it will be the macro-averaged F<sub>0.5</sub>-score. Running evaluations It's time to write some code to run evaluations and report results. This is now straightforward. The rel_ext.evaluate() function takes as inputs: splits: a dict mapping split names to Dataset instances classifier, which is just a function that takes a list of KBTriples and returns a list of boolean predictions test_split, the split on which to evaluate the classifier, dev by default verbose, a boolean indicating whether to print output Evaluating a random-guessing strategy In order to validate our evaluation framework, and to set a floor under expected results for future evaluations, let's implement and evaluate a random-guessing strategy. The random guesser is a classifier which completely ignores its input, and simply flips a coin. End of explanation """ def find_common_middles(split, top_k=3, show_output=False): corpus = split.corpus kb = split.kb mids_by_rel = { 'fwd': defaultdict(lambda: defaultdict(int)), 'rev': defaultdict(lambda: defaultdict(int))} for rel in kb.all_relations: for kbt in kb.get_triples_for_relation(rel): for ex in corpus.get_examples_for_entities(kbt.sbj, kbt.obj): mids_by_rel['fwd'][rel][ex.middle] += 1 for ex in corpus.get_examples_for_entities(kbt.obj, kbt.sbj): mids_by_rel['rev'][rel][ex.middle] += 1 def most_frequent(mid_counter): return sorted([(cnt, mid) for mid, cnt in mid_counter.items()], reverse=True)[:top_k] for rel in kb.all_relations: for dir in ['fwd', 'rev']: top = most_frequent(mids_by_rel[dir][rel]) if show_output: for cnt, mid in top: print('{:20s} {:5s} {:10d} {:s}'.format(rel, dir, cnt, mid)) mids_by_rel[dir][rel] = set([mid for cnt, mid in top]) return mids_by_rel _ = find_common_middles(splits['train'], show_output=True) """ Explanation: The results are not too surprising. 
Recall is generally around 0.50, which makes sense: on any given example with label True, we are 50% likely to guess the right label. But precision is very poor, because most labels are not True, and because our classifier is completely ignorant of the features of specific problem instances. Accordingly, the F<sub>0.5</sub>-score is also very poor — first because even the equally-weighted F<sub>1</sub>-score is always closer to the lesser of precision and recall, and second because the F<sub>0.5</sub>-score weights precision twice as much as recall. Actually, the most remarkable result in this table is the comparatively good performance for the contains relation! What does this result tell us about the data? [ top ] A simple baseline model It shouldn't be too hard to do better than random guessing. But for now, let's aim low — let's use the data we have in the easiest and most obvious way, and see how far that gets us. We start from the intuition that the words between two entity mentions frequently tell us how they're related. For example, in the phrase "SpaceX was founded by Elon Musk", the words "was founded by" indicate that the founders relation holds between the first entity mentioned and the second. Likewise, in the phrase "Elon Musk established SpaceX", the word "established" indicates the founders relation holds between the second entity mentioned and the first. So let's write some code to find the most common phrases that appear between the two entity mentions for each relation. As the examples illustrate, we need to make sure to consider both directions: that is, where the subject of the relation appears as the first mention, and where it appears as the second. End of explanation """ def train_top_k_middles_classifier(top_k=3): split = splits['train'] corpus = split.corpus top_k_mids_by_rel = find_common_middles(split=split, top_k=top_k) def classify(kb_triple): fwd_mids = top_k_mids_by_rel['fwd'][kb_triple.rel] rev_mids = top_k_mids_by_rel['rev'][kb_triple.rel] for ex in corpus.get_examples_for_entities(kb_triple.sbj, kb_triple.obj): if ex.middle in fwd_mids: return True for ex in corpus.get_examples_for_entities(kb_triple.obj, kb_triple.sbj): if ex.middle in rev_mids: return True return False return lift(classify) rel_ext.evaluate(splits, train_top_k_middles_classifier()) """ Explanation: A few observations here: Some of the most frequent middles are natural and intuitive. For example, ", son of" indicates a forward parents relation, while "and his son" indicates a reverse parents relation. Punctuation and stop words such as "and" and "of" are extremely common. Unlike some other NLP applications, it's probably a bad idea to throw these away — they carry lots of useful information. However, punctuation and stop words tend to be highly ambiguous. For example, a bare comma is a likely middle for almost every relation in at least one direction. A few of the results reflect quirks of the dataset. For example, the appearance of the phrase "in 1994 , he became a central figure in the" as a common middle for the genre relation reflects both the relative scarcity of examples for that relation, and an unfortunate tendency of the Wikilinks dataset to include duplicate or near-duplicate source documents. (That middle connects the entities Ready to Die — the first studio album by the Notorious B.I.G. — and East Coast hip hop.) 
Now it's a straightforward task to build and evaluate a classifier which predicts True for a candidate KBTriple just in case its entities appear in the corpus connected by one of the phrases we just discovered.
End of explanation
"""
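For reference, the F0.5 score that rel_ext.evaluate reports combines precision and recall according to the standard F-beta formula, with beta = 0.5 weighting precision twice as heavily as recall. A small sketch (the library's exact implementation may differ, for instance in how it guards against empty relations):

def f_beta(precision, recall, beta=0.5):
    if precision == 0.0 and recall == 0.0:
        return 0.0
    beta2 = beta ** 2
    return (1.0 + beta2) * precision * recall / (beta2 * precision + recall)

# f_beta(0.8, 0.4) rewards the high precision more than f_beta(0.4, 0.8) would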
beangoben/quantum_solar
Dia1/3_Graficame_Espectro_Solar.ipynb
mit
import numpy as np # numerical computing module
import matplotlib.pyplot as plt # plotting module
import pandas as pd # data handling module
import seaborn as sns
# this line makes the plots appear inside the notebook
%matplotlib inline
"""
Explanation: Intro to Matplotlib
Matplotlib = a library for plotting mathematical things
What is Matplotlib?
Matplotlib is a library for creating 2D images easily.
Check out more at:
Official page: http://matplotlib.org/
Example gallery: http://matplotlib.org/gallery.html
A more advanced library built on matplotlib, Seaborn: http://stanford.edu/~mwaskom/software/seaborn/
Interactive visualization library: http://bokeh.pydata.org/
A great tutorial: http://www.labri.fr/perso/nrougier/teaching/matplotlib/
To use matplotlib, you only have to import the module; it is also a good idea to import numpy, since it is very useful.
End of explanation
"""
import numericalunits as nu
"""
Explanation: Creating plots (plot)
Creating plots is very easy in matplotlib: if you have a list of X values and another of y values, all it takes is:
We can use the function np.linspace to create values in a range; for example, if we want 100 numbers between 0 and 10 we use:
And we can plot two things at the same time:
What if we want to tell each line apart? We use legend(); we also have to give each plot a name.
We can also do more things, such as drawing only the points, or the lines together with the points, using linestyle:
Drawing points (scatter)
Sometimes we do not want to draw lines but points; this gives us information about where the data lie spatially. For that we can use it as follows:
But we can also pack in more information, for example giving each point a color, or giving the points different sizes:
Histograms (hist)
Histograms show us distributions of data, the shape of the data; they show us the number of data points of different kinds:
Another kind of data, drawn from a Gaussian bell curve, that is, a normal distribution:
Let's plot the beautiful solar spectrum
First, numerical constants
To use it, we convert numbers into quantities with units, for example:
x = 5 * nu.cm means "x equals 5 centimeters".
If you want to extract the numerical value of x, you can divide by the desired unit, y = x / nu.mm, in which case we get the numerical value in millimeters.
End of explanation
"""
import scipy.interpolate, scipy.integrate, wget, tarfile
"""
Explanation: We import several Python packages.
End of explanation
"""
tCell = 300 * nu.K
"""
Explanation: We define an arbitrary solar cell at a temperature of 300 kelvin:
End of explanation
"""
data_url = 'http://rredc.nrel.gov/solar/spectra/am1.5/ASTMG173/compressed/ASTMG173.csv.tar'
a_file = wget.download(data_url)
# wget.download returns the path of the downloaded file, so open the archive by name
download_as_tarfile_object = tarfile.open(a_file)
csv_file = download_as_tarfile_object.extractfile('ASTMG173.csv')
"""
Explanation: Downloading data
Sometimes the data we want lives on the internet. Here we will use data from NREL (the National Renewable Energy Laboratory): http://rredc.nrel.gov/solar/spectra/am1.5/ for the solar spectrum (AM1.5G) with an intensity of 1000 W/m2. First we download it and decompress it:
End of explanation
"""
# read directly from the file object extracted from the tar archive
downloaded_array = np.genfromtxt(csv_file, delimiter=",", skip_header=2)
downloaded_array.shape
"""
Explanation: What shape does the data have?
End of explanation
"""
# Wavelength is in column 0, AM1.5G data is column 2
AM15 = downloaded_array[:,[0,2]]
# The first line should be 280.0 , 4.7309E-23
# The last line should be 4000.0, 7.1043E-03
print(AM15)
"""
Explanation: Manipulating the data
Column 0 is the wavelength and column 2 is the AM1.5G data.
End of explanation
"""
AM15[:,0] *= nu.nm
AM15[:,1] *= nu.W * nu.m**-2 * nu.nm**-1
"""
Explanation: Let's attach units to each column.
End of explanation
"""
# wavelength limits come from the data; energy limits follow from E = h * c0 / lambda
wavelength_min = np.min(AM15[:,0])
wavelength_max = np.max(AM15[:,0])
E_min = nu.hPlanck * nu.c0 / wavelength_max
E_max = nu.hPlanck * nu.c0 / wavelength_min
"""
Explanation: Limits of the data
For the wavelength limits ($\lambda$), we can use np.min and np.max. For the energy we will use the formula
$$ E = \frac{h c_0}{\lambda} $$
End of explanation
"""
AM15interp = scipy.interpolate.interp1d(AM15[:,0], AM15[:,1])
"""
Explanation: We create a function that interpolates intermediate values.
End of explanation
"""
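With the interpolating function in hand, we can finally plot the spectrum. The following is a minimal sketch rather than a cell from the original notebook: it reuses AM15interp, wavelength_min and wavelength_max from above, and the wavelength grid, figure size and labels are illustrative choices. Dividing by the numericalunits factors is just the unit convention described earlier.

lambdas = np.linspace(wavelength_min, wavelength_max, 2000)
plt.figure(figsize=(8, 5))
plt.plot(lambdas / nu.nm,
         AM15interp(lambdas) / (nu.W * nu.m**-2 * nu.nm**-1),
         label='AM1.5G')
plt.xlabel('Wavelength (nm)')
plt.ylabel('Spectral irradiance (W m$^{-2}$ nm$^{-1}$)')
plt.legend()
plt.show()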
tensorflow/docs-l10n
site/en-snapshot/guide/estimator.ipynb
apache-2.0
#@title Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """ Explanation: Copyright 2019 The TensorFlow Authors. End of explanation """ !pip install -U tensorflow_datasets import tempfile import os import tensorflow as tf import tensorflow_datasets as tfds """ Explanation: Estimators <table class="tfo-notebook-buttons" align="left"> <td> <a target="_blank" href="https://www.tensorflow.org/guide/estimator"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a> </td> <td> <a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/guide/estimator.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a> </td> <td> <a target="_blank" href="https://github.com/tensorflow/docs/blob/master/site/en/guide/estimator.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a> </td> <td> <a href="https://storage.googleapis.com/tensorflow_docs/docs/site/en/guide/estimator.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a> </td> </table> Warning: Estimators are not recommended for new code. Estimators run v1.Session-style code which is more difficult to write correctly, and can behave unexpectedly, especially when combined with TF 2 code. Estimators do fall under our compatibility guarantees, but will receive no fixes other than security vulnerabilities. See the migration guide for details. This document introduces tf.estimator—a high-level TensorFlow API. Estimators encapsulate the following actions: Training Evaluation Prediction Export for serving TensorFlow implements several pre-made Estimators. Custom estimators are still suported, but mainly as a backwards compatibility measure. Custom estimators should not be used for new code. All Estimators—pre-made or custom ones—are classes based on the tf.estimator.Estimator class. For a quick example, try Estimator tutorials. For an overview of the API design, check the white paper. Setup End of explanation """ def train_input_fn(): titanic_file = tf.keras.utils.get_file("train.csv", "https://storage.googleapis.com/tf-datasets/titanic/train.csv") titanic = tf.data.experimental.make_csv_dataset( titanic_file, batch_size=32, label_name="survived") titanic_batches = ( titanic.cache().repeat().shuffle(500) .prefetch(tf.data.AUTOTUNE)) return titanic_batches """ Explanation: Advantages Similar to a tf.keras.Model, an estimator is a model-level abstraction. The tf.estimator provides some capabilities currently still under development for tf.keras. These are: Parameter server based training Full TFX integration Estimators Capabilities Estimators provide the following benefits: You can run Estimator-based models on a local host or on a distributed multi-server environment without changing your model. Furthermore, you can run Estimator-based models on CPUs, GPUs, or TPUs without recoding your model. 
Estimators provide a safe distributed training loop that controls how and when to:
Load data
Handle exceptions
Create checkpoint files and recover from failures
Save summaries for TensorBoard
When writing an application with Estimators, you must separate the data input pipeline from the model. This separation simplifies experiments with different datasets.
Using pre-made Estimators
Pre-made Estimators enable you to work at a much higher conceptual level than the base TensorFlow APIs. You no longer have to worry about creating the computational graph or sessions since Estimators handle all the "plumbing" for you. Furthermore, pre-made Estimators let you experiment with different model architectures by making only minimal code changes. tf.estimator.DNNClassifier, for example, is a pre-made Estimator class that trains classification models based on dense, feed-forward neural networks.
A TensorFlow program relying on a pre-made Estimator typically consists of the following four steps:
1. Write an input function
For example, you might create one function to import the training set and another function to import the test set. Estimators expect their inputs to be formatted as a pair of objects:
A dictionary in which the keys are feature names and the values are Tensors (or SparseTensors) containing the corresponding feature data
A Tensor containing one or more labels
The input_fn should return a tf.data.Dataset that yields pairs in that format. For example, the following code builds a tf.data.Dataset from the Titanic dataset's train.csv file:
End of explanation
"""
age = tf.feature_column.numeric_column('age')
cls = tf.feature_column.categorical_column_with_vocabulary_list('class', ['First', 'Second', 'Third'])
embark = tf.feature_column.categorical_column_with_hash_bucket('embark_town', 32)
"""
Explanation: The input_fn is executed in a tf.Graph and can also directly return a (features_dict, labels) pair containing graph tensors, but this is error prone outside of simple cases like returning constants.
2. Define the feature columns.
Each tf.feature_column identifies a feature name, its type, and any input pre-processing. For example, the following snippet creates three feature columns.
The first uses the age feature directly as a floating-point input.
The second uses the class feature as a categorical input.
The third uses the embark_town as a categorical input, but uses the hashing trick to avoid the need to enumerate the options, and to set the number of options.
For further information, check the feature columns tutorial.
End of explanation
"""
model_dir = tempfile.mkdtemp()
model = tf.estimator.LinearClassifier(
    model_dir=model_dir,
    feature_columns=[embark, cls, age],
    n_classes=2
)
"""
Explanation: 3. Instantiate the relevant pre-made Estimator.
For example, here's a sample instantiation of a pre-made Estimator named LinearClassifier:
End of explanation
"""
model = model.train(input_fn=train_input_fn, steps=100)
result = model.evaluate(train_input_fn, steps=10)
for key, value in result.items():
    print(key, ":", value)
for pred in model.predict(train_input_fn):
    for key, value in pred.items():
        print(key, ":", value)
    break
"""
Explanation: For more information, you can go to the linear classifier tutorial.
4. Call a training, evaluation, or inference method.
All Estimators provide train, evaluate, and predict methods.
End of explanation """ keras_mobilenet_v2 = tf.keras.applications.MobileNetV2( input_shape=(160, 160, 3), include_top=False) keras_mobilenet_v2.trainable = False estimator_model = tf.keras.Sequential([ keras_mobilenet_v2, tf.keras.layers.GlobalAveragePooling2D(), tf.keras.layers.Dense(1) ]) # Compile the model estimator_model.compile( optimizer='adam', loss=tf.keras.losses.BinaryCrossentropy(from_logits=True), metrics=['accuracy']) """ Explanation: Benefits of pre-made Estimators Pre-made Estimators encode best practices, providing the following benefits: Best practices for determining where different parts of the computational graph should run, implementing strategies on a single machine or on a cluster. Best practices for event (summary) writing and universally useful summaries. If you don't use pre-made Estimators, you must implement the preceding features yourself. Custom Estimators The heart of every Estimator—whether pre-made or custom—is its model function, model_fn, which is a method that builds graphs for training, evaluation, and prediction. When you are using a pre-made Estimator, someone else has already implemented the model function. When relying on a custom Estimator, you must write the model function yourself. Note: A custom model_fn will still run in 1.x-style graph mode. This means there is no eager execution and no automatic control dependencies. You should plan to migrate away from tf.estimator with custom model_fn. The alternative APIs are tf.keras and tf.distribute. If you still need an Estimator for some part of your training you can use the tf.keras.estimator.model_to_estimator converter to create an Estimator from a keras.Model. Create an Estimator from a Keras model You can convert existing Keras models to Estimators with tf.keras.estimator.model_to_estimator. This is helpful if you want to modernize your model code, but your training pipeline still requires Estimators. Instantiate a Keras MobileNet V2 model and compile the model with the optimizer, loss, and metrics to train with: End of explanation """ est_mobilenet_v2 = tf.keras.estimator.model_to_estimator(keras_model=estimator_model) """ Explanation: Create an Estimator from the compiled Keras model. The initial model state of the Keras model is preserved in the created Estimator: End of explanation """ IMG_SIZE = 160 # All images will be resized to 160x160 def preprocess(image, label): image = tf.cast(image, tf.float32) image = (image/127.5) - 1 image = tf.image.resize(image, (IMG_SIZE, IMG_SIZE)) return image, label def train_input_fn(batch_size): data = tfds.load('cats_vs_dogs', as_supervised=True) train_data = data['train'] train_data = train_data.map(preprocess).shuffle(500).batch(batch_size) return train_data """ Explanation: Treat the derived Estimator as you would with any other Estimator. End of explanation """ est_mobilenet_v2.train(input_fn=lambda: train_input_fn(32), steps=50) """ Explanation: To train, call Estimator's train function: End of explanation """ est_mobilenet_v2.evaluate(input_fn=lambda: train_input_fn(32), steps=10) """ Explanation: Similarly, to evaluate, call the Estimator's evaluate function: End of explanation """ import tensorflow.compat.v1 as tf_compat def toy_dataset(): inputs = tf.range(10.)[:, None] labels = inputs * 5. 
+ tf.range(5.)[None, :] return tf.data.Dataset.from_tensor_slices( dict(x=inputs, y=labels)).repeat().batch(2) class Net(tf.keras.Model): """A simple linear model.""" def __init__(self): super(Net, self).__init__() self.l1 = tf.keras.layers.Dense(5) def call(self, x): return self.l1(x) def model_fn(features, labels, mode): net = Net() opt = tf.keras.optimizers.Adam(0.1) ckpt = tf.train.Checkpoint(step=tf_compat.train.get_global_step(), optimizer=opt, net=net) with tf.GradientTape() as tape: output = net(features['x']) loss = tf.reduce_mean(tf.abs(output - features['y'])) variables = net.trainable_variables gradients = tape.gradient(loss, variables) return tf.estimator.EstimatorSpec( mode, loss=loss, train_op=tf.group(opt.apply_gradients(zip(gradients, variables)), ckpt.step.assign_add(1)), # Tell the Estimator to save "ckpt" in an object-based format. scaffold=tf_compat.train.Scaffold(saver=ckpt)) tf.keras.backend.clear_session() est = tf.estimator.Estimator(model_fn, './tf_estimator_example/') est.train(toy_dataset, steps=10) """ Explanation: For more details, please refer to the documentation for tf.keras.estimator.model_to_estimator. Saving object-based checkpoints with Estimator Estimators by default save checkpoints with variable names rather than the object graph described in the Checkpoint guide. tf.train.Checkpoint will read name-based checkpoints, but variable names may change when moving parts of a model outside of the Estimator's model_fn. For forwards compatibility saving object-based checkpoints makes it easier to train a model inside an Estimator and then use it outside of one. End of explanation """ opt = tf.keras.optimizers.Adam(0.1) net = Net() ckpt = tf.train.Checkpoint( step=tf.Variable(1, dtype=tf.int64), optimizer=opt, net=net) ckpt.restore(tf.train.latest_checkpoint('./tf_estimator_example/')) ckpt.step.numpy() # From est.train(..., steps=10) """ Explanation: tf.train.Checkpoint can then load the Estimator's checkpoints from its model_dir. End of explanation """ input_column = tf.feature_column.numeric_column("x") estimator = tf.estimator.LinearClassifier(feature_columns=[input_column]) def input_fn(): return tf.data.Dataset.from_tensor_slices( ({"x": [1., 2., 3., 4.]}, [1, 1, 0, 0])).repeat(200).shuffle(64).batch(16) estimator.train(input_fn) """ Explanation: SavedModels from Estimators Estimators export SavedModels through tf.Estimator.export_saved_model. End of explanation """ tmpdir = tempfile.mkdtemp() serving_input_fn = tf.estimator.export.build_parsing_serving_input_receiver_fn( tf.feature_column.make_parse_example_spec([input_column])) estimator_base_path = os.path.join(tmpdir, 'from_estimator') estimator_path = estimator.export_saved_model(estimator_base_path, serving_input_fn) """ Explanation: To save an Estimator you need to create a serving_input_receiver. This function builds a part of a tf.Graph that parses the raw data received by the SavedModel. The tf.estimator.export module contains functions to help build these receivers. The following code builds a receiver, based on the feature_columns, that accepts serialized tf.Example protocol buffers, which are often used with tf-serving. 
End of explanation """ imported = tf.saved_model.load(estimator_path) def predict(x): example = tf.train.Example() example.features.feature["x"].float_list.value.extend([x]) return imported.signatures["predict"]( examples=tf.constant([example.SerializeToString()])) print(predict(1.5)) print(predict(3.5)) """ Explanation: You can also load and run that model, from python: End of explanation """ mirrored_strategy = tf.distribute.MirroredStrategy() config = tf.estimator.RunConfig( train_distribute=mirrored_strategy, eval_distribute=mirrored_strategy) regressor = tf.estimator.LinearRegressor( feature_columns=[tf.feature_column.numeric_column('feats')], optimizer='SGD', config=config) """ Explanation: tf.estimator.export.build_raw_serving_input_receiver_fn allows you to create input functions which take raw tensors rather than tf.train.Examples. Using tf.distribute.Strategy with Estimator (Limited support) tf.estimator is a distributed training TensorFlow API that originally supported the async parameter server approach. tf.estimator now supports tf.distribute.Strategy. If you're using tf.estimator, you can change to distributed training with very few changes to your code. With this, Estimator users can now do synchronous distributed training on multiple GPUs and multiple workers, as well as use TPUs. This support in Estimator is, however, limited. Check out the What's supported now section below for more details. Using tf.distribute.Strategy with Estimator is slightly different than in the Keras case. Instead of using strategy.scope, now you pass the strategy object into the RunConfig for the Estimator. You can refer to the distributed training guide for more information. Here is a snippet of code that shows this with a premade Estimator LinearRegressor and MirroredStrategy: End of explanation """ def input_fn(): dataset = tf.data.Dataset.from_tensors(({"feats":[1.]}, [1.])) return dataset.repeat(1000).batch(10) regressor.train(input_fn=input_fn, steps=10) regressor.evaluate(input_fn=input_fn, steps=10) """ Explanation: Here, you use a premade Estimator, but the same code works with a custom Estimator as well. train_distribute determines how training will be distributed, and eval_distribute determines how evaluation will be distributed. This is another difference from Keras where you use the same strategy for both training and eval. Now you can train and evaluate this Estimator with an input function: End of explanation """
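Coming back to the SavedModel that was exported and loaded above, it can be useful to check which serving signatures it exposes before calling one. A small optional check (the exact set of signature keys depends on the estimator head, so treat the names below as typical rather than guaranteed):

print(list(imported.signatures.keys()))
# An Estimator export usually includes entries such as 'serving_default',
# 'predict', 'classification' and 'regression'.
print(imported.signatures["predict"].structured_input_signature)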
tensorflow/docs-l10n
site/en-snapshot/guide/keras/writing_a_training_loop_from_scratch.ipynb
apache-2.0
#@title Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """ Explanation: Copyright 2020 The TensorFlow Authors. End of explanation """ import tensorflow as tf from tensorflow import keras from tensorflow.keras import layers import numpy as np """ Explanation: Writing a training loop from scratch <table class="tfo-notebook-buttons" align="left"> <td> <a target="_blank" href="https://www.tensorflow.org/guide/keras/writing_a_training_loop_from_scratch"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a> </td> <td> <a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs/blob/snapshot-keras/site/en/guide/keras/writing_a_training_loop_from_scratch.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a> </td> <td> <a target="_blank" href="https://github.com/keras-team/keras-io/blob/master/guides/writing_a_training_loop_from_scratch.py"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a> </td> <td> <a href="https://storage.googleapis.com/tensorflow_docs/docs/site/en/guide/keras/writing_a_training_loop_from_scratch.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a> </td> </table> Setup End of explanation """ inputs = keras.Input(shape=(784,), name="digits") x1 = layers.Dense(64, activation="relu")(inputs) x2 = layers.Dense(64, activation="relu")(x1) outputs = layers.Dense(10, name="predictions")(x2) model = keras.Model(inputs=inputs, outputs=outputs) """ Explanation: Introduction Keras provides default training and evaluation loops, fit() and evaluate(). Their usage is covered in the guide Training & evaluation with the built-in methods. If you want to customize the learning algorithm of your model while still leveraging the convenience of fit() (for instance, to train a GAN using fit()), you can subclass the Model class and implement your own train_step() method, which is called repeatedly during fit(). This is covered in the guide Customizing what happens in fit(). Now, if you want very low-level control over training & evaluation, you should write your own training & evaluation loops from scratch. This is what this guide is about. Using the GradientTape: a first end-to-end example Calling a model inside a GradientTape scope enables you to retrieve the gradients of the trainable weights of the layer with respect to a loss value. Using an optimizer instance, you can use these gradients to update these variables (which you can retrieve using model.trainable_weights). Let's consider a simple MNIST model: End of explanation """ # Instantiate an optimizer. optimizer = keras.optimizers.SGD(learning_rate=1e-3) # Instantiate a loss function. loss_fn = keras.losses.SparseCategoricalCrossentropy(from_logits=True) # Prepare the training dataset. 
batch_size = 64 (x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data() x_train = np.reshape(x_train, (-1, 784)) x_test = np.reshape(x_test, (-1, 784)) # Reserve 10,000 samples for validation. x_val = x_train[-10000:] y_val = y_train[-10000:] x_train = x_train[:-10000] y_train = y_train[:-10000] # Prepare the training dataset. train_dataset = tf.data.Dataset.from_tensor_slices((x_train, y_train)) train_dataset = train_dataset.shuffle(buffer_size=1024).batch(batch_size) # Prepare the validation dataset. val_dataset = tf.data.Dataset.from_tensor_slices((x_val, y_val)) val_dataset = val_dataset.batch(batch_size) """ Explanation: Let's train it using mini-batch gradient with a custom training loop. First, we're going to need an optimizer, a loss function, and a dataset: End of explanation """ epochs = 2 for epoch in range(epochs): print("\nStart of epoch %d" % (epoch,)) # Iterate over the batches of the dataset. for step, (x_batch_train, y_batch_train) in enumerate(train_dataset): # Open a GradientTape to record the operations run # during the forward pass, which enables auto-differentiation. with tf.GradientTape() as tape: # Run the forward pass of the layer. # The operations that the layer applies # to its inputs are going to be recorded # on the GradientTape. logits = model(x_batch_train, training=True) # Logits for this minibatch # Compute the loss value for this minibatch. loss_value = loss_fn(y_batch_train, logits) # Use the gradient tape to automatically retrieve # the gradients of the trainable variables with respect to the loss. grads = tape.gradient(loss_value, model.trainable_weights) # Run one step of gradient descent by updating # the value of the variables to minimize the loss. optimizer.apply_gradients(zip(grads, model.trainable_weights)) # Log every 200 batches. if step % 200 == 0: print( "Training loss (for one batch) at step %d: %.4f" % (step, float(loss_value)) ) print("Seen so far: %s samples" % ((step + 1) * batch_size)) """ Explanation: Here's our training loop: We open a for loop that iterates over epochs For each epoch, we open a for loop that iterates over the dataset, in batches For each batch, we open a GradientTape() scope Inside this scope, we call the model (forward pass) and compute the loss Outside the scope, we retrieve the gradients of the weights of the model with regard to the loss Finally, we use the optimizer to update the weights of the model based on the gradients End of explanation """ # Get model inputs = keras.Input(shape=(784,), name="digits") x = layers.Dense(64, activation="relu", name="dense_1")(inputs) x = layers.Dense(64, activation="relu", name="dense_2")(x) outputs = layers.Dense(10, name="predictions")(x) model = keras.Model(inputs=inputs, outputs=outputs) # Instantiate an optimizer to train the model. optimizer = keras.optimizers.SGD(learning_rate=1e-3) # Instantiate a loss function. loss_fn = keras.losses.SparseCategoricalCrossentropy(from_logits=True) # Prepare the metrics. train_acc_metric = keras.metrics.SparseCategoricalAccuracy() val_acc_metric = keras.metrics.SparseCategoricalAccuracy() """ Explanation: Low-level handling of metrics Let's add metrics monitoring to this basic loop. You can readily reuse the built-in metrics (or custom ones you wrote) in such training loops written from scratch. 
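If you have never written one, a custom metric is simply a subclass of keras.metrics.Metric that keeps its state in weights. Here is a minimal, hypothetical sketch (it is not used in the rest of this guide, and the 0.5 threshold only makes sense for binary probabilities):

class BinaryTruePositives(keras.metrics.Metric):
    def __init__(self, name="binary_true_positives", **kwargs):
        super().__init__(name=name, **kwargs)
        # Accumulator variable holding the running count of true positives.
        self.true_positives = self.add_weight(name="tp", initializer="zeros")

    def update_state(self, y_true, y_pred, sample_weight=None):
        y_true = tf.cast(y_true, tf.bool)
        y_pred = tf.cast(y_pred > 0.5, tf.bool)
        values = tf.cast(tf.logical_and(y_true, y_pred), self.dtype)
        if sample_weight is not None:
            values *= tf.cast(sample_weight, self.dtype)
        self.true_positives.assign_add(tf.reduce_sum(values))

    def result(self):
        return self.true_positives

    def reset_states(self):
        self.true_positives.assign(0.0)

Whether you use a built-in metric or a custom one like this, the pattern inside a custom training loop is the same.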
Here's the flow: Instantiate the metric at the start of the loop Call metric.update_state() after each batch Call metric.result() when you need to display the current value of the metric Call metric.reset_states() when you need to clear the state of the metric (typically at the end of an epoch) Let's use this knowledge to compute SparseCategoricalAccuracy on validation data at the end of each epoch: End of explanation """ import time epochs = 2 for epoch in range(epochs): print("\nStart of epoch %d" % (epoch,)) start_time = time.time() # Iterate over the batches of the dataset. for step, (x_batch_train, y_batch_train) in enumerate(train_dataset): with tf.GradientTape() as tape: logits = model(x_batch_train, training=True) loss_value = loss_fn(y_batch_train, logits) grads = tape.gradient(loss_value, model.trainable_weights) optimizer.apply_gradients(zip(grads, model.trainable_weights)) # Update training metric. train_acc_metric.update_state(y_batch_train, logits) # Log every 200 batches. if step % 200 == 0: print( "Training loss (for one batch) at step %d: %.4f" % (step, float(loss_value)) ) print("Seen so far: %d samples" % ((step + 1) * batch_size)) # Display metrics at the end of each epoch. train_acc = train_acc_metric.result() print("Training acc over epoch: %.4f" % (float(train_acc),)) # Reset training metrics at the end of each epoch train_acc_metric.reset_states() # Run a validation loop at the end of each epoch. for x_batch_val, y_batch_val in val_dataset: val_logits = model(x_batch_val, training=False) # Update val metrics val_acc_metric.update_state(y_batch_val, val_logits) val_acc = val_acc_metric.result() val_acc_metric.reset_states() print("Validation acc: %.4f" % (float(val_acc),)) print("Time taken: %.2fs" % (time.time() - start_time)) """ Explanation: Here's our training & evaluation loop: End of explanation """ @tf.function def train_step(x, y): with tf.GradientTape() as tape: logits = model(x, training=True) loss_value = loss_fn(y, logits) grads = tape.gradient(loss_value, model.trainable_weights) optimizer.apply_gradients(zip(grads, model.trainable_weights)) train_acc_metric.update_state(y, logits) return loss_value """ Explanation: Speeding-up your training step with tf.function The default runtime in TensorFlow 2 is eager execution. As such, our training loop above executes eagerly. This is great for debugging, but graph compilation has a definite performance advantage. Describing your computation as a static graph enables the framework to apply global performance optimizations. This is impossible when the framework is constrained to greedly execute one operation after another, with no knowledge of what comes next. You can compile into a static graph any function that takes tensors as input. Just add a @tf.function decorator on it, like this: End of explanation """ @tf.function def test_step(x, y): val_logits = model(x, training=False) val_acc_metric.update_state(y, val_logits) """ Explanation: Let's do the same with the evaluation step: End of explanation """ import time epochs = 2 for epoch in range(epochs): print("\nStart of epoch %d" % (epoch,)) start_time = time.time() # Iterate over the batches of the dataset. for step, (x_batch_train, y_batch_train) in enumerate(train_dataset): loss_value = train_step(x_batch_train, y_batch_train) # Log every 200 batches. if step % 200 == 0: print( "Training loss (for one batch) at step %d: %.4f" % (step, float(loss_value)) ) print("Seen so far: %d samples" % ((step + 1) * batch_size)) # Display metrics at the end of each epoch. 
train_acc = train_acc_metric.result() print("Training acc over epoch: %.4f" % (float(train_acc),)) # Reset training metrics at the end of each epoch train_acc_metric.reset_states() # Run a validation loop at the end of each epoch. for x_batch_val, y_batch_val in val_dataset: test_step(x_batch_val, y_batch_val) val_acc = val_acc_metric.result() val_acc_metric.reset_states() print("Validation acc: %.4f" % (float(val_acc),)) print("Time taken: %.2fs" % (time.time() - start_time)) """ Explanation: Now, let's re-run our training loop with this compiled training step: End of explanation """ class ActivityRegularizationLayer(layers.Layer): def call(self, inputs): self.add_loss(1e-2 * tf.reduce_sum(inputs)) return inputs """ Explanation: Much faster, isn't it? Low-level handling of losses tracked by the model Layers & models recursively track any losses created during the forward pass by layers that call self.add_loss(value). The resulting list of scalar loss values are available via the property model.losses at the end of the forward pass. If you want to be using these loss components, you should sum them and add them to the main loss in your training step. Consider this layer, that creates an activity regularization loss: End of explanation """ inputs = keras.Input(shape=(784,), name="digits") x = layers.Dense(64, activation="relu")(inputs) # Insert activity regularization as a layer x = ActivityRegularizationLayer()(x) x = layers.Dense(64, activation="relu")(x) outputs = layers.Dense(10, name="predictions")(x) model = keras.Model(inputs=inputs, outputs=outputs) """ Explanation: Let's build a really simple model that uses it: End of explanation """ @tf.function def train_step(x, y): with tf.GradientTape() as tape: logits = model(x, training=True) loss_value = loss_fn(y, logits) # Add any extra losses created during the forward pass. loss_value += sum(model.losses) grads = tape.gradient(loss_value, model.trainable_weights) optimizer.apply_gradients(zip(grads, model.trainable_weights)) train_acc_metric.update_state(y, logits) return loss_value """ Explanation: Here's what our training step should look like now: End of explanation """ discriminator = keras.Sequential( [ keras.Input(shape=(28, 28, 1)), layers.Conv2D(64, (3, 3), strides=(2, 2), padding="same"), layers.LeakyReLU(alpha=0.2), layers.Conv2D(128, (3, 3), strides=(2, 2), padding="same"), layers.LeakyReLU(alpha=0.2), layers.GlobalMaxPooling2D(), layers.Dense(1), ], name="discriminator", ) discriminator.summary() """ Explanation: Summary Now you know everything there is to know about using built-in training loops and writing your own from scratch. To conclude, here's a simple end-to-end example that ties together everything you've learned in this guide: a DCGAN trained on MNIST digits. End-to-end example: a GAN training loop from scratch You may be familiar with Generative Adversarial Networks (GANs). GANs can generate new images that look almost real, by learning the latent distribution of a training dataset of images (the "latent space" of the images). A GAN is made of two parts: a "generator" model that maps points in the latent space to points in image space, a "discriminator" model, a classifier that can tell the difference between real images (from the training dataset) and fake images (the output of the generator network). A GAN training loop looks like this: 1) Train the discriminator. - Sample a batch of random points in the latent space. - Turn the points into fake images via the "generator" model. 
- Get a batch of real images and combine them with the generated images. - Train the "discriminator" model to classify generated vs. real images. 2) Train the generator. - Sample random points in the latent space. - Turn the points into fake images via the "generator" network. - Get a batch of real images and combine them with the generated images. - Train the "generator" model to "fool" the discriminator and classify the fake images as real. For a much more detailed overview of how GANs works, see Deep Learning with Python. Let's implement this training loop. First, create the discriminator meant to classify fake vs real digits: End of explanation """ latent_dim = 128 generator = keras.Sequential( [ keras.Input(shape=(latent_dim,)), # We want to generate 128 coefficients to reshape into a 7x7x128 map layers.Dense(7 * 7 * 128), layers.LeakyReLU(alpha=0.2), layers.Reshape((7, 7, 128)), layers.Conv2DTranspose(128, (4, 4), strides=(2, 2), padding="same"), layers.LeakyReLU(alpha=0.2), layers.Conv2DTranspose(128, (4, 4), strides=(2, 2), padding="same"), layers.LeakyReLU(alpha=0.2), layers.Conv2D(1, (7, 7), padding="same", activation="sigmoid"), ], name="generator", ) """ Explanation: Then let's create a generator network, that turns latent vectors into outputs of shape (28, 28, 1) (representing MNIST digits): End of explanation """ # Instantiate one optimizer for the discriminator and another for the generator. d_optimizer = keras.optimizers.Adam(learning_rate=0.0003) g_optimizer = keras.optimizers.Adam(learning_rate=0.0004) # Instantiate a loss function. loss_fn = keras.losses.BinaryCrossentropy(from_logits=True) @tf.function def train_step(real_images): # Sample random points in the latent space random_latent_vectors = tf.random.normal(shape=(batch_size, latent_dim)) # Decode them to fake images generated_images = generator(random_latent_vectors) # Combine them with real images combined_images = tf.concat([generated_images, real_images], axis=0) # Assemble labels discriminating real from fake images labels = tf.concat( [tf.ones((batch_size, 1)), tf.zeros((real_images.shape[0], 1))], axis=0 ) # Add random noise to the labels - important trick! labels += 0.05 * tf.random.uniform(labels.shape) # Train the discriminator with tf.GradientTape() as tape: predictions = discriminator(combined_images) d_loss = loss_fn(labels, predictions) grads = tape.gradient(d_loss, discriminator.trainable_weights) d_optimizer.apply_gradients(zip(grads, discriminator.trainable_weights)) # Sample random points in the latent space random_latent_vectors = tf.random.normal(shape=(batch_size, latent_dim)) # Assemble labels that say "all real images" misleading_labels = tf.zeros((batch_size, 1)) # Train the generator (note that we should *not* update the weights # of the discriminator)! with tf.GradientTape() as tape: predictions = discriminator(generator(random_latent_vectors)) g_loss = loss_fn(misleading_labels, predictions) grads = tape.gradient(g_loss, generator.trainable_weights) g_optimizer.apply_gradients(zip(grads, generator.trainable_weights)) return d_loss, g_loss, generated_images """ Explanation: Here's the key bit: the training loop. As you can see it is quite straightforward. The training step function only takes 17 lines. End of explanation """ import os # Prepare the dataset. We use both the training & test MNIST digits. 
batch_size = 64 (x_train, _), (x_test, _) = keras.datasets.mnist.load_data() all_digits = np.concatenate([x_train, x_test]) all_digits = all_digits.astype("float32") / 255.0 all_digits = np.reshape(all_digits, (-1, 28, 28, 1)) dataset = tf.data.Dataset.from_tensor_slices(all_digits) dataset = dataset.shuffle(buffer_size=1024).batch(batch_size) epochs = 1 # In practice you need at least 20 epochs to generate nice digits. save_dir = "./" for epoch in range(epochs): print("\nStart epoch", epoch) for step, real_images in enumerate(dataset): # Train the discriminator & generator on one batch of real images. d_loss, g_loss, generated_images = train_step(real_images) # Logging. if step % 200 == 0: # Print metrics print("discriminator loss at step %d: %.2f" % (step, d_loss)) print("adversarial loss at step %d: %.2f" % (step, g_loss)) # Save one generated image img = tf.keras.preprocessing.image.array_to_img( generated_images[0] * 255.0, scale=False ) img.save(os.path.join(save_dir, "generated_img" + str(step) + ".png")) # To limit execution time we stop after 10 steps. # Remove the lines below to actually train the model! if step > 10: break """ Explanation: Let's train our GAN, by repeatedly calling train_step on batches of images. Since our discriminator and generator are convnets, you're going to want to run this code on a GPU. End of explanation """
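# Follow-on sketch (an addition, not part of the original guide): draw a few latent
# vectors, decode them with the trained generator, and save the samples as one image
# grid. Only names defined above are used (generator, latent_dim, tf, np).
num_samples = 16
random_latent_vectors = tf.random.normal(shape=(num_samples, latent_dim))
samples = generator(random_latent_vectors)  # shape (16, 28, 28, 1), values in [0, 1]

# Tile the 16 digits into a 4x4 grid and save with the same Keras utility used above.
rows = [np.concatenate(list(samples[i * 4:(i + 1) * 4]), axis=1) for i in range(4)]
grid = np.concatenate(rows, axis=0)
img = tf.keras.preprocessing.image.array_to_img(grid * 255.0, scale=False)
img.save("generated_grid.png")
"""
Explanation: As a closing illustration (an addition to the original guide, not part of it),
here is a minimal sketch of how you might eyeball the generator after training: sample a
batch of latent points, decode them, and tile the decoded digits into a single image.
After the single abbreviated epoch above the samples will still look like noise; with more
epochs they should start to resemble MNIST digits.
End of explanation
"""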
gertingold/lit2015
pugmuc2015.ipynb
mit
for n in range(3): print("The IPython notebook is great.") """ Explanation: Working with the IPython notebook Gert-Ludwig Ingold <div style="margin-top:10ex;font-size:smaller">source: `git clone https://github.com/gertingold/lit2015`</div> <div style="font-size:smaller">static view: http://nbviewer.ipython.org/github/gertingold/lit2015/blob/master/pugmuc2015.ipynb</div> <div style="margin-top:5ex;font-size:smaller;text-align:right">Python and Plone User Group Meeting, Munich, 6/2/2015</div> From Newton's notebook <img src="images/MS-ADD-04004_25_detail.png"> From Newton to a modern notebook explanations text which may be structured and contain formulae mathematical manipulations code and results diagrams graphical representations and multimedia objects representation HTML, PDF, … Applications of notebooks Development of small Python scripts example: optimization for Cython Documentation example: data analysis Teaching material examples: lectures, programming course Textbooks examples: nbviewer.ipython.org, books section Presentation with notebook extension RISE by Damián Avila … Python interpreted language relatively easy to learn "Python comes with batteries included" SciPy stack: NumPy, SciPy, Matplotlib, pandas,... IPython - the improved Python shell 2001: Fernando Pérez launches the IPython project. December 2011: IPython 0.12 introduces the IPython notebook. 2013–2014: The development of IPython is supported by the Alfred P. Sloan foundation with 1.15 M$. August 2013: Microsoft supports the development of IPython with 100.000$. February, 27 2015: Release of Version 3.0. May 2015: IPython is part of the Horizon 2020 project OpenDreamKit Next milestone: The language agnostic part moves to the Jupyter project. IPython sources Homepage: ipython.org Repository: github.com/ipython/ipython Mailing list: ipython-dev@scipy.org Debian and Ubuntu packages: ipython-notebook / ipython3-notebook ipython 1.2.1: Debian wheezy-backports, Ubuntu 14.04LTS ipython 2.3.0: Debian jessie, Ubuntu 15.04 ipython 3.1.0: pypi.python.org Installation within a virtual environment pip install "ipython[notebook]" see also: ipython.org/install.html Notebook cells Code cells Text cells Raw cells for interpretation by NBConvert Working with notebook cells A selected notebook cell is in one of two modes: command mode = black frame input mode = green frame and pencil symbol in header switch to input mode: ENTER or doubleclick switch to command mode: ESC or CTRL-M Useful keyboard shortcuts SHIFT-ENTER, CTRL-ENTER: execute the selected cell ALT-ENTER: execute the selected cell and open a new one A: insert a new cell above the present cell B: insert a new cell below the present cell D,D: delete the selected cell M: define selected cell as markdown cell H: display all keyboard shortcuts Code cells End of explanation """ %%html <style> div.text_cell_render h3 { color: #c60; } </style> """ Explanation: Code cells are numbered in the sequence in which they are executed. Magics can be used to insert and execute code not written in Python, e.g. HTML: End of explanation """ from IPython.external import mathjax mathjax? """ Explanation: Text cells Formatting can be down in markdown and HTML. 
Examples: * text in italics oder text in italics * text in bold oder text in bold * code * <span style="color:white;background-color:#c00">emphasized text</span> Mathematical typesetting LaTeX syntax can be used in text cells to display mathematical symbols like $\ddot x$ or entire formulae: $$\mathcal{L}{f(t)} = \int_0^\infty\text{d}z\text{e}^{-zt}f(t)$$ Mathematics is displayed with MathJax (www.mathjax.org) and requires either an internet connection or a local installation. Instructions for a local installation can be obtained as follows: End of explanation """ import numpy as np np.tensordot? """ Explanation: Selected features of the IPython shell Help End of explanation """ np.tensordot?? """ Explanation: Description including code (if available) End of explanation """ np.ALLOW_THREADS """ Explanation: Code completion with TAB End of explanation """ 2**3 _-8 __**2 """ Explanation: Reference to earlier results End of explanation """ In, Out """ Explanation: Access to all earlier input and output End of explanation """ %lsmagic """ Explanation: Magics in IPython End of explanation """ %quickref """ Explanation: Quick reference End of explanation """ %timeit 2.5**100 import math %%timeit result = [] nmax = 100000 dx = 0.001 for n in range(nmax): result.append(math.sin(n*dx)) %%timeit nmax = 100000 dx = 0.001 x = np.arange(nmax)*dx result = np.sin(x) """ Explanation: Timing of code execution End of explanation """ from IPython.display import Image Image("./images/ipython_logo.png") from IPython.display import HTML HTML('<iframe src="http://www.ipython.org" width="700" height="500"></iframe>') """ Explanation: Extended representations IPython allows for the representation of objects in formats as different as HTML Markdown SVG PNG JPEG LaTeX End of explanation """ from IPython.display import YouTubeVideo YouTubeVideo('F4rFuIb1Ie4') """ Explanation: Even the embedding of audio and video files is possible. End of explanation """ class MyObject(object): def __init__(self, obj): self.obj = obj def __repr__(self): return ">>> {0!r} / {0!s} <<<".format(self.obj) x = MyObject('Python') print(x) """ Explanation: Python allows for a textual representation of objects by means of the __repr__ method. example: End of explanation """ class RGBColor(object): def __init__(self, r, g, b): self.colordict = {"r": r, "g":g, "b": b} def _repr_svg_(self): return '''<svg height="50" width="50"> <rect width="50" height="50" fill="rgb({r},{g},{b})" /> </svg>'''.format(**self.colordict) c = RGBColor(205, 128, 255) c from fractions import Fraction class MyFraction(Fraction): def _repr_html_(self): return "<sup>%s</sup>&frasl;<sub>%s</sub>" % (self.numerator, self.denominator) def _repr_latex_(self): return r"$\frac{%s}{%s}$" % (self.numerator, self.denominator) def __add__(a, b): """a + b""" return MyFraction(a.numerator * b.denominator + b.numerator * a.denominator, a.denominator * b.denominator) MyFraction(12, 345)+MyFraction(67, 89) from IPython.display import display_latex display_latex(MyFraction(12, 345)+MyFraction(67, 89)) """ Explanation: A rich representation of objects is possible in the IPython notebook provided the corresponding methods are defined: _repr_pretty_ _repr_html_ _repr_markdown_ _repr_latex _repr_svg_ _repr_json_ _repr_javascript_ _repr_png_ _repr_jpeg_ Note: In contrast to __repr__ only one underscore is used. 
End of explanation """ from IPython.html.widgets import interact @interact(x=(0., 10.), y=(0, 10)) def power(y, x=2): print(x**y) """ Explanation: Interaction with widgets End of explanation """ @interact(x=(0, 5), text="Python is great!!!") def f(text, x=0): for _ in range(x): print(text) from IPython.html import widgets import numpy as np import matplotlib.pyplot as plt %matplotlib inline # otherwise matplotlib graphs will be displayed in an external window @interact(harmonics=widgets.IntSlider(min=1, max=10, description='Number of harmonics', padding='2ex'), function=widgets.RadioButtons(options=("square", "sawtooth", "triangle"), description='Function') ) def f(harmonics, function): params = {"square": {"sign":1, "stepsize": 2, "func": np.sin, "power": 1}, "sawtooth": {"sign": -1, "stepsize": 1, "func": np.sin, "power": 1}, "triangle": {"sign": 1, "stepsize": 2, "func": np.cos, "power": 2} } p = params[function] xvals, nvals = np.ogrid[-2*np.pi:2*np.pi:100j, 1:harmonics+1:p["stepsize"]] yvals = np.sum(p["sign"]**nvals*p["func"](nvals*xvals)/nvals**p["power"], axis=1) plt.plot(xvals, yvals) """ Explanation: Data types and their associated widgets String (str, unicode) → Text Dictionary (dict) → Dropdown Boolean variable (bool) → Checkbox Float (float) → FloatSlider Integer (int) → IntSlider End of explanation """
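# Compact illustration (an addition, not in the original slides): one call that exercises
# each of the automatic widget mappings listed above.
@interact(text="Python",                # str -> Text field
          choice={"one": 1, "two": 2},  # dict -> Dropdown
          flag=True,                    # bool -> Checkbox
          level=(0.0, 1.0),             # float range -> FloatSlider
          count=(0, 10))                # int range -> IntSlider
def echo(text, choice, flag, level, count):
    print(text, choice, flag, level, count)
"""
Explanation: To make the type-to-widget mapping above concrete, this short sketch is an
addition (not part of the original presentation): passing one argument of each type to
interact lets the notebook choose the matching widget automatically. It assumes interact
has been imported as in the examples above.
End of explanation
"""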
BBN-Q/Auspex
doc/examples/Example-Filter-Pipeline.ipynb
apache-2.0
from QGL import * cl = ChannelLibrary(":memory:") # Create five qubits and supporting hardware for i in range(5): q1 = cl.new_qubit(f"q{i}") cl.new_APS2(f"BBNAPS2-{2*i+1}", address=f"192.168.5.{101+2*i}") cl.new_APS2(f"BBNAPS2-{2*i+2}", address=f"192.168.5.{102+2*i}") cl.new_X6(f"X6_{i}", address=0) cl.new_source(f"Holz{2*i+1}", "HolzworthHS9000", f"HS9004A-009-{2*i}", power=-30) cl.new_source(f"Holz{2*i+2}", "HolzworthHS9000", f"HS9004A-009-{2*i+1}", power=-30) cl.set_control(cl[f"q{i}"], cl[f"BBNAPS2-{2*i+1}"], generator=cl[f"Holz{2*i+1}"]) cl.set_measure(cl[f"q{i}"], cl[f"BBNAPS2-{2*i+2}"], cl[f"X6_{i}"][1], generator=cl[f"Holz{2*i+2}"]) cl.set_master(cl["BBNAPS2-1"], cl["BBNAPS2-1"].ch("m2")) cl.commit() """ Explanation: Example Q3: Managing the Filter Pipeline This example notebook shows how to use the PipelineManager to modify the signal processing on qubit data. © Raytheon BBN Technologies 2018 We initialize a slightly more advanced channel library: End of explanation """ from auspex.qubit import * """ Explanation: Creating the Default Filter Pipeline End of explanation """ pl = PipelineManager() """ Explanation: The PipelineManager is analogous to the ChannelLibrary insomuchas it provides the user with an interface to programmatically modify the filter pipeline, and to save and load different versions of the pipeline. End of explanation """ pl.create_default_pipeline() pl.show_pipeline() """ Explanation: Pipelines are fairly predictable, and will provide some subset of the functionality of demodulating, integrating, average, and writing to file. Some of these can be done on hardware, some in software. The PipelineManager can guess what the user wants for a particular qubit by inspecting which equipment has been assigned to it using the set_measure command for the ChannelLibrary. For example, this ChannelLibrary has defined X6-1000M cards for readout, and the description of this instrument indicates that the highest level available stream is integrated. Thus, the PipelineManager automatically inserts the remaining averager and writer. End of explanation """ pl.add_qubit_pipeline("q1", "demodulated") pl.show_pipeline() """ Explanation: Sometimes, for debugging purposes, one may wish to add multiple pipelines per qubit. Additional pipelines can be added explicitly by running: End of explanation """ pl.ls() """ Explanation: End of explanation """ pl["q1 integrated"].print() """ Explanation: We can print the properties of a single node End of explanation """ pl.print("q1 integrated") """ Explanation: We can print the properties of individual filters or subgraphs: End of explanation """ pl["q1 integrated"]["Average"]["Write"].filename = "new.h5" pl.print("q1 integrated") """ Explanation: Dictionary access is provided to allow drilling down into the pipelines. One can use the specific label of a filter or simple its type in this access mode: End of explanation """ cl.commit() pl.print("q1 integrated") """ Explanation: Here uncommitted changes are shown. This can be rectified in the standard way: End of explanation """ pl.commit() pl.save_as("simple") pl["q1 demodulated"].clear_pipeline() pl["q1 demodulated"].stream_type = "raw" pl.recreate_pipeline() # pl["q1"]["blub"].show_pipeline() pl.show_pipeline() """ Explanation: Programmatic Modification of the Pipeline Some simple convenience functions allow the use to easily specify complex pipeline structures. End of explanation """ pl["q1 raw"].show_pipeline() """ Explanation: Note the name change. 
We refer to the pipeline by the stream type of the first element. End of explanation """ pl["q1 raw"].add(Display(label="Raw Plot")) pl["q1 raw"]["Demodulate"].add(Average(label="Demod Average")).add(Display(label="Demod Plot")) pl.show_pipeline() """ Explanation: End of explanation """ pl.session.commit() pl.save_as("custom") pl.ls() pl.load("simple") pl.show_pipeline() """ Explanation: As with the ChannelLibrary we can list save, list, and load versions of the filter pipeline. End of explanation """ pl.ls() """ Explanation: End of explanation """ # a basic pipeline that uses 'raw' data a the beginning of the data processing def create_standard_pipeline(): pl = PipelineManager() pl.create_default_pipeline(qubits=(cl['q2'],cl['q3'])) for ql in ['q2', 'q3']: qb = cl[ql] pl[ql].clear_pipeline() pl[ql].stream_type = "raw" pl[ql].create_default_pipeline(buffers=False) pl[ql].if_freq = qb.measure_chan.autodyne_freq pl[ql]["Demodulate"].frequency = qb.measure_chan.autodyne_freq pl[ql]["Demodulate"]["Integrate"].simple_kernel = True pl[ql]["Demodulate"]["Integrate"].box_car_start = 3e-7 pl[ql]["Demodulate"]["Integrate"].box_car_stop = 1.3e-6 #pl[ql]["Demodulate"]["Integrate"].add(Write(label="RR-Writer", groupname=ql+"-int")) pl[ql]["Demodulate"]["Integrate"]["Average"].add(Display(label=ql+" - Final Average", plot_dims=0)) pl[ql]["Demodulate"]["Integrate"]["Average"].add(Display(label=ql+" - Partial Average", plot_dims=0), connector_out="partial_average") return pl # if you only want to save data integrated with the single-shot filter def create_integrated_pipeline(save_rr=False, plotting=True): pl = PipelineManager() pl.create_default_pipeline(qubits=(cl['q2'],cl['q3'])) for ql in ['q2', 'q3']: qb = cl[ql] pl[ql].clear_pipeline() pl[ql].stream_type = "integrated" pl[ql].create_default_pipeline(buffers=False) pl[ql].kernel = f"{ql.upper()}_SSF_kernel.txt" if save_rr: pl[ql].add(Write(label="RR-Writer", groupname=ql+"-rr")) if plotting: pl[ql]["Average"].add(Display(label=ql+" - Final Average", plot_dims=0)) pl[ql]["Average"].add(Display(label=ql+" - Partial Average", plot_dims=0), connector_out="partial_average") return pl # create to single-shot fidelity pipelines for two qubits def create_fidelity_pipeline(): pl = PipelineManager() pl.create_default_pipeline(qubits=(cl['q2'],cl['q3'])) for ql in ['q2', 'q3']: qb = cl[ql] pl[ql].clear_pipeline() pl[ql].stream_type = "raw" pl[ql].create_default_pipeline(buffers=False) pl[ql].if_freq = qb.measure_chan.autodyne_freq pl[ql]["Demodulate"].frequency = qb.measure_chan.autodyne_freq pl[ql].add(FidelityKernel(save_kernel=True, logistic_regression=False, set_threshold=True, label=f"Q{ql[-1]}_SSF")) pl[ql]["Demodulate"]["Integrate"].simple_kernel = True pl[ql]["Demodulate"]["Integrate"].box_car_start = 3e-7 pl[ql]["Demodulate"]["Integrate"].box_car_stop = 1.3e-6 #pl[ql]["Demodulate"]["Integrate"].add(Write(label="RR-Writer", groupname=ql+"-int")) pl[ql]["Demodulate"]["Integrate"]["Average"].add(Display(label=ql+" - Final Average", plot_dims=0)) pl[ql]["Demodulate"]["Integrate"]["Average"].add(Display(label=ql+" - Partial Average", plot_dims=0), connector_out="partial_average") return pl # optionally save the demoded data def create_RR_pipeline(plot=False, write_demods=False): pl = PipelineManager() pl.create_default_pipeline(qubits=(cl['q2'],cl['q3'])) for ql in ['q2', 'q3']: qb = cl[ql] pl[ql].clear_pipeline() pl[ql].stream_type = "raw" pl[ql].create_default_pipeline(buffers=False) pl[ql].if_freq = qb.measure_chan.autodyne_freq 
pl[ql]["Demodulate"].frequency = qb.measure_chan.autodyne_freq if write_demods: pl[ql]["Demodulate"].add(Write(label="demod-writer", groupname=ql+"-demod")) pl[ql]["Demodulate"]["Integrate"].simple_kernel = True pl[ql]["Demodulate"]["Integrate"].box_car_start = 3e-7 pl[ql]["Demodulate"]["Integrate"].box_car_stop = 1.3e-6 pl[ql]["Demodulate"]["Integrate"].add(Write(label="RR-Writer", groupname=ql+"-int")) if plot: pl[ql]["Demodulate"]["Integrate"]["Average"].add(Display(label=ql+" - Final Average", plot_dims=0)) pl[ql]["Demodulate"]["Integrate"]["Average"].add(Display(label=ql+" - Partial Average", plot_dims=0), connector_out="partial_average") return pl # save everything... using data buffers instead of writing to file def create_full_pipeline(buffers=True): pl = PipelineManager() pl.create_default_pipeline(qubits=(cl['q2'],cl['q3']), buffers=True) for ql in ['q2', 'q3']: qb = cl[ql] pl[ql].clear_pipeline() pl[ql].stream_type = "raw" pl[ql].create_default_pipeline(buffers=buffers) if buffers: pl[ql].add(Buffer(label="raw_buffer")) else: pl[ql].add(Write(label="raw-write", groupname=ql+"-raw")) pl[ql].if_freq = qb.measure_chan.autodyne_freq pl[ql]["Demodulate"].frequency = qb.measure_chan.autodyne_freq if buffers: pl[ql]["Demodulate"].add(Buffer(label="demod_buffer")) else: pl[ql]["Demodulate"].add(Write(label="demod_write", groupname=ql+"-demod")) pl[ql]["Demodulate"]["Integrate"].simple_kernel = True pl[ql]["Demodulate"]["Integrate"].box_car_start = 3e-7 pl[ql]["Demodulate"]["Integrate"].box_car_stop = 1.6e-6 if buffers: pl[ql]["Demodulate"]["Integrate"].add(Buffer(label="integrator_buffer")) else: pl[ql]["Demodulate"]["Integrate"].add(Write(label="int_write", groupname=ql+"-integrated")) pl[ql]["Demodulate"]["Integrate"]["Average"].add(Display(label=ql+" - Final Average", plot_dims=0)) pl[ql]["Demodulate"]["Integrate"]["Average"].add(Display(label=ql+" - Partial Average", plot_dims=0), connector_out="partial_average") return pl # A more complicated pipeline with a correlator # These have to be coded more manually because the correlator needs all the correlated channels specified. # Note that for tomography you're going to want to save the data variance as well, though this can be calculated # after the fact if you save the raw shots (save_rr). 
def create_tomo_pipeline(save_rr=False, plotting=True): pl = PipelineManager() pl.create_default_pipeline(qubits=(cl['q2'],cl['q3'])) for ql in ['q2', 'q3']: qb = cl[ql] pl[ql].clear_pipeline() pl[ql].stream_type = "integrated" pl[ql].create_default_pipeline(buffers=False) pl[ql].kernel = f"{ql.upper()}_SSF_kernel.txt" pl[ql]["Average"].add(Write(label='var'), connector_out='final_variance') pl[ql]["Average"]["var"].groupname = ql + '-main' pl[ql]["Average"]["var"].datasetname = 'variance' if save_rr: pl[ql].add(Write(label="RR-Writer", groupname=ql+"-rr")) if plotting: pl[ql]["Average"].add(Display(label=ql+" - Final Average", plot_dims=0)) pl[ql]["Average"].add(Display(label=ql+" - Partial Average", plot_dims=0), connector_out="partial_average") # needed for two-qubit state reconstruction pl.add_correlator(pl['q2'], pl['q3']) pl['q2']['Correlate'].add(Average(label='corr')) pl['q2']['Correlate']['Average'].add(Write(label='corr_write')) pl['q2']['Correlate']['Average'].add(Write(label='corr_var'), connector_out='final_variance') pl['q2']['Correlate']['Average']['corr_write'].groupname = 'correlate' pl['q2']['Correlate']['Average']['corr_var'].groupname = 'correlate' pl['q2']['Correlate']['Average']['corr_var'].datasetname = 'variance' return pl """ Explanation: Pipeline examples: Below are some examples of how more complicated pipelines can be constructed. Defining these as functions allows for quickly changing the structure of the data pipeline depending on the experiment being done. It also improves reproducibility and documents pipeline parameters. For example, to change the pipeline and check its construction, python pl = create_tomo_pipeline(save_rr=True) pl.show_pipeline() Hopefully the examples below will show you some of the more advanced things that can be done with the data pipelines in Auspex. End of explanation """
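# Hypothetical convenience wrapper (an addition, not part of the original example): pick
# one of the pipeline factory functions defined above by name, so the pipeline used for a
# run can be recorded as a single string in an experiment script.
PIPELINE_FACTORIES = {
    "standard": create_standard_pipeline,
    "integrated": create_integrated_pipeline,
    "fidelity": create_fidelity_pipeline,
    "rr": create_RR_pipeline,
    "full": create_full_pipeline,
    "tomo": create_tomo_pipeline,
}

def build_pipeline(kind, **kwargs):
    # e.g. build_pipeline("tomo", save_rr=True)
    return PIPELINE_FACTORIES[kind](**kwargs)

# pl = build_pipeline("tomo", save_rr=True)
# pl.show_pipeline()
"""
Explanation: As a small addition to the examples above (not part of the original notebook),
collecting the factory functions in a dictionary makes it easy to select a pipeline by name
and keeps the choice explicit and reproducible in experiment scripts. The factory names are
exactly the functions defined above; build_pipeline itself is a hypothetical helper.
End of explanation
"""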
jnarhan/Breast_Cancer
src/img_processing/RemoveArtifacts.ipynb
mit
__version__ = '0.1.0' __status__ = 'Development' __date__ = '2017-March-21' __author__ = 'Jay Narhan' import os import cv2 import copy import numpy as np from matplotlib import pyplot as plt %matplotlib inline from IPython.display import clear_output import time """ Explanation: <h1>Removing Artifacts From Mammograms</h1> This notebook can be used to segment the artificial artifacts (number plates, orientation demarcation) from the actual breast tissue in mammogram images from the MIAS and DDSM data sets. Given the thresholding constraint of human input into the process, migrating this code to UI-interface for segmentation has been integrated into the notebook. End of explanation """ CURR_DIR = os.getcwd() # Point to the PNGs to be used: IMG_DIR = '/root/Docker-Shared/Data_Resized/MIAS/' SAVE_DIR = '/root/Docker-Shared/Data_Thresholded/MIAS/' filenames = [ filename for filename in os.listdir(IMG_DIR) if filename.endswith('.png')] images = [] os.chdir(IMG_DIR) for filename in filenames: img = cv2.imread(filename, cv2.IMREAD_GRAYSCALE) images.append({'filename': filename, 'image': img}) os.chdir(CURR_DIR) print "Number of images in memory: {}".format( len(images)) """ Explanation: <h2>Read in Some Images to Work With</h2> End of explanation """ def get_hists(image, b): hist, bins = np.histogram(img.flatten(), bins=b, range=[0,255]) cdf = hist.cumsum() cdf_normalized = cdf *hist.max()/ cdf.max() return [hist, cdf_normalized] def plot(img, img_hists): plt.figure(1) plt.subplot(121) plt.imshow(img, cmap='gray') plt.subplot(122) plt.plot(img_hists[1], color = 'b') plt.plot(img_hists[0], color = 'r') plt.xlim([0,256]) plt.legend(('cdf','histogram'), loc = 'upper left') plt.subplots_adjust(top=0.92, bottom=0.08, left=0.10, right=0.95, hspace=0.25, wspace=0.35) """ Explanation: <h2>Processing Images</h2> End of explanation """ clahe_images = [] clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8,8)) for i, data in enumerate(images): clahe_images.append( {'filename': data['filename'], 'clahe_img': clahe.apply(data['image'])}) print 'Total number of CLAHE images: {}'.format(np.count_nonzero(clahe_images)) img = clahe_images[321]['clahe_img'] img_hists = get_hists( img, b=256) plot(img, img_hists) """ Explanation: <h3>Image Enhancement via Equalization</h3> Equalization attempts to correct for poor contrast in images. It is commonly used in medical imaging problems. The process looks to better distribute intensities of pixel values through the image. Contrast Limited Adaptive Histogram Equalization (CLAHE) Ordinary histogram equalization and adaptive equalization for mammograms have been noted as overly enhancing noise and sharp regions in such images. CLAHE has been found to be a more effective strategy to use. 
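# Quick check (an addition, not in the original notebook): look at one raw mammogram and
# its histogram/CDF with the helpers defined above, before any enhancement is applied.
img = images[0]['image']
plot(img, get_hists(img, b=256))
"""
Explanation: Before enhancing anything, it helps to inspect a single raw image with the
get_hists and plot helpers just defined. This short cell is an addition to the original
notebook and only reuses names defined above (images, get_hists, plot).
End of explanation
"""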
End of explanation """ # Pass img_list which should be clahe_images stored as list of dict: [ {filename.png, CLAHE_IMG} ] def threshold(img_list, factor = 1, select_files = []): images_t = [] def internal(data): thresholded = cv2.threshold(data['clahe_img'], np.median(data['clahe_img']) * factor, 255, cv2.THRESH_BINARY)[1] # just the binary image _, l, s, _ = cv2.connectedComponentsWithStats(thresholded) images_t.append( {'filename': data['filename'], 'clahe_img': data['clahe_img'], 'thresh_img': thresholded, 'factor': factor, 'labels':l, # labels: contiguous regions in mammogram, labelled 'count':s[:, -1] # count: count of pixels in each discrete object }) if not select_files: print 'Processing all files' for i, data in enumerate(img_list): internal(data) else: print 'Processing select files {}'.format(select_files) for i, data in enumerate(img_list): if data['filename'] in select_files: internal(data) return images_t """ Explanation: <h3>Thresholding</h3> This section aims is the crux of the objective at hand. It aims to segment the breast from the hardware artifacts within the images. Note that as an automated heuristic for thresholding, I used the median pixel value within the mammogram. There is a need to visual preview the resulting binary image in order to tweak the appropriate threshold value to use. The non-standard structure of mammograms makes it very hard to avoid a human eye in this process. Tweaking the threshold can be achieved by altering the argument value passed to the parameter "factor'. End of explanation """ def save(fn, img, location=SAVE_DIR): print 'Saving: {}'.format(location + fn) cv2.imwrite(location + fn, img) time.sleep(2) def mask(image, labels, region): labels = copy.deepcopy(labels) # create a full, unique copy of labels for row in range(image.shape[0]): for col in range(image.shape[1]): if labels[row, col] != region: labels[row, col] = 0 # mask the artifact else: labels[row, col] = 1 # retain the breast return labels def clean_art(images_thresh): revist = [] for i, data in enumerate(images_thresh): fn, c_img, t_img, factor = data['filename'], data['clahe_img'], data['thresh_img'], data['factor'] print 'Processing File: {}'.format(fn) plt.subplot(121) plt.imshow(c_img, cmap='gray') plt.title('Original') plt.subplot(122) plt.imshow(t_img, cmap='gray') plt.title('Binary Threshold') plt.show() input1 = raw_input("Do you want to save image as-is (Y/N or Q to quit): ").lower() if input1 == 'y': save(fn, c_img) elif (input1 != 'n') & (input1 != 'q'): print (".. appending to revist") revist.append( {'filename': fn, 'factor': factor, 'thresh_img': t_img}) time.sleep(2) elif input1 == 'q': print (".. appending to revist") revist.append( {'filename': fn, 'factor': factor, 'thresh_img': t_img}) time.sleep(2) break elif input1 == 'n': print "Need to threashold" input2 = raw_input("Do you want to clear image artifact (Y/N): ").lower() if (input2 == 'n') | (input2 != 'y'): print (".. appending to revist") revist.append( {'filename': fn, 'factor': factor, 'thresh_img': t_img}) time.sleep(2) elif input2 == 'y': top_regions = np.argpartition(data['count'], -2)[-2:] top_counts = data['count'][top_regions] print 'Top region pixel counts: {}'.format(top_counts) while(True): print 'Associated top regions labels: {}'.format(top_regions) input3 = raw_input("Keep pixels in which region (S to skip): ").lower() if input3 == 's': print (".. 
appending to revist") revist.append( {'filename': fn, 'factor': factor, 'thresh_img': t_img}) time.sleep(2) break elif input3.isdigit(): input3 = int(input3) if input3 in top_regions: print 'That is a valid region selection!' # mask the region my_mask = mask( t_img, data['labels'], region=input3) image = c_img * my_mask image = np.array(image, dtype = np.uint8) thresh_image = cv2.threshold(image, np.median(image), 255, cv2.THRESH_BINARY)[1] plt.subplot(121) plt.imshow(image, cmap='gray') plt.title('Post Processing') plt.subplot(122) plt.imshow(thresh_image, cmap='gray') plt.title('Post: Binary') plt.show() input4 = raw_input("Save post processed image (Y/N): ").lower() if input4 == 'y': save(fn, image) break else: print 'That is NOT a valid region' print 'Associated top regions labels: {}'.format(top_regions) else: print 'Enter a valid digit please.' print 'Associated top regions labels: {}'.format(top_regions) clear_output() return revist images_thresh = threshold(clahe_images) print len(images_thresh) remaining = clean_art(images_thresh) remaining_fn = [item['filename'] for item in remaining] """ Explanation: The general idea is to now count the pixels that belong to the breast tissue (in most cases, this will be the largest group of contiguous cells that have pixel value equal to 1 (white) in a binary mage). With that information in hand, we can retain the breast object by some multiplication of the original or CLAHE image as follows: The index to the 'count' component needs to be selected. This index corresponds to the region we wish to retain (breast). In many cases the value of this index will be 1 as a black background will predominate and be indexed by 0. However, sometimes the breast itself may make up most the image, in which case the value to send to the function will be 0. Typically the value used is between 0-1, but not always. Hence a human eye is required. I use np.argpartition() to get the indices to the two highest counts of discrete objects in the mammograms. This corresponds to regions of interest on which to threshold/mask. End of explanation """ images_thresh2 = threshold(clahe_images, factor=2, select_files=remaining_fn) remaining_fn_2 = clean_art(images_thresh2) """ Explanation: The threshold levels used generally were 1,2,3,4 and 7 times the median pixel value in the mammogram. If the artifact has not been cleared by the last attempt (i.e. increasing the threshold value to be 7 times the median pixel value), a final attempt was performed used significantly larger values for increasing the threshold levels (up to 20 times). Note that for some mammograms comprised of mostly fatty tissues (i.e. mammograms that dominated the image space and had large amounts of dark areas - fatty areas, in the image), a factor of 0.5 times the median pixel values was found to be more effective in segmenting the artifacts. End of explanation """
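# Sketch of the escalating-threshold pass described above (an addition to the original
# notebook): keep re-thresholding only the files that still need cleaning, using
# progressively larger multiples of the median pixel value.
factors = [3, 4, 7, 20]   # factors 1 and 2 were already applied above
still_remaining = remaining_fn_2
for f in factors:
    if not still_remaining:
        break
    names = [item['filename'] for item in still_remaining]
    still_remaining = clean_art(threshold(clahe_images, factor=f, select_files=names))
"""
Explanation: The escalating sequence of threshold factors described above can also be
scripted. This loop is an addition to the original notebook, not part of it: it reuses
threshold and clean_art on whatever files are still unresolved, stepping through larger
factors until none remain (a factor of 0.5 can be tried separately for the mostly
fatty-tissue images mentioned above).
End of explanation
"""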
psas/composite-propellant-tank
Analysis/Calculations/.ipynb_checkpoints/Shrink Fit and Liner as Gasket Analysis-checkpoint.ipynb
gpl-3.0
# Import packages here: import math as m import numpy as np from IPython.display import Image import matplotlib.pyplot as plt # Properties of Materials (engineeringtoolbox.com, Cengel, Tian, DuPont, http://www.dtic.mil/dtic/tr/fulltext/u2/438718.pdf) # Coefficient of Thermal Expansion alphaAluminum = 0.0000131 # in/in/*F alphaPTFE = 0.0000478 # in/in/*F (over the range in question) # Elastic Moduli EAluminum = 10000000 # psi EAluminumCryo = 11000000 # psi EPTFE = 40000 # psi EPTFECryo = 500000 # psi # Yield Strength sigmaY_PTFE = 1300 # psi sigmaY_PTFECryo = 19000 # psi # Poisson's Ratio nuAluminum = 0.33 # in/in nuPTFE = 0.46 # in/in # Temperature Change Between Ambient and LN2 DeltaT = 389 # *F # Geometry of Parts # Main Ring Outer Radius roMain = 2.0000 # in # End Cap Inner Radius riCap = 1.3750 # in # Interfacial Radius r = 1.5000 # in # Liner Thickness t = 0.125 # in """ Explanation: Analysis of Sealing Potential of Liner Captured Between Aluminum Rings Via Shrink Fit. First Objective: Determine how much pressure will be placed on the liner at both room and cryogenic temperatures, based on the aluminum outer ring's inner diameter, an aluminum end cap's outer diameter, and the thickness of the PTFE liner. Compare this to the strength of the PTFE material at each of these temperatures. Second Objective: If the liner compression is at or exceeds the load necessary to seal the tank, determine the shrink fit pressure at the aluminum interface that can be achieved while maintaining this liner pressure. Assumptions: Since little theory has been found governing the sealing potential of radially loaded gaskets in the shape of an annulus, we take the approach that we can break down the elements of the ASME pressure vessel code equations and apply them to our geometry. The PTFE's strength is not sufficient to prevent the shrunken aluminum from returning to its original room temperature size, or deform either of the aluminum parts. In other words, the aluminum parts are rigid bodies moving relative to the PTFE as they expand or contract. The interface of the aluminum parts is at the inner diameter of the outer aluminum ring, regardless of intereference thickness. ($R = D_{i,ring}/2$) Stress concentrations in the liner that occur at the ends of the aluminum parts can be ignored. The internal pressure of the vessel will have a negligible effect on the portion of the liner that is compressed between shrink-fitted aluminum parts. End of explanation """ m = 2.00 P = 45 # psi yAmbient = 1200 # psi sigmaPTFEAmbient1 = yAmbient sigmaPTFEAmbient2 = m*P sigmaPTFEAmbient = sigmaPTFEAmbient1 """ Explanation: ASME Pressure Vessel Code Equations This code was developed for flat gaskets compressed between flanges. There are two equations, both give a value for the total bolt force needed to compress a gasket such that a seal is achieved at working pressure. Whichever equation gives a greater load for a specific design application is the one to be used. The following are the equations as they appear in Gaskets. Design, Selection, and Testing by Daniel E. Czernik: $$W_{m_2} = \pi bGy$$ $$W_{m_1} = \frac{\pi}{4}G^2P + 2\pi bGmP$$ Where $W_{m_1}$ and $W_{m_2}$ are total bolt loads in pounds, $b$ is the effective gasket contact seating width in inches, $G$ is the mean diameter of the gasket contact face in inches, $y$ is the gasket contact surface unit seating load in pounds per square inch, $m$ is a dimensionless gasket factor, and $P$ is the maximum working internal pressure of the vessel. 
For a typical round gasket, the effective gasket contact seating width is the difference between the outer and inner radii, or $r_o - r_i$. The mean diameter of such a gasket is simply the sum of the inner and outer radii, or $r_o + r_i$. When multiplied together, the product is the difference of the squares of the radii: $(r_o - r_i)(r_o + r_i) = (r_o^2 - r_i^2)$ This leads to the conslusion that $\pi bG = \pi (r_o^2 - r_i^2)$ is simply the area of one side of the gasket. This means that the $y$ is the only part of the first equation that we need for our analysis. We need at least enough pressure from the shrink fit to reach the seating load. For the second equation, the first term is simply the prodcut of the internal working pressure and the cross sectional area normal to the central axis of the tank. In other words, this is the bolt load needed to prevent the ends from losing contact with the rest of the tank. Since we are loading the liner in the radial direction, we can also remove this term from our analysis of the liner as a gasket. The second term is similar to the first equation, but has a factor of two to account for both surfaces of the gasket that provide paths for escaping fluid, and $y$ is replaced with $mP$. Values for $m$ and $y$ appear to be imperically determined for different materials of varied thicknesses, and are found in tables provided by the ASME or material manufacturer. Dupont's PTFE Handbook lists PTFE as having a gasket factor, $m$, of 2.00 and a seating load, $y$, of 1200 psi for a 1/8" gasket. Because there will only be a fluid path on one side of the liner, we assume the factor of two can be removed. It should be noted that $y$ is for PTFE at ambient temperature, and is said to have the benefit of an operating temperature range from cryogenic to 450 &deg;C, but an estimate for a seating load at cryogenic temperature may need to be developed. Since the pressure vessel code equations are for bolt load (force), and not stress in the gasket directly, both sides of each can be divided by area. This simplifies the equations to suit our needs. This is similar to the simplified procedure found on page 111 of Gaskets by Czernik. The modified equations we will then apply to our gasket geometry are as follows: $$\sigma_{PTFE, amb1} = y$$ $$\sigma_{PTFE, amb2} = mP$$ Since our operating pressure is only 45 psi, the first equation gives by far the greatest value. Estimation of Minimum Seating Stress for PTFE at Cryogenic Temperature I don't know what to do with this thing, or even if it is necessary. I may return to this another time Czernik gives ranges of values for minimum seating stress for some common metals and flat gaskets (i.e. not corrugated) in Table 3.4. These values are: Aluminum: 10000 - 200000 psi Copper: 15000 - 45000 psi Carbon Steel: 30000 - 70000 psi Stainless Steel: 35000 - 95000 psi These ranges account for variations in hardness or yield strength. Given some values found at Engineering Toolbox, we can find a value for what percent of yield strength these values are on average, and use that to determine a seating stress for PTFE at cryogenic temperature. 
The following is a list of nominal yield stresses from Engineering Toolbox: Aluminum: 13778 psi Copper: 10152 psi Carbon Steel: 36258 psi Stainless Steel: 72806 psi End of explanation """ deltaLinerAmbient = (sigmaPTFEAmbient/EPTFE)*t print('The change in liner thickness due to compression must be', "%.4f" % deltaLinerAmbient, 'in, in order to achieve a proper seal.') """ Explanation: Change in Liner Thickness Necessary to Achieve Seating Stress The radial stress due to the compression of the liner follows Hooke's Law: $$\sigma_{PTFE, amb} = \frac{\delta_{Liner, amb}}{t_{amb}}E_{PTFE, amb}$$ Where $t_{amb}$ is the liner thickness at ambient temperature before compression. solving this equation for the change in liner thickness yields: $$\delta_{Liner, amb} = \frac{\sigma_{PTFE, amb}}{E_{PTFE, amb}}t_{amb}$$ End of explanation """ rCryo = r - r*alphaAluminum*DeltaT Deltar = r - rCryo print('The maximum change in end cap radius equals: ', "%.4f" % DeltaR, 'in') print('This means that the maximum theoretical interference for the shrink fit is ', "%.4f" % DeltaR, 'in') """ Explanation: To know if this can be achieved, we must examine how much we can actually shrink the end cap, and whether or not that will allow enough clearance to fit the end cap into place before expansion. Maximum Thermal Contraction of End Cap Thermal expansion/contraction can be thought of as a scaling of the position vectors of all the points in a body of uniform composition relative to its centroid. The thermal change in radius of the end cap is thus given by the following linear thermal expansion relationship: $$r_{cyro} = r_{amb} - r_{amb}\alpha_{Al}\Delta T$$ The maximum change in radius is then simply the absolute value of the thermal change in radius from ambient to cryogenic temperature: $$\Delta r = r_{amb}\alpha_{Al}\Delta T$$ End of explanation """ deltaLinerAmbientMax = DeltaR - 0.00125 print('The achievable ambient temperature change in liner thickness due to shrink fitting is', "%.4f" % deltaLinerAmbientMax, 'in') """ Explanation: Clearance for the End Cap The above number does not account for some clearance to allow the end cap to slide into the liner. According to Engineer's Toolbox, a 3" diameter hole needs a minumum of 0.0025" of clearance to allow a shaft through with a free fit. This means that 0.00125" must be subtracted from the interference shrink fit to arrive at an achievable change in liner thickness due to shrink fitting. Thus: $$\delta_{Liner, amb, max} = \Delta r - 0.00125"$$ End of explanation """ tCryo = t - t*alphaPTFE*DeltaT print ('The liner thickness at cryogenic temperature is', "%.4f" % tCryo,'in') deltat = t*alphaPTFE*DeltaT print ('The change in liner thickness due to thermal contraction is', "%.4f" % deltat, 'in') tGap = t - deltaLinerAmbient print ('The ambient temperature liner gap width is', "%.4f" % tGap, 'in') deltaGap = tGap*alphaAluminum*DeltaT print ('The change in gap width is', "%.4f" % deltaGap, 'in') deltaLinerCryo = deltaLinerAmbient + deltaGap - deltat print ('The total change in liner thickness at cryogenic temperature is', "%.4f" % deltaLinerCryo, 'in') sigmaPTFECryo = (deltaLinerCryo/tCryo)*EPTFECryo print('Thus, the maximum achievable pressure exerted on the PTFE at cryogenic temperature is', "%.2f" % sigmaPTFECryo, 'psi') """ Explanation: The necessary change in liner thickness is less than the achievable change in liner thickness. Acording to the ASME pressure vessel code, the seal we need is achievable. 
However, the thermal contraction in the liner and the gap between aluminum rings that captures the liner, as well as the enormous change in yield strength and elastic modulus for PTFE when going to cryogenic temperatures may pose a problem. To be thorough, let's take a look at the liner stress under cryogenic conditions. Pressure Exerted on Liner at Cryogenic Temperature The liner will contract at cryogenic temperature, which will serve to reduce its stress due to the shrink fit, while the increase in elastic modulus at that temperature will increase its stress. This means that the liner will have a different stress state one once the tank is filled with LN2. The liner thickness at cryogenic temperature is: $$t_{cryo} = t_{amb} - t_{amb}\alpha_{PTFE}\Delta T$$ The thermal contraction of the liner thickness is given by: $$\delta_t = t_{amb}\alpha_{PTFE}\Delta T$$ The gap between aluminum rings will contract as well, leaving slightly less room for the liner at cryogenic temperature. The gap size at ambient temperature is specified to be the difference between the liner thickness and the change in liner thickness: $$t_{gap} = t_{amb} - \delta_{Liner, amb}$$ The change in gap width is: $$\delta_{gap} = t_{gap}\alpha_{Al}\Delta T$$ The change in liner thickness at cryogenic temperature is then given by: $$\delta_{Liner, cryo} = \delta_{Liner, amb} + \delta_{gap} - \delta_t$$ The Liner's radial stress at cryogenic temperature is given by: $$\sigma_{PTFE, cryo} = \frac{\delta_{Liner, cryo}}{t_{cryo}}E_{PTFE, cryo}$$ End of explanation """ h = 0.125 mu = 1.2 deltaInterference = ((2*P*r**4)/(mu*h*EAluminum))*((roMain**2 - riCap**2)/((roMain**2 - r**2)*(r**2 - riCap**2))) print('The intereference thickness needed to overcome the pressure force on the end caps is', "%.4f" % deltaInterference, 'in') """ Explanation: Although the load on the PTFE at cryogenic temperature is greater, the yield strength of the PTFE is much greater at 19000 psi. The ratio of load to yield strength at ambient temperature is then much higher than the ratio at cryogenic temperature. We do have a value to use for seating stress of cryogenic PTFE, so we must either trust the ASME code for our extreme temperature conditions, or imperically test for the seating stress, which we do not have time to do before this project is terminated. Can We Use the Excess Space Allowed By the Thermal Contraction of the End Cap to Hold It In Place, and Dispense With the Bolts? Right now the contact surface between the two aluminum parts is a cylidrical surface with a nominal diameter of 3 inches, and a height of 0.125 inches. Engineering Toolbox gives an aluminum-aluminum coefficient of friction of approximately 1.2 for clean, dry surfaces. This allows us to find the necessary pressure to secure the end cap in place using the shrink fit only and a factor of safety of 2. 
$$A_{contact} = 2\pi rh$$ The normal force caused by the shrink fit is the product of the shrink fit contact pressure and contact area: $$F_N = P_{shrink}A_{contact} = 2P_{shrink}\pi rh$$ The friction force is then the product of the normal force and the coefficient of friction: $$F_friction = 2P_{shrink}\mu \pi rh$$ The force that the friction must overcome (with a factor of safety of 2) is $$F_{cap} = 2PA_{cap} = 2P\pi r^2$$ Equating these forces and solving for the shrink fit pressure gives: $$P_{shrink} = \frac{Pr}{\mu h}$$ Shigley's Mechanical Engineering Design gives the shrink fit pressure as $$P_{shrink} = \frac{E_{Aluminum}\delta_{interference}}{2r^3}[\frac{(r_o^2 - r^2)(r^2 - r_i^2)}{r_o^2 - r_i^2}]$$ Equating these and solving for the intereference thickness, $\delta_{interference}$, yields the following equation: $$\delta_{interference} = \frac{2Pr^4}{\mu hE_{Aluminum}}[\frac{r_o^2 - r_i^2}{(r_o^2 - r^2)(r^2 - r_i^2)}]$$ End of explanation """
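# Cross-check sketch (an addition to the original analysis): evaluate Shigley's shrink-fit
# pressure at the interference found above, and compare the resulting friction force with
# twice the pressure force on the end cap. Only quantities defined earlier are used; np.pi
# is used because the name m was reassigned to the gasket factor above.
PShrinkCheck = (EAluminum*deltaInterference/(2.0*r**3)) * ((roMain**2 - r**2)*(r**2 - riCap**2)/(roMain**2 - riCap**2))
FFriction = mu*PShrinkCheck*2.0*np.pi*r*h
FCap = 2.0*P*np.pi*r**2
print('Shrink fit contact pressure:', "%.1f" % PShrinkCheck, 'psi')
print('Available friction force:   ', "%.1f" % FFriction, 'lb')
print('Required force (FS of 2):   ', "%.1f" % FCap, 'lb')
"""
Explanation: As an addition to the original analysis, this short cross-check evaluates the
shrink-fit contact pressure from Shigley's relation at the interference computed above and
confirms that the corresponding friction force matches the end-cap pressure force with the
factor of safety of 2; by construction the two forces should agree. All symbols reuse
values defined earlier in the notebook.
End of explanation
"""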
sdpython/ensae_teaching_cs
_doc/notebooks/td1a_home/2020_carte.ipynb
mit
from jyquickhelper import add_notebook_menu add_notebook_menu() %matplotlib inline """ Explanation: Tech - carte Faire une carte, c'est toujours compliqué. C'est simple jusqu'à ce qu'on s'aperçoive qu'on doit récupérer la description des zones administratives d'un pays, fournies parfois dans des coordonnées autres que longitude et latitude. Quelques modules utiles : cartopy : surcouche de matplotlib pour faire des dessins avec des coordonnées géographiques bokeh : pour tracer des cartes interactives pyproj : conversion entre systèmes de coordonnées shapely : manipuler des polygones géographiques (union, intersection, ...) pyshp : lire ou écrire des polygones géographiques geopandas : manipulation de dataframe avec des coordonnées géographiques Quelques notebooks intéressants : * Tracer une carte en Python avec bokeh * Tracer une carte en Python * Données carroyées et OpenStreetMap * Carte de France avec les départements * Carte de France avec les départements (2) End of explanation """ # https://www.data.gouv.fr/fr/datasets/donnees-hospitalieres-relatives-a-lepidemie-de-covid-19/ from pandas import read_csv url = "https://www.data.gouv.fr/fr/datasets/r/63352e38-d353-4b54-bfd1-f1b3ee1cabd7" covid = read_csv(url, sep=";") covid.tail() last_day = covid.loc[covid.index[-1], "jour"] last_day last_data = covid[covid.jour == last_day].groupby("dep").sum() last_data.shape last_data.describe() last_data.head() last_data.tail() """ Explanation: Exposé On télécharge des données hospitalières par départements. Données COVID End of explanation """ import geopandas # dernier lien de la page (format shapefiles) url = "https://www.data.gouv.fr/en/datasets/r/ed02b655-4307-4db4-b1ca-7939145dc20f" geo = geopandas.read_file(url) geo.tail() """ Explanation: Données départements On récupère ensuite la définition géographique des départements. End of explanation """ import matplotlib.pyplot as plt fig, ax = plt.subplots(1, 1, figsize=(5, 4)) geo.plot(ax=ax, color='white', edgecolor='black'); """ Explanation: Il faudrait aussi fusionner avec la population de chaque département. Ce sera pour une autre fois. Carte End of explanation """ codes = [_ for _ in set(geo.code_depart) if len(_) < 3] metropole = geo[geo.code_depart.isin(codes)] metropole.tail() fig, ax = plt.subplots(1, 1, figsize=(5, 4)) metropole.plot(ax=ax, color='white', edgecolor='black') ax.set_title("%s départements" % metropole.shape[0]); """ Explanation: On enlève tous les départements à trois chiffres. End of explanation """ merged = last_data.reset_index(drop=False).merge(metropole, left_on="dep", right_on="code_depart") merged.shape merged.tail() fig, ax = plt.subplots(1, 1, figsize=(5, 4)) merged.hist('rea', bins=20, ax=ax) ax.set_title("Distribution rea"); """ Explanation: Carte COVID End of explanation """ merged.sort_values('rea').tail() geomerged = geopandas.GeoDataFrame(merged) from mpl_toolkits.axes_grid1 import make_axes_locatable fig, ax = plt.subplots(1, 1) # ligne à ajouter pour avoir une légende ajustée à la taille du graphe cax = make_axes_locatable(ax).append_axes("right", size="5%", pad=0.1) geomerged.plot(column="rea", ax=ax, edgecolor='black', legend=True, cax=cax) ax.set_title("Réanimations pour les %d départements" % metropole.shape[0]); """ Explanation: Les régions les plus peuplées ont sans doute la plus grande capacité hospitalière. Il faudrait diviser par cette capacité pour avoir une carte qui ait un peu plus de sens. Comme l'idée est ici de simplement tracer la carte, on ne calculera pas de ratio. 
End of explanation """ capacite = covid.groupby(["jour", "dep"]).sum().groupby("dep").max() capacite.head() capa_merged = merged.merge(capacite, left_on="dep", right_on="dep") capa_merged["occupation"] = capa_merged["rea_x"] / capa_merged["rea_y"] capa_merged.head(n=2).T geocapa = geopandas.GeoDataFrame(capa_merged) fig, ax = plt.subplots(1, 1) # ligne à ajouter pour avoir une légende ajustée à la taille du graphe cax = make_axes_locatable(ax).append_axes("right", size="5%", pad=0.1) geocapa.plot(column="occupation", ax=ax, edgecolor='black', legend=True, cax=cax) ax.set_title("Occupations en réanimations pour les %d départements" % metropole.shape[0]); """ Explanation: La création de carte a toujours été plus ou moins compliqué. Les premiers notebooks que j'ai créés sur le sujet étaient beaucoup plus complexe. geopandas a simplifié les choses. Son développement a commencé entre 2013 et a bien évolué depuis. Et j'ai dû passer quelques heures à récupérer les contours des départements il y a cinq ans. On peut également récupérer la capacité maximale de chaque département en regardant sur le passé. End of explanation """
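# Small addition (not in the original notebook): rank the departments by occupation ratio,
# which complements the map with an explicit ordering.
geocapa.sort_values("occupation", ascending=False).head(10)
"""
Explanation: As a small addition to the original notebook, sorting geocapa by the
occupation ratio lists the departments closest to their observed maximum intensive-care
load, which is easier to read off than the colour scale of the map alone.
End of explanation
"""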
eecs445-f16/umich-eecs445-f16
handsOn_lecture12_bagging-boosting/handsOn12.ipynb
mit
import pandas as pd df = pd.read_csv('forest-cover-type.csv') df.head() """ Explanation: Recall: Boosting AdaBoost Algorithm An iterative algorithm for "ensembling" base learners Input: ${(\mathbf{x}i, y_i)}{i = 1}^n, T, \mathscr{F}$, base learner Initialize: $\mathbf{w}^{1} = (\frac{1}{n}, ..., \frac{1}{n})$ For $t = 1, ..., T$ $\mathbf{w}^{t} \rightarrow \boxed{\text{base learner finds} \quad \arg\min_{f \in \mathscr{F}} \sum \limits_{i = 1}^n w^t_i \mathbb{1}_{{f(\mathbf{x}_i) \neq y_i}}} \rightarrow f_t$ $\alpha_t = \frac{1}{2}\text{ln}\left(\frac{1 - r_t}{r_t}\right)$ where $r_t := e_{\mathbf{w}^t}(f_t) = \frac 1 n \sum \limits_{i = 1}^n w_i \mathbf{1}_{{f(\mathbf{x}_i) \neq y_i}} $ $w_i^{t + 1} = \frac{w_i^t \exp \left(- \alpha_ty_if_t(\mathbf{x}_i)\right)}{z_t}$ where $z_t$ normalizes. Output: $h_T(\mathbf{x}) = \text{sign}\left(\sum \limits_{t = 1}^T \alpha_t f_t(\mathbf{x})\right)$ Adaboost through Coordinate Descent It is often said that we can view Adaboost as "Coordinate Descent" on the exponential loss function. Question: Can you figure out what that means? Why is Adaboost doing coordinate descent? Hint 1: You need to figure out the objective function being minimized. For simplicity, assume there are a finite number of weak learners in $\mathscr{F}$ Hint 2: Recall that the exponential loss function is $\ell(h; (x,y)) = \exp(-y h(x))$ Hint 3: Let's write down the objective function being minimized. For simplicity, assume there are a finite number of weak learners in $\mathscr{F}$, say indexed by $j=1, \ldots, m$. Given a weight vector $\vec{\alpha}$, exponential loss over the data for this $\vec{\alpha}$ is: $$\text{Loss}(\vec{\alpha}) = \sum_{i=1}^n \exp \left( - y_i \left(\sum_{j=1}^m \alpha_j h_j(\vec{x}_i)\right)\right)$$ Coordinate descent chooses the smallest coordiante of $\nabla L(\vec{\alpha})$ and updates only this coordinate. Which coordinate is chosen? Bagging classifiers Let's explore how bagging (bootstrapped aggregation) works with classifiers to reduce variance, first by evaluating off the shelf tools and then by implementing our own basic bagging classifier. In both examples we'll be working with the dataset from the forest cover type prediction Kaggle competition, where the aim is to build a multi-class classifier to predict the forest cover type of a 30x30 meter plot of land based on cartographic features. See their notes about the dataset for more background. Exploring bagging Loading and splitting the dataset First, let's load the dataset: End of explanation """ X, y = df.iloc[:, 1:-1].values, df.iloc[:, -1].values from sklearn.cross_validation import train_test_split X_train, X_test, y_train, y_test = train_test_split( X, y, test_size=0.6, random_state=0) """ Explanation: Now we extract the X/y features and split them into a 60/40 train / test split so that we can see how well the training set performance generalizes to a heldout set. 
End of explanation """ from sklearn.tree import DecisionTreeClassifier from sklearn.ensemble import BaggingClassifier from sklearn.metrics import accuracy_score models = [ ('tree', DecisionTreeClassifier(random_state=0)), ('bagged tree', BaggingClassifier( DecisionTreeClassifier(random_state=0), random_state=0, n_estimators=10)) ] for label, model in models: model.fit(X_train, y_train) print("{} training|test accuracy: {:.2f} | {:.2f}".format( label, accuracy_score(y_train, model.predict(X_train)), accuracy_score(y_test, model.predict(X_test)))) """ Explanation: Evaluating train/test with and without bagging Now let's use an off the shelf decision tree classifier and compare its train/test performance with a bagged decision tree. End of explanation """ # your code goes here! """ Explanation: Note that both models were able to (nearly) fit the training set perfectly, and that bagging substantially improves test set performance (reduces variance). Hyperparameters Let's look at two hyperparametes associated with the bagging classifier: num_estimators controls how many classifiers make up the ensemble max_samples controls how many samples each classifier in the ensemble draws How many classifiers do we need to reduce variance? The default number of estimators is 10; explore the performance of the bagging classifier with a range values. How many classifiers do we need to reduce variance? What is the point of diminishing returns for this dataset? End of explanation """ # your code goes here! """ Explanation: How much of the dataset does each classifier need? By default, max_samples is set to 1.0, which means each classifier gets a number of samples equal to the size of the training set. How do you suppose bagging manages to reduce variance while still using the same number of samples? Explore how the performance varies as you range max_samples (note, you can use float values between 0.0 and 1.0 to choose a percentage): End of explanation """ from sklearn.tree import DecisionTreeClassifier from sklearn.base import BaseEstimator import numpy as np class McBaggingClassifier(BaseEstimator): def __init__(self, classifier_factory=DecisionTreeClassifier, num_classifiers=10): self.classifier_factory = classifier_factory self.num_classifiers = num_classifiers def fit(self, X, y): # create num_classifier classifiers calling classifier_factory, each # fitted with a different sample from X return self def predict(self, X): # get the prediction for each classifier, take a majority vote return np.ones(X.shape[0]) """ Explanation: Implementing Bagging We've shown the power of bagging, now let's appreciate its simplicity by implementing our own bagging classifier right here! End of explanation """ our_models = [ ('tree', DecisionTreeClassifier(random_state=0)), ('our bagged tree', McBaggingClassifier( classifier_factory=lambda: DecisionTreeClassifier(random_state=0) )) ] for label, model in our_models: model.fit(X_train, y_train) print("{} training|test accuracy: {:.2f} | {:.2f}".format( label, accuracy_score(y_train, model.predict(X_train)), accuracy_score(y_test, model.predict(X_test)))) """ Explanation: You should be able to achieve similar performance to scikit-learn's implementation: End of explanation """
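# One possible way to complete the exercise above (a sketch, not the official course
# solution): train each base classifier on a bootstrap resample and predict by majority
# vote. Defined under a new name so the original skeleton is left untouched.
import numpy as np
from scipy.stats import mode
from sklearn.base import BaseEstimator
from sklearn.tree import DecisionTreeClassifier

class McBaggingClassifierSketch(BaseEstimator):
    def __init__(self, classifier_factory=DecisionTreeClassifier, num_classifiers=10):
        self.classifier_factory = classifier_factory
        self.num_classifiers = num_classifiers

    def fit(self, X, y):
        n = X.shape[0]
        self.classifiers_ = []
        for _ in range(self.num_classifiers):
            idx = np.random.choice(n, size=n, replace=True)  # bootstrap sample
            clf = self.classifier_factory()
            clf.fit(X[idx], y[idx])
            self.classifiers_.append(clf)
        return self

    def predict(self, X):
        # per-classifier predictions stacked into shape (num_classifiers, n_samples)
        preds = np.array([clf.predict(X) for clf in self.classifiers_])
        return mode(preds, axis=0)[0].ravel()  # majority vote
"""
Explanation: For reference, this sketch (an addition to the notebook, not the official
course solution) fills in the fit and predict stubs of the exercise: each base classifier
is fit on a bootstrap resample of the training set, and class predictions are combined by
a majority vote across classifiers. It uses a new class name so the exercise skeleton
above stays as written.
End of explanation
"""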
dsacademybr/PythonFundamentos
Cap07/DesafioDSA/Missao3/missao3.ipynb
gpl-3.0
# Python language version
from platform import python_version
print('Python version used in this Jupyter Notebook:', python_version())
"""
Explanation: <font color='blue'>Data Science Academy - Python Fundamentos - Chapter 7</font>
Download: http://github.com/dsacademybr
End of explanation
"""

class Grid(object):
    def find_path(self, matrix):
        # Implement your solution here
        pass
"""
Explanation: Mission: Implement an algorithm to move a robot from the top-left corner to the bottom-right corner of a grid.
Difficulty level: Medium
Assumptions
Are there restrictions on how the robot moves?
     * The robot can only move right and down
Are some cells invalid (out of bounds)?
     * Yes
Can we assume that the start and end cells are valid cells?
     * Yes
Is this a rectangular grid? That is, the grid is not jagged?
     * Yes
Will there always be a valid way for the robot to reach the bottom-right corner?
     * No, return None
Can we assume the inputs are valid?
     * No
Can we assume this fits in memory?
     * Yes
Test Cases
<pre>
o = valid cell
x = invalid cell
   0  1  2  3
0  o  o  o  o
1  o  x  o  o
2  o  o  x  o
3  x  o  o  o
4  o  o  x  o
5  o  o  o  x
6  o  x  o  x
7  o  x  o  o
</pre>
General case
Expected output = [(0, 0), (1, 0), (2, 0), (2, 1), (3, 1), (4, 1), (5, 1), (5, 2), (6, 2), (7, 2), (7, 3)]
No valid path, e.g. row 7, col 2 is invalid
No input
Empty matrix
Solution
End of explanation
"""

%%writefile missao3.py
from nose.tools import assert_equal

class TestGridPath(object):

    def test_grid_path(self):
        grid = Grid()
        assert_equal(grid.find_path(None), None)
        assert_equal(grid.find_path([[]]), None)
        max_rows = 8
        max_cols = 4
        matrix = [[1] * max_cols for _ in range(max_rows)]
        matrix[1][1] = 0
        matrix[2][2] = 0
        matrix[3][0] = 0
        matrix[4][2] = 0
        matrix[5][3] = 0
        matrix[6][1] = 0
        matrix[6][3] = 0
        matrix[7][1] = 0
        result = grid.find_path(matrix)
        expected = [(0, 0), (1, 0), (2, 0), (2, 1), (3, 1), (4, 1), (5, 1), (5, 2), (6, 2), (7, 2), (7, 3)]
        assert_equal(result, expected)
        matrix[7][2] = 0
        result = grid.find_path(matrix)
        assert_equal(result, None)
        print('Your solution ran successfully! Congratulations!')

def main():
    test = TestGridPath()
    test.test_grid_path()

if __name__ == '__main__':
    main()

%run -i missao3.py
"""
Explanation: Testing the Solution
End of explanation
"""
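# One possible solution sketch (an addition to this notebook, not the official DSA answer
# key): dynamic programming over the grid, moving only right or down and skipping invalid
# (0) cells. Running this cell redefines Grid, so the test above can be re-run against it.
class Grid(object):

    def find_path(self, matrix):
        if matrix is None or not matrix or not matrix[0]:
            return None
        rows, cols = len(matrix), len(matrix[0])
        if matrix[0][0] == 0 or matrix[rows - 1][cols - 1] == 0:
            return None
        # path_to[r][c] holds one path that reaches (r, c), or None if unreachable
        path_to = [[None] * cols for _ in range(rows)]
        path_to[0][0] = [(0, 0)]
        for r in range(rows):
            for c in range(cols):
                if matrix[r][c] == 0 or (r, c) == (0, 0):
                    continue
                from_up = path_to[r - 1][c] if r > 0 else None
                from_left = path_to[r][c - 1] if c > 0 else None
                best = from_up if from_up is not None else from_left
                if best is not None:
                    path_to[r][c] = best + [(r, c)]
        return path_to[rows - 1][cols - 1]
"""
Explanation: This solution sketch is an addition to the original notebook, not the course's
official answer key. Each reachable cell stores one path that reaches it, preferring the
move from above and falling back to the move from the left, so the bottom-right entry is
either a complete path or None when the robot cannot get there. With that preference it
reproduces the expected output listed in the test above.
End of explanation
"""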
c22n/ion-channel-ABC
docs/examples/human-atrial/nygren_isus_original.ipynb
gpl-3.0
import os, tempfile import logging import matplotlib as mpl import matplotlib.pyplot as plt import seaborn as sns import numpy as np from ionchannelABC import theoretical_population_size from ionchannelABC import IonChannelDistance, EfficientMultivariateNormalTransition, IonChannelAcceptor from ionchannelABC.experiment import setup from ionchannelABC.visualization import plot_sim_results, plot_kde_matrix_custom import myokit from pyabc import Distribution, RV, History, ABCSMC from pyabc.epsilon import MedianEpsilon from pyabc.sampler import MulticoreEvalParallelSampler, SingleCoreSampler from pyabc.populationstrategy import ConstantPopulationSize """ Explanation: ABC calibration of $I_\text{Kur}$ in Nygren model to original dataset. Note the term $I_\text{sus}$ for sustained outward Potassium current is used throughout the notebook. End of explanation """ from experiments.isus_wang import wang_act_and_kin from experiments.isus_firek import (firek_inact) from experiments.isus_nygren import (nygren_inact_kin, nygren_rec) modelfile = 'models/nygren_isus.mmt' """ Explanation: Initial set-up Load experiments used for original dataset calibration: - Steady-state activation [Wang1993] - Activation time constant [Wang1993] - Steady-state inactivation [Firek1995] - Inactivation time constant [Nygren1998] - Recovery time constant [Nygren1998] End of explanation """ from ionchannelABC.visualization import plot_variables sns.set_context('talk') V = np.arange(-100, 40, 0.01) nyg_par_map = {'ri': 'isus.r_inf', 'si': 'isus.s_inf', 'rt': 'isus.tau_r', 'st': 'isus.tau_s'} f, ax = plot_variables(V, nyg_par_map, modelfile, figshape=(2,2)) """ Explanation: Plot steady-state and time constant functions of original model End of explanation """ observations, model, summary_statistics = setup(modelfile, wang_act_and_kin) assert len(observations)==len(summary_statistics(model({}))) g = plot_sim_results(modelfile, wang_act_and_kin) """ Explanation: Activation gate ($r$) calibration Combine model and experiments to produce: - observations dataframe - model function to run experiments and return traces - summary statistics function to accept traces End of explanation """ limits = {'isus.p1': (-100, 100), 'isus.p2': (1e-7, 50), 'log_isus.p3': (-5, 0), 'isus.p4': (-100, 100), 'isus.p5': (1e-7, 50), 'log_isus.p6': (-6, -1)} prior = Distribution(**{key: RV("uniform", a, b - a) for key, (a,b) in limits.items()}) # Test this works correctly with set-up functions assert len(observations) == len(summary_statistics(model(prior.rvs()))) """ Explanation: Set up prior ranges for each parameter in the model. See the modelfile for further information on specific parameters. Prepending `log_' has the effect of setting the parameter in log space. 
End of explanation """ db_path = ("sqlite:///" + os.path.join(tempfile.gettempdir(), "nygren_isus_rgate_original.db")) logging.basicConfig() abc_logger = logging.getLogger('ABC') abc_logger.setLevel(logging.DEBUG) eps_logger = logging.getLogger('Epsilon') eps_logger.setLevel(logging.DEBUG) pop_size = theoretical_population_size(2, len(limits)) print("Theoretical minimum population size is {} particles".format(pop_size)) abc = ABCSMC(models=model, parameter_priors=prior, distance_function=IonChannelDistance( exp_id=list(observations.exp_id), variance=list(observations.variance), delta=0.05), population_size=ConstantPopulationSize(500), summary_statistics=summary_statistics, transitions=EfficientMultivariateNormalTransition(), eps=MedianEpsilon(initial_epsilon=100), sampler=MulticoreEvalParallelSampler(n_procs=16), acceptor=IonChannelAcceptor()) obs = observations.to_dict()['y'] obs = {str(k): v for k, v in obs.items()} abc_id = abc.new(db_path, obs) history = abc.run(minimum_epsilon=0., max_nr_populations=100, min_acceptance_rate=0.01) """ Explanation: Run ABC calibration End of explanation """ history = History('sqlite:///results/nygren/isus/original/nygren_isus_rgate_original.db') history.all_runs() df, w = history.get_distribution() df.describe() sns.set_context('poster') mpl.rcParams['font.size'] = 14 mpl.rcParams['legend.fontsize'] = 14 g = plot_sim_results(modelfile, wang_act_and_kin, df=df, w=w) plt.tight_layout() import pandas as pd N = 100 nyg_par_samples = df.sample(n=N, weights=w, replace=True) nyg_par_samples = nyg_par_samples.set_index([pd.Index(range(N))]) nyg_par_samples = nyg_par_samples.to_dict(orient='records') sns.set_context('talk') mpl.rcParams['font.size'] = 14 mpl.rcParams['legend.fontsize'] = 14 f, ax = plot_variables(V, nyg_par_map, 'models/nygren_isus.mmt', [nyg_par_samples], figshape=(2,2)) from ionchannelABC.visualization import plot_kde_matrix_custom import myokit import numpy as np m,_,_ = myokit.load(modelfile) originals = {} for name in limits.keys(): if name.startswith("log"): name_ = name[4:] else: name_ = name val = m.value(name_) if name.startswith("log"): val_ = np.log10(val) else: val_ = val originals[name] = val_ sns.set_context('paper') g = plot_kde_matrix_custom(df, w, limits=limits, refval=originals) plt.tight_layout() """ Explanation: Analysis of results End of explanation """ observations, model, summary_statistics = setup(modelfile, firek_inact, nygren_inact_kin, nygren_rec) assert len(observations)==len(summary_statistics(model({}))) sns.set_context('talk') g = plot_sim_results(modelfile, firek_inact, nygren_inact_kin, nygren_rec) limits = {'isus.q1': (-100, 100), 'isus.q2': (1e-7, 50), 'isus.q3': (0., 1.), 'log_isus.q4': (-4, 1), 'isus.q5': (-100, 100), 'isus.q6': (1e-7, 50), 'log_isus.q7': (-3, 2)} prior = Distribution(**{key: RV("uniform", a, b - a) for key, (a,b) in limits.items()}) # Test this works correctly with set-up functions assert len(observations) == len(summary_statistics(model(prior.rvs()))) """ Explanation: Inactivation gate ($s$) calibration End of explanation """ db_path = ("sqlite:///" + os.path.join(tempfile.gettempdir(), "nygren_isus_sgate_original.db")) logging.basicConfig() abc_logger = logging.getLogger('ABC') abc_logger.setLevel(logging.DEBUG) eps_logger = logging.getLogger('Epsilon') eps_logger.setLevel(logging.DEBUG) pop_size = theoretical_population_size(2, len(limits)) print("Theoretical minimum population size is {} particles".format(pop_size)) abc = ABCSMC(models=model, parameter_priors=prior, 
distance_function=IonChannelDistance( exp_id=list(observations.exp_id), variance=list(observations.variance), delta=0.05), population_size=ConstantPopulationSize(1000), summary_statistics=summary_statistics, transitions=EfficientMultivariateNormalTransition(), eps=MedianEpsilon(initial_epsilon=100), sampler=MulticoreEvalParallelSampler(n_procs=16), acceptor=IonChannelAcceptor()) obs = observations.to_dict()['y'] obs = {str(k): v for k, v in obs.items()} abc_id = abc.new(db_path, obs) history = abc.run(minimum_epsilon=0., max_nr_populations=100, min_acceptance_rate=0.01) """ Explanation: Run ABC calibration End of explanation """ history = History('sqlite:///results/nygren/isus/original/nygren_isus_sgate_original.db') df, w = history.get_distribution() df.describe() sns.set_context('poster') mpl.rcParams['font.size'] = 14 mpl.rcParams['legend.fontsize'] = 14 g = plot_sim_results(modelfile, firek_inact, nygren_inact_kin, nygren_rec, df=df, w=w) plt.tight_layout() N = 100 nyg_par_samples = df.sample(n=N, weights=w, replace=True) nyg_par_samples = nyg_par_samples.set_index([pd.Index(range(N))]) nyg_par_samples = nyg_par_samples.to_dict(orient='records') sns.set_context('talk') mpl.rcParams['font.size'] = 14 mpl.rcParams['legend.fontsize'] = 14 f, ax = plot_variables(V, nyg_par_map, 'models/nygren_isus.mmt', [nyg_par_samples], figshape=(2,2)) m,_,_ = myokit.load(modelfile) originals = {} for name in limits.keys(): if name.startswith("log"): name_ = name[4:] else: name_ = name val = m.value(name_) if name.startswith("log"): val_ = np.log10(val) else: val_ = val originals[name] = val_ sns.set_context('notebook') g = plot_kde_matrix_custom(df, w, limits=limits, refval=originals) """ Explanation: Database results analysis End of explanation """
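""" Explanation: As a small illustrative post-processing step (not part of the original calibration workflow), the weighted posterior loaded above can be summarised directly from df and w: a weighted mean per parameter, plus an approximate 95% credible interval obtained by resampling the particles with their importance weights, mirroring the df.sample call already used for plotting.
End of explanation """
# Illustrative posterior summary computed from the (df, w) particles loaded above.
posterior_mean = pd.Series({col: np.average(df[col], weights=w) for col in df.columns})
resampled = df.sample(n=10000, weights=w, replace=True)
credible_95 = resampled.quantile([0.025, 0.975])
print(posterior_mean)
print(credible_95)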
GoogleCloudPlatform/training-data-analyst
courses/machine_learning/deepdive2/feature_engineering/labs/sdk-feature-store.ipynb
apache-2.0
import os # The Google Cloud Notebook product has specific requirements IS_GOOGLE_CLOUD_NOTEBOOK = os.path.exists("/opt/deeplearning/metadata/env_version") # Google Cloud Notebook requires dependencies to be installed with '--user' USER_FLAG = "" if IS_GOOGLE_CLOUD_NOTEBOOK: USER_FLAG = "--user" # Install necessary dependencies ! pip install {USER_FLAG} --upgrade google-cloud-aiplatform """ Explanation: Create a Vertex AI Feature Store Using the SDK Learning objectives In this notebook, you learn how to: Create feature store, entity type, and feature resources. Import your features into Vertex AI Feature Store. Serve online prediction requests using the imported features. Access imported features in offline jobs, such as training jobs. Overview This notebook introduces Vertex AI Feature Store, a managed cloud service for machine learning engineers and data scientists to store, serve, manage and share machine learning features at a large scale. This notebook assumes that you understand basic Google Cloud concepts such as Project, Storage and Vertex AI. Some machine learning knowledge is also helpful but not required. Dataset This notebook uses a movie recommendation dataset as an example throughout all the sessions. The task is to train a model to predict if a user is going to watch a movie and serve this model online. Each learning objective will correspond to a #TODO in this student lab notebook -- try to complete this notebook first and then review the solution notebook Before you begin Install additional packages For this notebook, you need the Vertex SDK for Python. End of explanation """ # Automatically restart kernel after installs import os if not os.getenv("IS_TESTING"): # Automatically restart kernel after installs import IPython app = IPython.Application.instance() app.kernel.do_shutdown(True) """ Explanation: Restart the kernel After you install the SDK, you need to restart the notebook kernel so it can find the packages. You can restart kernel from Kernel -> Restart Kernel, or running the following: End of explanation """ import os PROJECT_ID = "qwiklabs-gcp-01-5dbc4e7474d8" # Replace this with your Project ID # Get your Google Cloud project ID from gcloud if not os.getenv("IS_TESTING"): shell_output = !gcloud config list --format 'value(core.project)' 2>/dev/null PROJECT_ID = shell_output[0] print("Project ID: ", PROJECT_ID) """ Explanation: Set up your Google Cloud project The following steps are required, regardless of your notebook environment. Enable the Vertex AI API and Compute Engine API. Enter your project ID in the cell below. Then run the cell to make sure the Cloud SDK uses the right project for all the commands in this notebook. Note: Jupyter runs lines prefixed with ! as shell commands, and it interpolates Python variables prefixed with $ into these commands. Set your project ID If you don't know your project ID, you may be able to get your project ID using gcloud. End of explanation """ if PROJECT_ID == "" or PROJECT_ID is None: PROJECT_ID = "qwiklabs-gcp-01-5dbc4e7474d8" # Replace this with your Project ID print("Project ID: ", PROJECT_ID) """ Explanation: Otherwise, set your project ID here. 
End of explanation """ # Import necessary libraries and define required constants from google.cloud import aiplatform from google.cloud.aiplatform import Feature, Featurestore REGION = "us-central1" # Replace this with your region if REGION == "[your-region]": REGION = "us-central1" FEATURESTORE_ID = "movie_prediction" INPUT_CSV_FILE = "gs://cloud-samples-data-us-central1/vertex-ai/feature-store/datasets/movie_prediction.csv" ONLINE_STORE_FIXED_NODE_COUNT = 1 aiplatform.init(project=PROJECT_ID, location=REGION) """ Explanation: Import libraries and define constants End of explanation """ # Create the Featurestore fs = # TODO 1a: Your code goes here( featurestore_id=FEATURESTORE_ID, online_store_fixed_node_count=ONLINE_STORE_FIXED_NODE_COUNT, project=PROJECT_ID, location=REGION, sync=True, ) """ Explanation: Terminology and Concept Featurestore Data model Vertex AI Feature Store organizes data with the following 3 important hierarchical concepts: Featurestore -&gt; Entity type -&gt; Feature * Featurestore: the place to store your features * Entity type: under a Featurestore, an Entity type describes an object to be modeled, real one or virtual one. * Feature: under an Entity type, a Feature describes an attribute of the Entity type In the movie prediction example, you will create a featurestore called movie_prediction. This store has 2 entity types: users and movies. The users entity type has the age, gender, and liked_genres features. The movies entity type has the titles, genres, and average rating features. Create Featurestore and Define Schemas Create Featurestore The method to create a Featurestore returns a long-running operation (LRO). An LRO starts an asynchronous job. LROs are returned for other API methods too, such as updating or deleting a featurestore. Running the code cell will create a featurestore and print the process log. End of explanation """ fs = Featurestore( featurestore_name=FEATURESTORE_ID, project=PROJECT_ID, location=REGION, ) print(fs.gca_resource) """ Explanation: Use the function call below to retrieve a Featurestore and check that it has been created. End of explanation """ # Create users entity type users_entity_type = fs.create_entity_type( entity_type_id="users", description="Users entity", ) # Create movies entity type movies_entity_type = fs.create_entity_type( entity_type_id="movies", description="Movies entity", ) """ Explanation: Create Entity Type Entity types can be created within the Featurestore class. Below, create the Users entity type and Movies entity type. A process log will be printed out. End of explanation """ users_entity_type = fs.get_entity_type(entity_type_id="users") movies_entity_type = fs.get_entity_type(entity_type_id="movies") print(users_entity_type) print(movies_entity_type) fs.list_entity_types() """ Explanation: To retrieve an entity type or check that it has been created use the get_entity_type or list_entity_types methods on the Featurestore object. End of explanation """ # to create features one at a time use users_feature_age = users_entity_type.create_feature( feature_id="age", value_type="INT64", description="User age", ) users_feature_gender = users_entity_type.create_feature( feature_id="gender", value_type="STRING", description="User gender", ) users_feature_liked_genres = users_entity_type.create_feature( feature_id="liked_genres", value_type="STRING_ARRAY", description="An array of genres this user liked", ) """ Explanation: Create Feature Features can be created within each entity type. 
Add defining features to the Users entity type and Movies entity type by using the create_feature method. End of explanation """ users_entity_type.list_features() movies_feature_configs = { "title": { "value_type": "STRING", "description": "The title of the movie", }, "genres": { "value_type": "STRING", "description": "The genre of the movie", }, "average_rating": { "value_type": "DOUBLE", "description": "The average rating for the movie, range is [1.0-5.0]", }, } # Create Features movie_features = # TODO 1b: Your code goes here( feature_configs=movies_feature_configs, ) """ Explanation: Use the list_features method to list all the features of a given entity type. End of explanation """ my_features = Feature.search(query="featurestore_id={}".format(FEATURESTORE_ID)) my_features """ Explanation: Search created features While the list_features method allows you to easily view all features of a single entity type, the search method in the Feature class searches across all featurestores and entity types in a given location (such as us-central1), and returns a list of features. This can help you discover features that were created by someone else. You can query based on feature properties including feature ID, entity type ID, and feature description. You can also limit results by filtering on a specific featurestore, feature value type, and/or labels. Some search examples are shown below. Search for all features within a featurestore with the code snippet below. End of explanation """ double_features = Feature.search( query="value_type=DOUBLE AND featurestore_id={}".format(FEATURESTORE_ID) ) double_features[0].gca_resource """ Explanation: Now, narrow down the search to features that are of type DOUBLE. End of explanation """ title_features = Feature.search( query="feature_id:title AND value_type=STRING AND featurestore_id={}".format( FEATURESTORE_ID ) ) title_features[0].gca_resource """ Explanation: Or, limit the search results to features with specific keywords in their ID and type. End of explanation """ # Specify the required details of users entity USERS_FEATURES_IDS = [feature.name for feature in users_entity_type.list_features()] USERS_FEATURE_TIME = "update_time" USERS_ENTITY_ID_FIELD = "user_id" USERS_GCS_SOURCE_URI = ( "gs://cloud-samples-data-us-central1/vertex-ai/feature-store/datasets/users.avro" ) GCS_SOURCE_TYPE = "avro" WORKER_COUNT = 1 print(USERS_FEATURES_IDS) # Import feature values for the users entity type users_entity_type.ingest_from_gcs( feature_ids=USERS_FEATURES_IDS, feature_time=USERS_FEATURE_TIME, entity_id_field=USERS_ENTITY_ID_FIELD, gcs_source_uris=USERS_GCS_SOURCE_URI, gcs_source_type=GCS_SOURCE_TYPE, worker_count=WORKER_COUNT, sync=False, ) """ Explanation: Import Feature Values You need to import feature values before you can use them for online/offline serving. In this step, you learn how to import feature values by ingesting the values from GCS (Google Cloud Storage). You can also import feature values from BigQuery or a Pandas dataframe. Source Data Format and Layout BigQuery table/Avro/CSV are supported as input data types. No matter what format you are using, each imported entity must have an ID; also, each entity can optionally have a timestamp, specifying when the feature values are generated. This notebook uses Avro as an input, located at this public bucket. 
The Avro schemas are as follows: For the Users entity: schema = { "type": "record", "name": "User", "fields": [ { "name":"user_id", "type":["null","string"] }, { "name":"age", "type":["null","long"] }, { "name":"gender", "type":["null","string"] }, { "name":"liked_genres", "type":{"type":"array","items":"string"} }, { "name":"update_time", "type":["null",{"type":"long","logicalType":"timestamp-micros"}] }, ] } For the Movies entity: schema = { "type": "record", "name": "Movie", "fields": [ { "name":"movie_id", "type":["null","string"] }, { "name":"average_rating", "type":["null","double"] }, { "name":"title", "type":["null","string"] }, { "name":"genres", "type":["null","string"] }, { "name":"update_time", "type":["null",{"type":"long","logicalType":"timestamp-micros"}] }, ] } Import feature values for Users entity type When importing, specify the following in your request: IDs of the features to import Data source URI Data source format: BigQuery Table/Avro/CSV End of explanation """ # Specify the required details of movies entity MOVIES_FEATURES_IDS = [feature.name for feature in movies_entity_type.list_features()] MOVIES_FEATURE_TIME = "update_time" MOVIES_ENTITY_ID_FIELD = "movie_id" MOVIES_GCS_SOURCE_URI = ( "gs://cloud-samples-data-us-central1/vertex-ai/feature-store/datasets/movies.avro" ) GCS_SOURCE_TYPE = "avro" WORKER_COUNT = 1 print(MOVIES_FEATURES_IDS) # Import feature values for the Movies entity type # TODO 2: Your code goes here( feature_ids=MOVIES_FEATURES_IDS, feature_time=MOVIES_FEATURE_TIME, entity_id_field=MOVIES_ENTITY_ID_FIELD, gcs_source_uris=MOVIES_GCS_SOURCE_URI, gcs_source_type=GCS_SOURCE_TYPE, worker_count=WORKER_COUNT, sync=False, ) """ Explanation: Import feature values for Movies entity type Similarly, import feature values for the Movies entity type into the featurestore. End of explanation """ # Read feature value of user entity by using entity ID users_entity_type.read(entity_ids="bob") # Read feature value of movies entity by specifying the entity type ID and features ID # TODO 3: Your code goes here """ Explanation: Get online predictions from your model Online serving lets you serve feature values for small batches of entities. It's designed for latency-sensitive service, such as online model prediction. For example, for a movie service, you might want to quickly show movies that the current user would most likely watch. Read one entity per request With the Python SDK, it is easy to read feature values of one entity. By default, the SDK will return the latest value of each feature, meaning the feature values with the most recent timestamp. To read feature values, specify the entity type ID and features to read. By default all the features of an entity type will be selected. The response will output and display the selected entity type ID and the selected feature values as a Pandas dataframe. End of explanation """ users_entity_type.read(entity_ids=["bob", "alice"]) movies_entity_type.read( entity_ids=["movie_02", "movie_03", "movie_04"], feature_ids=["title, genres"] ) """ Explanation: Read multiple entities per request To read feature values from multiple entities, specify the different entity type IDs. By default all the features of an entity type will be selected. Note that fetching only a small number of entities is recommended when using this SDK due to its latency-sensitive nature. 
End of explanation """ # Import necessary libraries from datetime import datetime from google.cloud import bigquery # Output dataset DESTINATION_DATA_SET = "movie_predictions" # @param {type:"string"} TIMESTAMP = datetime.now().strftime("%Y%m%d%H%M%S") DESTINATION_DATA_SET = "{prefix}_{timestamp}".format( prefix=DESTINATION_DATA_SET, timestamp=TIMESTAMP ) # Output table. Make sure that the table does NOT already exist; the BatchReadFeatureValues API cannot overwrite an existing table DESTINATION_TABLE_NAME = "training_data" # @param {type:"string"} DESTINATION_PATTERN = "bq://{project}.{dataset}.{table}" DESTINATION_TABLE_URI = DESTINATION_PATTERN.format( project=PROJECT_ID, dataset=DESTINATION_DATA_SET, table=DESTINATION_TABLE_NAME ) # Create dataset client = bigquery.Client(project=PROJECT_ID) dataset_id = "{}.{}".format(client.project, DESTINATION_DATA_SET) dataset = bigquery.Dataset(dataset_id) dataset.location = REGION dataset = client.create_dataset(dataset) print("Created dataset {}.{}".format(client.project, dataset.dataset_id)) """ Explanation: Now that you have learned how to fetch imported feature values for online serving, the next step is learning how to use imported feature values for offline use cases. Get batch predictions from your model Batch serving is used to fetch a large batch of feature values for high-throughput, and is typically used for training a model or batch prediction. In this section, you learn how to prepare for training examples by using the Featurestore's batch serve function. Use case The task is to prepare a training dataset to train a model, which predicts if a given user will watch a given movie. To achieve this, you need 2 sets of input: Features: you already imported into the featurestore. Labels: the ground-truth data recorded that user X has watched movie Y. To be more specific, the ground-truth observation is described in Table 1 and the desired training dataset is described in Table 2. Each row in Table 2 is a result of joining the imported feature values from Vertex AI Feature Store according to the entity IDs and timestamps in Table 1. In this example, the age, gender and liked_genres features from users and the titles, genres and average_rating features from movies are chosen to train the model. Note that only positive examples are shown in these 2 tables, i.e., you can imagine there is a label column whose values are all True. batch_serve_to_bq takes Table 1 as input, joins all required feature values from the featurestore, and returns Table 2 for training. <h4 align="center">Table 1. Ground-truth data</h4> users | movies | timestamp ----- | -------- | -------------------- alice | Cinema Paradiso | 2019-11-01T00:00:00Z bob | The Shining | 2019-11-15T18:09:43Z ... | ... | ... <h4 align="center">Table 2. Expected training data generated by using batch serve</h4> timestamp | entity_type_users | age | gender | liked_genres | entity_type_movies | title | genre | average_rating -------------------- | ----------------- | --------------- | ---------------- | -------------------- | - | -------- | --------- | ----- 2019-11-01T00:00:00Z | bob | 35 | M | [Action, Crime] | movie_02 | The Shining | Horror | 4.8 2019-11-01T00:00:00Z | alice | 55 | F | [Drama, Comedy] | movie_03 | Cinema Paradiso | Romance | 4.5 | ... | ... | ... | ... | ... | ... | ... | ... | ... Why timestamp? Note that there is a timestamp column in Table 2. This indicates the time when the ground-truth was observed. This is to avoid data inconsistency. 
For example, the 2nd row of Table 2 indicates that user alice watched movie Cinema Paradiso on 2019-11-01T00:00:00Z. The featurestore keeps feature values for all timestamps but fetches feature values only at the given timestamp during batch serving. On that day, Alice might have been 54 years old, but now Alice might be 56; featurestore returns age=54 as Alice's age, instead of age=56, because that is the value of the feature at the observation time. Similarly, other features might be time-variant as well, such as liked_genres. Create BigQuery dataset for output You need a BigQuery dataset to host the output data in us-central1. Input the name of the dataset you want to create and specify the name of the table you want to store the output created later. These will be used in the next section. Make sure that the table name does NOT already exist. End of explanation """ SERVING_FEATURE_IDS = { # to choose all the features use 'entity_type_id: ['*']' "users": ["age", "gender", "liked_genres"], "movies": ["title", "average_rating", "genres"], } # Batch read the feature values # TODO 4: Your code goes here( bq_destination_output_uri=DESTINATION_TABLE_URI, serving_feature_ids=SERVING_FEATURE_IDS, read_instances_uri=INPUT_CSV_FILE, ) """ Explanation: Batch Read Feature Values Assemble the request which specify the following info: Where is the label data, i.e., Table 1. Which features are read, i.e., the column names in Table 2. The output is stored in the BigQuery table. End of explanation """ # Delete Featurestore fs.delete(force=True) # Delete BigQuery dataset client = bigquery.Client(project=PROJECT_ID) client.delete_dataset( DESTINATION_DATA_SET, delete_contents=True, not_found_ok=True ) # Make an API request. print("Deleted dataset '{}'.".format(DESTINATION_DATA_SET)) """ Explanation: After the LRO finishes, you should be able to see the result in the BigQuery console, as a new table under the BigQuery dataset created earlier. Cleaning up To clean up all Google Cloud resources used in this project, you can delete the Google Cloud project you used for the tutorial. You can also keep the project but delete the featurestore and the BigQuery dataset by running the code below: End of explanation """
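""" Explanation: For reference, one possible completion of the "TODO 4" cell above is sketched below; it simply calls the batch_serve_to_bq method described in this section, with the destination table, serving feature IDs and read-instance CSV defined earlier. The other TODO cells follow the same pattern (Featurestore.create(...) for TODO 1a, batch_create_features(...) for TODO 1b, ingest_from_gcs(...) for TODO 2 and movies_entity_type.read(...) for TODO 3). This is an illustrative sketch meant to be pasted into the corresponding TODO cell rather than run in addition to it.
End of explanation """
# Possible completion of TODO 4 (illustrative sketch): batch serve the selected
# features into the BigQuery table created above.
fs.batch_serve_to_bq(
    bq_destination_output_uri=DESTINATION_TABLE_URI,
    serving_feature_ids=SERVING_FEATURE_IDS,
    read_instances_uri=INPUT_CSV_FILE,
)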
csc-training/python-introduction
notebooks/examples/2 - Control Structures.ipynb
mit
value = 4 value = value + 1 if value < 5: print("value is less than 5") elif value > 5: print("value is more than 5") else: print("value is precisely 5") # go ahead and experiment by changing the value """ Explanation: Conditional statements The most common conditional statement in Python is the if-elif-else statement: if variable &gt; 5: do_something() elif variable &gt; 0: do_something_else() else: give_up() Compared to languages like C, Java or Lisp, do you feel something is missing? Python is whitespace-aware and it uses the so-called off-side rule to annotate code blocks. This has several benefits * It's easy to read at a glance * levels of indentation are processed pre-attentively to conserve brain power for everything * It's easy to write without having to worry too much One corollary of the indentation is that you need to be very aware of when you're using the tabulator character and when you're using a space. Most Python programmers only use whitespace and configure their editor to output several spaces when tab is pressed. Note: the line before a deeper level of indentation ends in a colon ":". This syntax is part of beginning a new code block and surprisingly easy to forget. End of explanation """ list_ = [1] list_.pop() if not list_: print("list is None or empty") """ Explanation: There is no switch-case type of statement in Python. Note: When evaluating conditional statements the values 0, an empty string and an empty list all evaluate to False. This can be confusing as it is one of the few places where Python doesn't enforce strong typing. End of explanation """ list_ = [1, 2, 3, 4] while list_: # remember, an empty list evaluates as False for conditional purposes print(list_.pop()) # pop() removes the last entry from the list """ Explanation: While statement Python supports while statement familiar from many languages. It is not nearly as much used because of iterators (covered later). value = 5 while value &gt; 0: value = do_something(value) The following example shows how a list is used as the conditional. End of explanation """ synonyms = ["is dead", "has kicked the bucket", "is no more", "ceased to be"] for phrase in synonyms: print("This parrot " + phrase + ".") """ Explanation: Iterating Python has a for-loop statement that is similar to the foreach statement in a lot of other languages. It is possible to loop over any iterables, i.e. lists, sets, tuples, even dicts. End of explanation """ pairs = ( (1, 2), [3, 4], (5, 6), ) for x, y in pairs: print("A is " + str(x)) print("B is " + str(y)) """ Explanation: It is possible to unpack things in this stage if that is required. End of explanation """ airspeed_swallows = {"African": 20, "European": 30} for swallow in airspeed_swallows: print("The air speed of " + swallow + " swallows is "+ str(airspeed_swallows[swallow])) """ Explanation: In dictionaries the keys are iterated over by default. End of explanation """ for i in range(5): print(str(i)) # The function supports arbitary step lengths and going backwards for i in range(99, 90, -2): # parameters are from, to and step length in that order print(str(i) +" boxes of bottles of beer on the wall") """ Explanation: It is still possible to loop through numbers using the built-in range function that returns an iterable with numbers in sequence. 
End of explanation """ my_list = ["a", "b", "c", "d", "e"] for index, string in enumerate(my_list): print(string +" is the alphabet number "+ str(index)) """ Explanation: The function enumerate returns the values it's given with their number in the collection. End of explanation """ for i in range(20): if i % 7 == 6: # modulo operator break # print(i) for i in range(-5, 5, 1): if i == 0: print ("not dividing by 0") continue print("5/" + str(i) + " equals " + str(5/i)) """ Explanation: Breaking and continuing Sometimes it is necessary to stop the execution of a loop before it's time. For that there is the break keyword. At other times it is desired to end that particular step in the loop and immediately move to the next one. Both of the keywords could be substituted with complex if-else statements but a well-considered break or continue statement is more readable to the next programmer. End of explanation """ list_ = [value*3-1 for value in range(5)] list_ """ Explanation: List comprehension The act of modifying all the values in a list into a new list is so common in programming that there is a special syntax for it in python, the list comprehension. End of explanation """ list_2 = [value*3-1 for value in range(10) if value % 2 == 0] #only take even numbers list_2 """ Explanation: It is not necessary to use list comprehensions but they are mentioned so they can be understood if discovered in other programs. Part of the Zen of Python says There should be one-- and preferably only one --obvious way to do it. Although that way may not be obvious at first unless you're Dutch. List comprehensions are the one and obvious way to do these kinds of operations so they are presented even though they may be considered "advanced" syntax. There is also possibility to add a simple test to the statement. End of explanation """
dietmarw/EK5312_ElectricalMachines
Chapman/Ch4-Problem_4-13.ipynb
unlicense
%pylab notebook %precision 1 """ Explanation: Excercises Electric Machinery Fundamentals Chapter 4 Problem 4-13 End of explanation """ Sbase = 25e6 # [VA] Vbase = 12.2e3 # [V] PF = 0.9 Ra = 0.6 # [Ohm] """ Explanation: Description A 25-MVA, 12.2-kV, 0.9-PF-lagging, three-phase, two-pole, Y-connected, 60-Hz synchronous generator was tested by the open-circuit test, and its air-gap voltage was extrapolated with the following results: Open-citcuit test | Field current [A] | Line voltage [kV] | Extrapolated air-gap voltage [kV] | |-------------------|-------------------|-----------------------------------| | 275 | 12.2 | 13.3 | | 320 | 13.0 | 15.4 | | 365 | 13.8 | 17.5 | | 380 | 14.1 | 18.3 | | 475 | 15.2 | 22.8 | | 570 | 16.0 | 27.4 | Short-circuit test | Field current [A] | Armature current [A] | |-------------------|----------------------| | 275 | 890 | | 320 | 1040 | | 365 | 1190 | | 380 | 1240 | | 475 | 1550 | | 570 | 1885 | The armature resistance is $0.6\,\Omega$ per phase. (a) Find the unsaturated synchronous reactance of this generator in ohms per phase and in per-unit. (b) Find the approximate saturated synchronous reactance $X_S$ at a field current of 380 A. Express the answer both in ohms per phase and in per-unit. (c) Find the approximate saturated synchronous reactance at a field current of 475 A. Express the answer both in ohms per phase and in per-unit. (d) Find the short-circuit ratio for this generator. (e) What is the internal generated voltage of this generator at rated conditions? (f) What field current is required to achieve rated voltage at rated load? End of explanation """ if_a = 380.0 # [A] """ Explanation: SOLUTION (a) The unsaturated synchronous reactance of this generator is the same at any field current, so we will look at it at a field current of 380 A. End of explanation """ Vag_a = 18.3e3 # [V] isc_a = 1240.0 # [A] """ Explanation: The extrapolated air-gap voltage at this point is 18.3 kV, and the short-circuit current is 1240 A End of explanation """ Vphi_a = Vag_a / sqrt(3) print('Vphi_a = {:.0f} V'.format(Vphi_a)) """ Explanation: Since this generator is Y-connected, the phase voltage is: End of explanation """ Ia_a = isc_a print('Ia_a = {:.0f} A'.format(Ia_a)) """ Explanation: and the armature current is: End of explanation """ Zsu_a = Vphi_a / Ia_a print('Zsu_a = {:.2f} Ω'.format(Zsu_a)) """ Explanation: Therefore, the unsaturated synchronous impedance $Z_{s} = \sqrt{R_a^2 + X_s^2}$ is: End of explanation """ Xsu_a = sqrt(Zsu_a**2 - Ra**2) print(''' Xsu_a = {:.2f} Ω ============== '''.format(Xsu_a)) """ Explanation: Which leads to the unsaturated syncronous reactance $X_{s} = \sqrt{Z_s^2 - R_a^2}$: End of explanation """ Vphi_base = Vbase/sqrt(3) Zbase = 3*Vphi_base**2 / Sbase print('Zbase = {:.2f} Ω'.format(Zbase)) """ Explanation: As you can see the impact of the armature resistance is negligible small. This is also the reason why $R_a$ is often simply ignored in calculations of the synchronous reactance. Especially for larger machines. The base impedance of this generator is: $$Z_\text{base} = \frac{3V^2_{\phi,\text{base}}}{S_\text{base}}$$ End of explanation """ xsu_a = Xsu_a / Zbase print(''' xsu_a = {:.2f} ============ '''.format(xsu_a)) """ Explanation: Therefore, the per-unit unsaturated synchronous reactance is: End of explanation """ If_b = 380.0 # [A] Vocc_b = 14.1e3 # [V] isc_b = 1240.0 # [A] """ Explanation: (b) The saturated synchronous reactance at a field current of 380 A can be found from the OCC and the SCC. 
The OCC voltage at $I_F = 380 A$ is 14.1 kV, and the short-circuit current is 1240 A. End of explanation """ Vphi_b = Vocc_b / sqrt(3) print('Vphi_b = {:.0f} V'.format(Vphi_b)) """ Explanation: Since this generator is Y-connected, the corresponding phase voltage is: End of explanation """ Ia_b = isc_b print('Ia_b = {:.0f} A'.format(Ia_b)) """ Explanation: and the armature current is: End of explanation """ Zs_b = Vphi_b / Ia_b Xs_b = sqrt(Zs_b**2 - Ra**2) print(''' Xs_b = {:.2f} Ω ============= '''.format(Xs_b)) """ Explanation: Therefore, the saturated synchronous reactance is: End of explanation """ xs_b = Xs_b / Zbase print(''' xs_b = {:.2f} =========== '''.format(xs_b)) """ Explanation: and the per-unit unsaturated synchronous reactance is: End of explanation """ If_c = 475.0 # [A] Vocc_c = 15.2e3 # [V] isc_c = 1550.0 # [A] """ Explanation: (c) The saturated synchronous reactance at a field current of 475 A can be found from the OCC and the SCC. The OCC voltage at $I_F = 475 A$ is 15.2 kV, and the short-circuit current is 1550 A. End of explanation """ Vphi_c = Vocc_c / sqrt(3) print('Vphi_c = {:.0f} V'.format(Vphi_c)) """ Explanation: Since this generator is Y-connected, the corresponding phase voltage is: End of explanation """ Ia_c = isc_c print('Ia_c = {:.0f} A'.format(Ia_c)) """ Explanation: and the armature current is: End of explanation """ Zs_c = Vphi_c / Ia_c Xs_c = sqrt(Zs_c**2 - Ra**2) print(''' Xs_c = {:.2f} Ω ============= '''.format(Xs_c)) """ Explanation: Therefore, the saturated synchronous reactance is: End of explanation """ xs_c = Xs_c / Zbase print(''' xs_c = {:.3f} ============ '''.format(xs_c)) """ Explanation: and the per-unit unsaturated synchronous reactance is: End of explanation """ If_d = 275.0 # [A] """ Explanation: (d) The rated voltage of this generator is 12.2 kV, which requires a field current of 275 A. End of explanation """ Il = Sbase / (sqrt(3) * Vbase) print('Il = {:.0f} A'.format(Il)) """ Explanation: The rated line and armature current of this generator is: End of explanation """ If_d_2 = 365.0 # [A] SCR = If_d / If_d_2 print(''' SCR = {:.2f} ========== '''.format(SCR)) """ Explanation: The field current required to produce such short-circuit current is about 365 A. Therefore, the short-circuit ratio of this generator is: End of explanation """ Xs_e = Xs_b If_e = If_b Ia_e = Il # rated current as calculated in part d """ Explanation: (e) The internal generated voltage of this generator at rated conditions would be calculated using the saturated synchronous reactance. 
End of explanation """ IA_e_angle = -arccos(PF) IA_e = Ia_e * (cos(IA_e_angle) + sin(IA_e_angle)*1j) print('IA_e = {:.0f} Ω ∠{:.2f}°'.format(*(abs(IA_e), IA_e_angle/ pi*180))) """ Explanation: Since the power factor is 0.9 lagging, the armature current is: End of explanation """ EA = Vphi_base + Ra*IA_e + Xs_e*IA_e*1j EA_angle = arctan(EA.imag / EA.real) print(''' EA = {:.0f} V ∠{:.1f}° =================== '''.format(*(abs(EA), EA_angle/pi*180))) """ Explanation: Therefore, $$\vec{E}A = \vec{V}\phi + R_A\vec{I}_A + jX_S\vec{I}_A$$ End of explanation """ abs(EA) """ Explanation: (f) If the internal generated voltage $E_A$ is End of explanation """ Vline_f = abs(EA)* sqrt(3) print('Vline_f = {:.0f} V'.format(Vline_f)) """ Explanation: Volts per phase, the corresponding line value would be: End of explanation """ If_f=(475-380)/(22.8e3-18.3e3)*(abs(EA)*sqrt(3)-18.3e3)+380 print(''' If_f = {:.0f} A ============ '''.format(If_f)) """ Explanation: This would require a field current of about (determined by usind the two-point form of $y - y_1 = \frac{y_2 - y_1}{x_2 - x_1} (x - x_1)$): End of explanation """
nathawkins/PHY451_FS_2017
Diode Laser Spectroscopy/20171003_morning/Interference with SAS no Dopple/.ipynb_checkpoints/Interferometer with SAS No Doppler Analysis-checkpoint.ipynb
gpl-3.0
get_peak_data(ch2, [0.025, 0.030]); get_peak_data(ch2, [0.030, 0.035]); get_peak_data(ch2, [0.0350,0.045]); get_peak_data(ch2, [0.049, 0.0517]); maximum_time_positions = [0.028124, 0.03266, 0.042744, 0.05052] maximum_voltage_positions = [0.738, 0.53, 0.716, 0.48] # Two subplots, unpack the axes array immediately f, (ax1, ax2) = plt.subplots(1, 2, sharey=True) f.set_figheight(10) f.set_figwidth(15) ax1.set_title("Maximum Values on SAS, No Doppler") ax1.set_ylabel("Voltage (V)") ax1.set_xlabel("Time (s)") ax1.plot(maximum_time_positions, maximum_voltage_positions, 'rp') ax1.plot(time, ch2) ax2.set_title("Maximum Values on Interferometer Spikes") ax2.set_xlabel("Time (s)") ax2.plot(maximum_time_positions, maximum_voltage_positions, 'rp') ax2.plot(time, ch3, 'y-') """ Explanation: Isolating the Maximum Values End of explanation """ get_peak_data(ch3, [0.019, 0.020]); get_peak_data(ch3, [0.019, 0.022]); 0.020856 - 0.019276 """ Explanation: The distance between the interferometer peaks should be constant, so finding the time separating two maximum values should tell me how much time elapses between the two peaks, which we can use as a measure of separation of frequencies. End of explanation """ differences = [0] for i in range(1,len(maximum_time_positions)): differences.append(maximum_time_positions[i]-maximum_time_positions[i-1]) differences from prettytable import PrettyTable x = PrettyTable() x = PrettyTable() x.add_column("Time (s)", maximum_time_positions) x.add_column("Voltage (V)", maximum_voltage_positions) x.add_column("Difference (s)", [round(i, 5) for i in differences]) x.add_column("Number of Interferometer Distances Apart", [round(i, 5)/0.00158 for i in differences]) x.add_column("Separation of Features (MHz)", [round(i, 5)/0.00158 *379 for i in differences]) print(x) file = open('SummaryTable.txt', 'w') file.write(str(x)) file.close() """ Explanation: The time difference between peaks is 0.00158 seconds. End of explanation """
scikit-optimize/scikit-optimize.github.io
dev/notebooks/auto_examples/optimizer-with-different-base-estimator.ipynb
bsd-3-clause
print(__doc__) import numpy as np np.random.seed(1234) import matplotlib.pyplot as plt from skopt.plots import plot_gaussian_process from skopt import Optimizer """ Explanation: Use different base estimators for optimization Sigurd Carlen, September 2019. Reformatted by Holger Nahrstaedt 2020 .. currentmodule:: skopt To use different base_estimator or create a regressor with different parameters, we can create a regressor object and set it as kernel. This example uses :class:plots.plot_gaussian_process which is available since version 0.8. End of explanation """ noise_level = 0.1 # Our 1D toy problem, this is the function we are trying to # minimize def objective(x, noise_level=noise_level): return np.sin(5 * x[0]) * (1 - np.tanh(x[0] ** 2))\ + np.random.randn() * noise_level def objective_wo_noise(x): return objective(x, noise_level=0) opt_gp = Optimizer([(-2.0, 2.0)], base_estimator="GP", n_initial_points=5, acq_optimizer="sampling", random_state=42) def plot_optimizer(res, n_iter, max_iters=5): if n_iter == 0: show_legend = True else: show_legend = False ax = plt.subplot(max_iters, 2, 2 * n_iter + 1) # Plot GP(x) + contours ax = plot_gaussian_process(res, ax=ax, objective=objective_wo_noise, noise_level=noise_level, show_legend=show_legend, show_title=True, show_next_point=False, show_acq_func=False) ax.set_ylabel("") ax.set_xlabel("") if n_iter < max_iters - 1: ax.get_xaxis().set_ticklabels([]) # Plot EI(x) ax = plt.subplot(max_iters, 2, 2 * n_iter + 2) ax = plot_gaussian_process(res, ax=ax, noise_level=noise_level, show_legend=show_legend, show_title=False, show_next_point=True, show_acq_func=True, show_observations=False, show_mu=False) ax.set_ylabel("") ax.set_xlabel("") if n_iter < max_iters - 1: ax.get_xaxis().set_ticklabels([]) """ Explanation: Toy example Let assume the following noisy function $f$: End of explanation """ fig = plt.figure() fig.suptitle("Standard GP kernel") for i in range(10): next_x = opt_gp.ask() f_val = objective(next_x) res = opt_gp.tell(next_x, f_val) if i >= 5: plot_optimizer(res, n_iter=i-5, max_iters=5) plt.tight_layout(rect=[0, 0.03, 1, 0.95]) plt.plot() """ Explanation: GP kernel End of explanation """ from skopt.learning import GaussianProcessRegressor from skopt.learning.gaussian_process.kernels import ConstantKernel, Matern # Gaussian process with Matérn kernel as surrogate model from sklearn.gaussian_process.kernels import (RBF, Matern, RationalQuadratic, ExpSineSquared, DotProduct, ConstantKernel) kernels = [1.0 * RBF(length_scale=1.0, length_scale_bounds=(1e-1, 10.0)), 1.0 * RationalQuadratic(length_scale=1.0, alpha=0.1), 1.0 * ExpSineSquared(length_scale=1.0, periodicity=3.0, length_scale_bounds=(0.1, 10.0), periodicity_bounds=(1.0, 10.0)), ConstantKernel(0.1, (0.01, 10.0)) * (DotProduct(sigma_0=1.0, sigma_0_bounds=(0.1, 10.0)) ** 2), 1.0 * Matern(length_scale=1.0, length_scale_bounds=(1e-1, 10.0), nu=2.5)] for kernel in kernels: gpr = GaussianProcessRegressor(kernel=kernel, alpha=noise_level ** 2, normalize_y=True, noise="gaussian", n_restarts_optimizer=2 ) opt = Optimizer([(-2.0, 2.0)], base_estimator=gpr, n_initial_points=5, acq_optimizer="sampling", random_state=42) fig = plt.figure() fig.suptitle(repr(kernel)) for i in range(10): next_x = opt.ask() f_val = objective(next_x) res = opt.tell(next_x, f_val) if i >= 5: plot_optimizer(res, n_iter=i - 5, max_iters=5) plt.tight_layout(rect=[0, 0.03, 1, 0.95]) plt.show() """ Explanation: Test different kernels End of explanation """
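""" Explanation: The base_estimator argument is not limited to Gaussian processes. As a minimal illustrative sketch (an addition to the original example), skopt's Optimizer also accepts tree-based surrogates by name, such as "RF", "ET" or "GBRT"; plot_gaussian_process only applies to GP surrogates, so this loop simply reports the best point found.
End of explanation """
# Illustrative sketch: the same ask/tell loop with an extra-trees surrogate model.
opt_et = Optimizer([(-2.0, 2.0)], base_estimator="ET", n_initial_points=5,
                   acq_optimizer="sampling", random_state=42)
for i in range(10):
    next_x = opt_et.ask()
    f_val = objective(next_x)
    res = opt_et.tell(next_x, f_val)
print("best x: {:.4f}, best objective: {:.4f}".format(res.x[0], res.fun))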
tpin3694/tpin3694.github.io
python/pandas_make_new_columns_using_functions.ipynb
mit
# Import modules import pandas as pd # Example dataframe raw_data = {'regiment': ['Nighthawks', 'Nighthawks', 'Nighthawks', 'Nighthawks', 'Dragoons', 'Dragoons', 'Dragoons', 'Dragoons', 'Scouts', 'Scouts', 'Scouts', 'Scouts'], 'company': ['1st', '1st', '2nd', '2nd', '1st', '1st', '2nd', '2nd','1st', '1st', '2nd', '2nd'], 'name': ['Miller', 'Jacobson', 'Ali', 'Milner', 'Cooze', 'Jacon', 'Ryaner', 'Sone', 'Sloan', 'Piger', 'Riani', 'Ali'], 'preTestScore': [4, 24, 31, 2, 3, 4, 24, 31, 2, 3, 2, 3], 'postTestScore': [25, 94, 57, 62, 70, 25, 94, 57, 62, 70, 62, 70]} df = pd.DataFrame(raw_data, columns = ['regiment', 'company', 'name', 'preTestScore', 'postTestScore']) df """ Explanation: Title: Make New Columns Using Functions Slug: pandas_make_new_columns_using_functions Summary: Make New Columns Using Functions Date: 2016-05-01 12:00 Category: Python Tags: Data Wrangling Authors: Chris Albon End of explanation """ # Create a function that takes two inputs, pre and post def pre_post_difference(pre, post): # returns the difference between post and pre return post - pre # Create a variable that is the output of the function df['score_change'] = pre_post_difference(df['preTestScore'], df['postTestScore']) # View the dataframe df """ Explanation: Create one column as a function of two columns End of explanation """ # Create a function that takes one input, x def score_multipler_2x_and_3x(x): # returns two things, x multiplied by 2 and x multiplied by 3 return x*2, x*3 # Create two new variables that take the two outputs of the function df['post_score_x2'], df['post_score_x3'] = zip(*df['postTestScore'].map(score_multipler_2x_and_3x)) df """ Explanation: Create two columns as a function of one column End of explanation """
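""" Explanation: An alternative, arguably more idiomatic route for the two-output case (shown as an illustrative addition; the _alt column names are made up for this example) is to apply the function element-wise, expand its tuple output into a small DataFrame, and join it back on the index.
End of explanation """
# Illustrative alternative: expand the two outputs via apply + join.
new_cols = df['postTestScore'].apply(
    lambda x: pd.Series(score_multipler_2x_and_3x(x),
                        index=['post_score_x2_alt', 'post_score_x3_alt']))
df = df.join(new_cols)
df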
TheKingInYellow/PySeidon
PySeidon_tuto_4.ipynb
agpl-3.0
%pylab inline """ Explanation: PySeison - Tutorial 4: TideGauge class End of explanation """ from pyseidon import * """ Explanation: 1. PySeidon - TideGauge object initialisation Similarly to the "ADCP class" and the "Drifter class", the "TideGauge class" is a measurement-based object. 1.1. Package importation As any other library in Python, PySeidon has to be first imported before to be used. Here we will use an alternative import statement compared to the one previoulsy presented: End of explanation """ TideGauge? """ Explanation: Star here means all. Usually this form of statements would import the entire library. In the case of PySeidon, this statement will import the following object classes: FVCOM, Station, Validation, ADCP, Tidegauge and Drifter. Only the TideGauge class will be tackle in this tutorial. However note should note that the architecture design and functioning between each classes are very similar. 1.2. Object definition Python is by definition an object oriented language...and so is matlab. PySeidon is based on this notion of object, so let us define our first "Tidegauge" object. Exercise 1: - Unravel TideGauge documentation with Ipython shortcuts Answer: End of explanation """ tg = TideGauge('./data4tutorial/tidegauge_GP_01aug2013.mat') """ Explanation: According to the documentation, in order to define a TideGauge object, the only required input is a filename. This string input represents path to a file (e.g. testAdcp=TideGauge('./path_to_matlab_file/filename') and whose file must be a matlab file (i.e. .mat). Note that, at the current stage, the package only handle a certain type of file and data format. A template for the TideGauge file/data format is provided in the package under data4tutorial Exercise 2: - define a tide gauge object named tg from the following template: ./data4tutorial/tidegauge_GP_01aug2013.mat - Tip: adapt the file's path to your local machine. Answer: End of explanation """ harmo = tg.Utils.harmonics() recons = tg.Utils.reconstr(harmo) times = map(tg.Utils.mattime2datetime, tg.Variables.matlabTime[:]) ini_minus_recons = ini_minus_recons = tg.Variables.el - recons['h'] tg.Plots.plot_xy(times, ini_minus_recons, title='Residual tidal signal', xLabel='Time', yLabel='Elevation (m)') """ Explanation: 1.3. Object attributes, functions, methods & special methods The TideGauge object possesses 3 attributes and 3 methods. They would appear by typing tg. Tab for instance. An attribute is a quantity intrinsic to its object. A method is an intrinsic function which changes an attribute of its object. Contrarily a function will generate its own output: The Station attributes are: - History: history metadata that keeps track of the object changes - Data: gathers the raw/unchanged data of the specified .mat file - Variables*: gathers the hydrodynamics related data. Note that methods will generate new fields in this attribute The Station methods & functions are: - Utils: gathers utility methods and functions for use with 2D and 3D variables - Plots: gathers plotting methods for use with 2D and 3D variables - dump_profile_data: dumps profile data (x,y) in a *.csv file. 2. PySeidon - Hands-on (2 mins) Utils & Plots Exercise 3: - Perform a harmonic analysis of the elevation and print out the result - Reconstruction these elevation based on the harmonic results of the previous question - Convert matlabtime in datetime - Plot the elevation-minus-reconstructed-elevation time series. 
Answer: End of explanation """ tg.dump_profile_data(times, ini_minus_recons, title='Residual_tidal_signal', xLabel='Time', yLabel='Elevation (m)') """ Explanation: Save functions Exercise 5: - Dump depth-averaged velocity and time step data in a *.csv file Answer: End of explanation """
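""" Explanation: As an illustrative alternative to dump_profile_data (an addition to the tutorial, not one of its exercises), the same residual profile can be written out with pandas; the output filename below is made up for the example, and times is converted to a list in case it is an iterator.
End of explanation """
# Illustrative alternative: save the residual profile with pandas.
import pandas as pd
profile = pd.DataFrame({'time': list(times), 'residual_elevation_m': list(ini_minus_recons)})
profile.to_csv('residual_tidal_signal.csv', index=False)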
awsteiner/o2sclpy
doc/static/examples/buchdahl.ipynb
gpl-3.0
import o2sclpy import matplotlib.pyplot as plot import ctypes import numpy import sys plots=True if 'pytest' in sys.modules: plots=False """ Explanation: Buchdahl equation of state example for O$_2$sclpy See the O$_2$sclpy documentation at https://neutronstars.utk.edu/code/o2sclpy for more information. End of explanation """ link=o2sclpy.linker() link.link_o2scl() """ Explanation: Link the O$_2$scl library: End of explanation """ cu=link.o2scl_settings.get_convert_units() """ Explanation: Get a copy (a pointer to) the O$_2$scl unit conversion object: End of explanation """ b=o2sclpy.eos_tov_buchdahl(link) """ Explanation: Create the Buchdahl EOS object: End of explanation """ ts=o2sclpy.tov_solve(link) ts.set_eos(b); ts.fixed(1.4,1.0e-4) print('Exact radius is %7.6e, computed radius is %7.6e.' % (b.rad_from_gm(1.4),ts.rad)) print('Relative difference %7.6e.' % (abs(b.rad_from_gm(1.4)-ts.rad)/ts.rad)) """ Explanation: Create the TOV solve object, set the EOS and compute the M-R curve: End of explanation """ tov_table=ts.get_results() """ Explanation: Get the table for the TOV results: End of explanation """ beta=ts.mass*b.G_km_Msun/ts.rad """ Explanation: The compactness of a 1.4 solar mass NS: End of explanation """ radial_grid=[] rel_diff=[] for i in range(1,tov_table.get_nlines()): r=tov_table['r'][i] radial_grid.append(r) enc_mass=r*(1.0-1.0/b.exp2lam_from_r_gm(tov_table['r'][i], beta))/2.0/b.G_km_Msun enc_mass2=tov_table['gm'][i] rel_diff.append(abs(enc_mass-enc_mass2)/enc_mass) """ Explanation: Construct two lists, a radius grid and a list containing the relative difference of the exact and calculated enclosed gravitational mass: End of explanation """ if plots: pl=o2sclpy.plotter() """ Explanation: Initialize the plotting object: End of explanation """ pl.canvas() plot.plot(tov_table['r'][0:tov_table.get_nlines()], tov_table['gm'][0:tov_table.get_nlines()]) pl.xtitle('radius (km)') pl.ytitle('gravitational mass (Msun)') plot.show() """ Explanation: Plot the enclosed gravitational mass as a function of radius for a 1.4 solar mass neutron star: End of explanation """ pl.canvas_flag=False pl.canvas() plot.plot(radial_grid,rel_diff) pl.xtitle('radius (km)') pl.ytitle('rel. error in enclosed grav. mass') plot.show() """ Explanation: For the enclosed gravitational mass, plot the relative difference of the exact results and that computed from the tov_solve class: End of explanation """ def test_fun(): assert numpy.allclose(b.rad_from_gm(1.4),ts.rad,rtol=1.0e-9,atol=0) for i in range(0,len(rel_diff)): assert numpy.allclose(rel_diff[i],0.0,atol=5.0e-11) return """ Explanation: For testing using pytest: End of explanation """
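""" Explanation: Outside of a pytest run, the same check can be exercised directly (an illustrative addition): the call below simply raises an AssertionError if either tolerance in test_fun is violated, and otherwise returns quietly.
End of explanation """
# Run the pytest-style check directly when not under pytest.
if 'pytest' not in sys.modules:
    test_fun()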
atulsingh0/MachineLearning
BMLSwPython/01_GettingStarted_withPython.ipynb
gpl-3.0
start = timeit.timeit() X = range(1000) pySum = sum([n*n for n in X]) end = timeit.timeit() print("Total time taken: ", end-start) """ Explanation: Comparing the time End of explanation """ # reading the web data data = sp.genfromtxt("data/web_traffic.tsv", delimiter="\t") print(data[:3]) print(len(data)) """ Explanation: Learning Scipy End of explanation """ X = data[:, 0] y = data[:, 1] # checking for nan values print(sum(np.isnan(X))) print(sum(np.isnan(y))) """ Explanation: Preprocessing and Cleaning the data End of explanation """ X = X[~np.isnan(y)] y = y[~np.isnan(y)] # checking for nan values print(sum(np.isnan(X))) print(sum(np.isnan(y))) fig, ax = plt.subplots(figsize=(8,6)) ax.plot(X, y, '.b') ax.margins(0.2) plt.xticks([w*24*7 for w in range(0, 6)], ["week %d" %w for w in range(0, 6)]) ax.set_xlabel("Week") ax.set_ylabel("Hits / Week") ax.set_title("Web Traffic over weeks") """ Explanation: Filtering the nan data End of explanation """ # creating a error calc fuction def error(f, x, y): return np.sum((f(x) - y)**2) """ Explanation: Choosing the right model and learning algorithm End of explanation """ # sp's polyfit func do the same fp1, residuals, rank, sv, rcond = sp.polyfit(X, y, 1, full=True) print(fp1) print(residuals) # generating the one order function f1 = sp.poly1d(fp1) # checking error print("Error : ",error(f1, X, y)) x1 = np.array([-100, np.max(X)+100]) y1 = f1(x1) ax.plot(x1, y1, c='g', linewidth=2) ax.legend(["data", "d = %i" % f1.order], loc='best') fig """ Explanation: Linear 1-d model End of explanation """ # sp's polyfit func do the same fp2 = sp.polyfit(X, y, 2) print(fp2) # generating the 2 order function f2= sp.poly1d(fp2) # checking error print("Error : ",error(f2, X, y)) x1= np.linspace(-100, np.max(X)+100, 2000) y2= f2(x1) ax.plot(x1, y2, c='r', linewidth=2) ax.legend(["data", "d = %i" % f1.order, "d = %i" % f2.order], loc='best') fig """ Explanation: $$ f(x) = 2.59619213 * x + 989.02487106 $$ Polynomial 2-d End of explanation """ # we are going to divide the data on time so div = 3.5*7*24 X1 = X[X<=div] Y1 = y[X<=div] X2 = X[X>div] Y2 = y[X>div] # now plotting the both data fa = sp.poly1d(sp.polyfit(X1, Y1, 1)) fb = sp.poly1d(sp.polyfit(X2, Y2, 1)) fa_error = error(fa, X1, Y1) fb_error = error(fb, X2, Y2) print("Error inflection = %f" % (fa_error + fb_error)) x1 = np.linspace(-100, X1[-1]+100, 1000) x2 = np.linspace(X1[-10], X2[-1]+100, 1000) ya = fa(x1) yb = fb(x2) ax.plot(x1, ya, c='#800000', linewidth=2) # brown ax.plot(x2, yb, c='#FFA500', linewidth=2) # orange ax.grid(True) fig """ Explanation: $$ f(x) = 0.0105322215 * x^2 - 5.26545650 * x + 1974.6082 $$ What if we want to regress two response output instead of one, As we can see in the graph that there is a steep change in data between week 3 and 4, so let's draw two reponses line, one for the data between week0 and week3.5 and second for week3.5 to week5 End of explanation """ print(f2) print(f2 - 100000) # import from scipy.optimize import fsolve reached_max = fsolve(f2-100000, x0=800)/(7*24) print("100,000 hits/hour expected at week %f" % reached_max[0]) """ Explanation: Suppose we choose that function with degree 2 is best fit for our data and want to predict that if everything will go same then when we will hit the 100000 count ?? $$ 0 = f(x) - 100000 = 0.0105322215 * x^2 - 5.26545650 * x + 1974.6082 - 100000 $$ SciPy's optimize module has the function fsolve that achieves this, when providing an initial starting position with parameter x0. 
As every entry in our input data file corresponds to one hour, and we have 743 of them, we set the starting position to some value after that. Let f2 be the winning polynomial of degree 2, as fitted in the code above. End of explanation """
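For readers who want to reproduce just this root-finding step on its own, here is a minimal, self-contained sketch; it hard-codes the degree-2 coefficients quoted in the formula above instead of recomputing the fit from the web-traffic file, so treat the exact numbers as illustrative.
import numpy as np
from scipy.optimize import fsolve

# Degree-2 coefficients as quoted above; normally they come from sp.polyfit(X, y, 2).
f2 = np.poly1d([0.0105322215, -5.26545650, 1974.6082])

# f2 - 100000 is again a poly1d; its positive root is the hour at which the
# fitted traffic model first reaches 100,000 hits/hour.
reached_max = fsolve(f2 - 100000, x0=800) / (7 * 24)  # hours -> weeks
print("100,000 hits/hour expected at week %f" % reached_max[0])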
freedomofpress/fingerprint-securedrop
notebooks/data_crawling_status.ipynb
agpl-3.0
import os import pandas as pd import sqlalchemy import seaborn as sns import matplotlib.pyplot as plt %matplotlib inline plt.style.use('ggplot') with open(os.environ["PGPASS"], "rb") as f: content = f.readline().decode("utf-8").replace("\n", "").split(":") engine = sqlalchemy.create_engine("postgresql://{user}:{passwd}@{host}/{db}".format(user=content[3], passwd=content[4], host=content[0], db=content[2])) """ Explanation: Data SITREP redshiftzero, January 26, 2017 We've been collecting traces from crawling onion services, this notebook contains a brief SITREP of the status of the data collection. End of explanation """ df_examples = pd.read_sql("SELECT * FROM raw.frontpage_examples", con=engine) """ Explanation: Number of Examples End of explanation """ len(df_examples) """ Explanation: We have currently got a sample of: End of explanation """ daily = df_examples.set_index('t_scrape').groupby(pd.TimeGrouper(freq='D'))['exampleid'].count() ax = daily.plot(kind='bar', figsize=(24,6)) ax.set_xlabel('Date of scrape') ax.set_ylabel('Number of onion services scraped') ax.grid(False) ax.set_frame_on(False) # Prettify ticks, probably a smarter way to do this but sometimes I'm not very smart xtl=[item.get_text()[:10] for item in ax.get_xticklabels()] _=ax.set_xticklabels(xtl) """ Explanation: examples. Examples collected per day This was a bit stop and start as you can see: End of explanation """ result = engine.execute('SELECT MAX(t_sort) FROM raw.hs_history') for row in result: print(row) """ Explanation: Sanity check the sorter was last run recently: End of explanation """ df_crawls = pd.read_sql("SELECT * FROM raw.crawls", con=engine) """ Explanation: crawls End of explanation """ len(df_crawls) """ Explanation: There have been: End of explanation """ hs_query = """SELECT t1.t_scrape::date, count(distinct t1.hsid) FROM raw.frontpage_examples t1 group by 1 ORDER BY 1""" df_hs = pd.read_sql(hs_query, con=engine) df_hs.set_index('t_scrape', inplace=True) ax = df_hs.plot(kind='bar', figsize=(20,6)) ax.set_xlabel('Date of scrape') ax.set_ylabel('Number of UNIQUE onion services scraped') ax.grid(False) ax.set_frame_on(False) """ Explanation: crawls (the crawlers have clearly failed and restarted a crazy number of times). Number of unique onion services scraped End of explanation """ pd.read_sql("SELECT count(distinct hsid) FROM raw.frontpage_examples", con=engine) """ Explanation: The many days of very low numbers of unique onion services were when the crawlers were mostly getting traces to SecureDrops Number of HS total End of explanation """
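The per-day counts above are computed in pandas after pulling whole tables; an alternative sketch pushes the aggregation into PostgreSQL instead. It assumes the same engine connection and the raw.frontpage_examples columns (t_scrape, hsid) used throughout this notebook.
import pandas as pd

daily_query = (
    "SELECT t_scrape::date AS scrape_date, "
    "       COUNT(*) AS n_examples, "
    "       COUNT(DISTINCT hsid) AS n_unique_hs "
    "FROM raw.frontpage_examples "
    "GROUP BY 1 ORDER BY 1"
)
df_daily = pd.read_sql(daily_query, con=engine)
df_daily.set_index('scrape_date', inplace=True)

ax = df_daily.plot(kind='bar', figsize=(20, 6))
ax.set_xlabel('Date of scrape')
ax.set_ylabel('Count per day')
ax.grid(False)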
mdeff/ntds_2017
projects/reports/movie_success/YouTube_analytics.ipynb
mit
%matplotlib inline import configparser import os import requests from tqdm import tqdm import numpy as np import pandas as pd import matplotlib.pyplot as plt from scipy import sparse, stats, spatial import scipy.sparse.linalg from sklearn import preprocessing, decomposition import librosa import IPython.display as ipd import json import requests import csv from pygsp import graphs, filters, plotting plotting.BACKEND = 'matplotlib' %pylab inline pylab.rcParams['figure.figsize'] = (10, 6); # Read the confidential credentials file credentials = configparser.ConfigParser() credentials.read('Credentials_Valentin.ini') # get YouTube API key api_key = credentials.get('YouTube', 'key2') """ Explanation: YouTube Trailer Views Find view counts of movie trailers End of explanation """ df = pd.read_csv('Saved_Datasets/NewFeaturesDataset.csv') """ Explanation: Import Dataset End of explanation """ df.drop(['budget','genres','overview','production_companies','release_date','revenue','success', 'ROI','director_name','actor_names','Metacritic'], axis=1, inplace = True) """ Explanation: Remove useless Columns End of explanation """ movie_name = "Kill Bill: Vol. 1" channel_id = 'UCTCjFFoX1un-j7ni4B6HJ3Q' website = "https://www.googleapis.com/youtube/v3/search?" request = website+"key="+api_key+"&channelId="+channel_id+"&part=snippet&maxResults=10&q=Trailer "+movie_name response = requests.get(request).json() response_list = list((item['snippet']['title'] for item in response['items'])) print(response_list) def get_views(movie_name,channel_id): #get video id (only takes the first 10 results) website = "https://www.googleapis.com/youtube/v3/search?" request = website+"key="+api_key+"&channelId="+channel_id+"&part=snippet,id&maxResults=10&q=Trailer "+movie_name response = requests.get(request).json() #if we have no error... try: #verify if the movie name is in the title of the video video_id = 'none' for test in response['items']: if movie_name in test['snippet']['title']: video = test['snippet']['title'] video_id = test['id']['videoId'] break #if we have an error (or no result) except: print("Err Except: '"+movie_name+"'") return 'Error' if (video_id == 'none'): print("Err No Res: '"+movie_name+"'") return 'Error' #get video view count website = "https://www.googleapis.com/youtube/v3/videos?" 
request = website+"key="+api_key+"&part=statistics&maxResults=1&id="+video_id response = requests.get(request).json() try: view_count = int(response['items'][0]['statistics']['viewCount']) return list([video, view_count]) except: print("Err Except: '"+movie_name+"'") return 'Error' #get_views(df.iloc[46]['title'],'UCi8e0iOVk1fEOogdfu4YgfA') df['TrailerViews1'] = df.apply (lambda row: get_views(row['title'],'UCi8e0iOVk1fEOogdfu4YgfA'),axis=1) df['TrailerViews2'] = df.apply (lambda row: get_views(row['title'],'UCTCjFFoX1un-j7ni4B6HJ3Q'),axis=1) df['TrailerViews3'] = df.apply (lambda row: get_views(row['title'],'UCeR7qki1ikig6q4HwvP6dTg'),axis=1) df['TrailerViews4'] = df.apply (lambda row: get_views(row['title'],'UCn9PskIfVA5RH6bC9hbtzWg'),axis=1) df['TrailerViews5'] = df.apply (lambda row: get_views(row['title'],'UCGLSYrwuo44K0U6PQmYtN3Q'),axis=1) df['TrailerViews6'] = df.apply (lambda row: get_views(row['title'],'UCzNWVDZQ55bjq8uILZ7_wyQ'),axis=1) df['TrailerViews7'] = df.apply (lambda row: get_views(row['title'],'UCRX7UEyE8kp35mPrgC2sosA'),axis=1) df['TrailerViews8'] = df.apply (lambda row: get_views(row['title'],'UCOP-gP2WgKUKfFBMnkR3iaA'),axis=1) test1_fail = df.loc[df['TrailerViews1']=='Error'] test2_fail = df.loc[df['TrailerViews2']=='Error'] test3_fail = df.loc[df['TrailerViews3']=='Error'] test4_fail = df.loc[df['TrailerViews4']=='Error'] test5_fail = df.loc[df['TrailerViews5']=='Error'] test6_fail = df.loc[df['TrailerViews6']=='Error'] test7_fail = df.loc[df['TrailerViews7']=='Error'] test8_fail = df.loc[df['TrailerViews8']=='Error'] test_all_fail = df[(df['TrailerViews1']=='Error') & (df['TrailerViews2']=='Error') & (df['TrailerViews3']=='Error') & (df['TrailerViews4']=='Error') & (df['TrailerViews5']=='Error') & (df['TrailerViews6']=='Error') & (df['TrailerViews7']=='Error') & (df['TrailerViews8']=='Error')] print("Channel 1: Nb. errors: "+str(len(test1_fail))+" ("+str(len(test1_fail)/len(df)*100)[:4]+"%)") print("Channel 2: Nb. errors: "+str(len(test2_fail))+" ("+str(len(test2_fail)/len(df)*100)[:4]+"%)") print("Channel 3: Nb. errors: "+str(len(test3_fail))+" ("+str(len(test3_fail)/len(df)*100)[:4]+"%)") print("Channel 4: Nb. errors: "+str(len(test4_fail))+" ("+str(len(test4_fail)/len(df)*100)[:4]+"%)") print("Channel 5: Nb. errors: "+str(len(test5_fail))+" ("+str(len(test5_fail)/len(df)*100)[:4]+"%)") print("Channel 6: Nb. errors: "+str(len(test6_fail))+" ("+str(len(test6_fail)/len(df)*100)[:4]+"%)") print("Channel 7: Nb. errors: "+str(len(test7_fail))+" ("+str(len(test7_fail)/len(df)*100)[:4]+"%)") print("Channel 8: Nb. errors: "+str(len(test8_fail))+" ("+str(len(test8_fail)/len(df)*100)[:4]+"%)") print("All Channels: Nb. 
Movies missing: "+str(len(test_all_fail))+" ("+str(len(test_all_fail)/len(df)*100)[:4]+"%)") #df.to_csv('Saved_Datasets/test_youtube2.csv', encoding='utf-8', index=False) """ Explanation: YouTube API Test For Youtueb API, see: https://developers.google.com/youtube/v3/docs/ For channel id, see: https://developers.google.com/youtube/v3/docs/channels/list Channel 1 : Movieclips Trailers channel id: UCi8e0iOVk1fEOogdfu4YgfA Channel 2 : Movieclips Trailer Vault channel id: UCTCjFFoX1un-j7ni4B6HJ3Q Channel 3 : TrailersPlaygroundHD channel id: UCeR7qki1ikig6q4HwvP6dTg Channel 4 : TheCultBox channel id: UCn9PskIfVA5RH6bC9hbtzWg Channel 5 : Forever Cinematic Trailers channel id: UCGLSYrwuo44K0U6PQmYtN3Q Channel 6 : FRESH Movie Trailers channel id: UCzNWVDZQ55bjq8uILZ7_wyQ Channel 7 : JoBlo Movie Trailers channel id: UCRX7UEyE8kp35mPrgC2sosA Channel 8 : FilmIsNow Movie Trailers channel id: UCOP-gP2WgKUKfFBMnkR3iaA End of explanation """ df = pd.read_csv('Saved_Datasets/test_youtube2.csv') """ Explanation: Transform views End of explanation """ df.drop(['TrailerViews4','TrailerViews5','TrailerViews6'], axis=1, inplace = True) test_all_fail = df[(df['TrailerViews1']=='Error') & (df['TrailerViews2']=='Error') & (df['TrailerViews3']=='Error') & (df['TrailerViews7']=='Error') & (df['TrailerViews8']=='Error')] print("All Channels: Nb. Movies missing: "+str(len(test_all_fail))+" ("+str(len(test_all_fail)/len(df)*100)[:4]+"%)") """ Explanation: Drop channels 4, 5 and 6 because they have too many errors and view counts are too low End of explanation """ def parse(video): if video != 'Error': video = video.replace('"',"'") video = list(video[2:-1].split("', ")) title = video[0] views = int(video[1]) return list([title,views]) else: return video df['TrailerViews1'] = df.apply(lambda row: parse(row['TrailerViews1']),axis=1) df['TrailerViews2'] = df.apply(lambda row: parse(row['TrailerViews2']),axis=1) df['TrailerViews3'] = df.apply(lambda row: parse(row['TrailerViews3']),axis=1) df['TrailerViews7'] = df.apply(lambda row: parse(row['TrailerViews7']),axis=1) df['TrailerViews8'] = df.apply(lambda row: parse(row['TrailerViews8']),axis=1) """ Explanation: Need to transform str into lists End of explanation """ def mean_views(video_list): sumx = 0 i = 0 for video in video_list: if video != 'Error': sumx = sumx+video[1] i = i+1 return int(sumx/i) def find_max(video_list): views_list = [] for video in video_list: if video != 'Error': views_list.append(video[1]) quartile = int(np.percentile(views_list,75)) return quartile meanviews = {} meanviews['channel1'] = mean_views(df['TrailerViews1']) meanviews['channel2'] = mean_views(df['TrailerViews2']) meanviews['channel3'] = mean_views(df['TrailerViews3']) meanviews['channel7'] = mean_views(df['TrailerViews7']) meanviews['channel8'] = mean_views(df['TrailerViews8']) print(totalviews) quartile = {} quartile['channel1'] = find_max(df['TrailerViews1']) quartile['channel2'] = find_max(df['TrailerViews2']) quartile['channel3'] = find_max(df['TrailerViews3']) quartile['channel7'] = find_max(df['TrailerViews7']) quartile['channel8'] = find_max(df['TrailerViews8']) print(quartile) """ Explanation: Find mean views of each channel (only for videos in the dataset) End of explanation """ def norm_views(video,mean): if video != 'Error': if(video[1] != 0): normed_views = video[1]/mean else: return 0 return np.round(normed_views,3) else: return video """ Explanation: Divide each number of views. This will allow us to compare view numbers event if the videos are not from the same channel. 
End of explanation """ df['TrailerViews1'] = df.apply(lambda row: norm_views(row['TrailerViews1'],9125570296),axis=1) df['TrailerViews2'] = df.apply(lambda row: norm_views(row['TrailerViews2'],560865882),axis=1) df['TrailerViews3'] = df.apply(lambda row: norm_views(row['TrailerViews3'],49297895),axis=1) df['TrailerViews7'] = df.apply(lambda row: norm_views(row['TrailerViews7'],2217209012),axis=1) df['TrailerViews8'] = df.apply(lambda row: norm_views(row['TrailerViews8'],544428358),axis=1) """ Explanation: Try to divide by the total number of views on the channel $\rightarrow$ results too small End of explanation """ df['TrailerViews1'] = df.apply(lambda row: norm_views(row['TrailerViews1'],meanviews['channel1']),axis=1) df['TrailerViews2'] = df.apply(lambda row: norm_views(row['TrailerViews2'],meanviews['channel2']),axis=1) df['TrailerViews3'] = df.apply(lambda row: norm_views(row['TrailerViews3'],meanviews['channel3']),axis=1) df['TrailerViews7'] = df.apply(lambda row: norm_views(row['TrailerViews7'],meanviews['channel7']),axis=1) df['TrailerViews8'] = df.apply(lambda row: norm_views(row['TrailerViews8'],meanviews['channel8']),axis=1) """ Explanation: Try to divide by the mean of views of the dataset for each channel $\rightarrow$ results not comparable End of explanation """ def sat_views(video, saturation): if video != 'Error': if video[1] > saturation: video[1] = saturation return video df['TrailerViews1'] = df.apply(lambda row: sat_views(row['TrailerViews1'],quartile['channel1']),axis=1) df['TrailerViews2'] = df.apply(lambda row: sat_views(row['TrailerViews2'],quartile['channel2']),axis=1) df['TrailerViews3'] = df.apply(lambda row: sat_views(row['TrailerViews3'],quartile['channel3']),axis=1) df['TrailerViews7'] = df.apply(lambda row: sat_views(row['TrailerViews7'],quartile['channel7']),axis=1) df['TrailerViews8'] = df.apply(lambda row: sat_views(row['TrailerViews8'],quartile['channel8']),axis=1) df['TrailerViews1'] = df.apply(lambda row: norm_views(row['TrailerViews1'],quartile['channel1']),axis=1) df['TrailerViews2'] = df.apply(lambda row: norm_views(row['TrailerViews2'],quartile['channel2']),axis=1) df['TrailerViews3'] = df.apply(lambda row: norm_views(row['TrailerViews3'],quartile['channel3']),axis=1) df['TrailerViews7'] = df.apply(lambda row: norm_views(row['TrailerViews7'],quartile['channel7']),axis=1) df['TrailerViews8'] = df.apply(lambda row: norm_views(row['TrailerViews8'],quartile['channel8']),axis=1) """ Explanation: Saturate views to remove 25% highest and divide by the max (which is the 75% quartile) $\rightarrow$ best solution found End of explanation """ df['TrailerViews1'] = df.apply(lambda row: norm_views(row['TrailerViews1'],1),axis=1) df['TrailerViews2'] = df.apply(lambda row: norm_views(row['TrailerViews2'],1),axis=1) df['TrailerViews3'] = df.apply(lambda row: norm_views(row['TrailerViews3'],1),axis=1) df['TrailerViews7'] = df.apply(lambda row: norm_views(row['TrailerViews7'],1),axis=1) df['TrailerViews8'] = df.apply(lambda row: norm_views(row['TrailerViews8'],1),axis=1) df.head(10) #df.to_csv('Saved_Datasets/test_youtube2_normed.csv', encoding='utf-8', index=False) """ Explanation: Without norming End of explanation """ df = pd.read_csv('Saved_Datasets/test_youtube2_normed.csv') def mean_channels(views): i = 0 tot = 0 for view in views: if view != 'Error': tot = tot + float(view) i = i+1 if i != 0: return np.round(tot/i,3) else: return 'Error' df['TrailerViewsMean'] = df.apply(lambda row: 
mean_channels([row['TrailerViews1'],row['TrailerViews2'],row['TrailerViews3'], row['TrailerViews7'],row['TrailerViews8']]),axis=1) #df.to_csv('Saved_Datasets/test_youtube2_normed.csv', encoding='utf-8', index=False) #df.drop(['TrailerViews1','TrailerViews2','TrailerViews3','TrailerViews7','TrailerViews8'], axis=1, inplace = True) #df = df.rename(columns={'TrailerViewsMean': 'YouTube_Mean'}) #df.to_csv('Saved_Datasets/YouTube_views.csv', encoding='utf-8', index=False) """ Explanation: Make Mean of all Channels End of explanation """ x = 'TrailerViewsMean' df = pd.read_csv('Saved_Datasets/test_youtube2_normed.csv') df3 = df.drop(df[df[x] == 'Error'].index) df3[x] = df3[x].astype(float) len(df3) print(min(df3[x])) print(np.mean(df3[x])) print(max(df3[x])) plt.hist(df3[x],bins='auto'); """ Explanation: Keeping only Channel x End of explanation """ views = np.array(df3[x]) w = np.zeros((len(df3),len(df3))) for i in range(0,len(df3)): for j in range(i,len(df3)): if (i == j): w[i,j] = 0 continue else: w[i,j] = w[j,i] = 1 - abs(views[i]-views[j]) fig, axes = plt.subplots(1, 2) axes[0].hist(w.reshape(-1), bins=50); axes[1].spy(w); print('The mean value is: {}'.format(w.mean())) print('The max value is: {}'.format(w.max())) print('The min value is: {}'.format(w.min())) W = pd.DataFrame(w) W.to_csv('Saved_Datasets/NormalizedTrailerW.csv', encoding='utf-8', index=False) G = graphs.Graph(W) G.compute_laplacian('normalized') G.compute_fourier_basis(recompute=True) plt.plot(G.e[0:10]); G.set_coordinates(G.U[:, 1:3]) G.plot() df_nf = pd.read_csv('Saved_Datasets/NewFeaturesDataset.csv') df_features = pd.merge(df_nf,df3,on=['imdb_id','title'],how='right') labels = preprocessing.LabelEncoder().fit_transform(df_features['success']) G.set_coordinates(G.U[:,1:3]) G.plot_signal(labels, vertex_size=20) NEIGHBORS = 200 #sort the order of the weights sort_order = np.argsort(w, axis = 1) #declaration of a sorted weight matrix sorted_weights = np.zeros((len(w), len(w))) for i in range (0, len(w)): for j in range(0, len(w)): if (j >= len(w) - NEIGHBORS): #copy the k strongest edges for each node sorted_weights[i, sort_order[i,j]] = w[i,sort_order[i,j]] else: #set the other edges to zero sorted_weights[i, sort_order[i,j]] = 0 #ensure the matrix is symmetric bigger = sorted_weights.transpose() > sorted_weights sorted_weights = sorted_weights - sorted_weights*bigger + sorted_weights.transpose()*bigger plt.spy(sorted_weights); G = graphs.Graph(sorted_weights) G.compute_laplacian('normalized') G.compute_fourier_basis(recompute=True) plt.plot(G.e[0:10]); G.set_coordinates(G.U[:, 1:3]) G.plot() G.plot_signal(labels, vertex_size=20) """ Explanation: Weight matrix: End of explanation """
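The nested Python loop used above to build w grows quadratically and becomes slow for larger graphs; an equivalent vectorized sketch is shown below. The views_demo array is a random stand-in for the normalized TrailerViewsMean values, since the point here is only to illustrate the broadcasting trick, which produces the same symmetric matrix in a single step.
import numpy as np

views_demo = np.random.rand(100)  # stand-in for the normalized TrailerViewsMean values

# Pairwise |views_i - views_j| via broadcasting, then the same 1 - distance weights.
w_demo = 1.0 - np.abs(views_demo[:, None] - views_demo[None, :])
np.fill_diagonal(w_demo, 0.0)  # no self-loops, matching the loop version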
gcrahay/otx_misp
src/otx_misp/otx/howto_use_python_otx_api.ipynb
apache-2.0
pulses = otx.getall() len(pulses) """ Explanation: Replace YOUR_KEY with your OTX API key. You can find it in your settings page https://otx.alienvault.com/settings The getall() method downloads all the OTX pulses and their assocciated indicators of compromise (IOCs) from your account. This includes all of the following: - OTX pulses to which you subscribed through the web UI - Pulses created by OTX users to whom you subscribe - OTX pulses you created. If this is the first time you are using your account, the download includes all pulses created by AlienVault. All users are subscribed to these by default. End of explanation """ json_normalize(pulses)[0:5] """ Explanation: Let's list a few pulses End of explanation """ json_normalize(pulses[1]["indicators"]) """ Explanation: author_name: The username of the OTX User that created the pulse created: Date when the pulse was created in the system description: Describes the pulse in terms of the type of threat it poses, and any other facts that may link it to other threat indicators. id: Unique identifier of the pulse indicators: Collection of Indicators Of Compromise modified: Date when the pulse was last modified name: Name of the pulse references: List of references to papers, websites or blogs related to the threat described in the pulse revision: Revision number that increments each time pulse contents change tags: List of tags that provide information about pulse content, for example, Phshing, malware, C&C, and apt. Let's explore the indicators object: End of explanation """ indicator_types = [ { "name": "IPv4", "description": "An IPv4 address indicating the online location of a server or other computer." }, { "name": "IPv6", "description": "An IPv6 address indicating the online location of a server or other computer." }, { "name": "domain", "description": "A domain name for a website or server. Domains encompass a series of hostnames." }, { "name": "hostname", "description": "The hostname for a server located within a domain." }, { "name": "email", "description": "An email associated with suspicious activity." }, { "name": "URL", "description": " Uniform Resource Location (URL) summarizing the online location of a file or resource." }, { "name": "URI", "description": "Uniform Resource Indicator (URI) describing the explicit path to a file hosted online." }, { "name": "FileHash-MD5", "description": "A MD5-format hash that summarizes the architecture and content of a file." }, { "name": "FileHash-SHA1", "description": "A SHA-format hash that summarizes the architecture and content of a file." }, { "name": "FileHash-SHA256", "description": "A SHA-256-format hash that summarizes the architecture and content of a file." }, { "name": "FileHash-PEHASH", "description": "A PEPHASH-format hash that summarizes the architecture and content of a file." }, { "name": "FileHash-IMPHASH", "description": "An IMPHASH-format hash that summarizes the architecture and content of a file." }, { "name": "CIDR", "description": "Classless Inter-Domain Routing (CIDR) address, which describes both a server's IP address and the network architecture (routing path) surrounding that server." }, { "name": "FilePath", "description": "A unique location in a file system." }, { "name": "Mutex", "description": "The name of a mutex resource describing the execution architecture of a file." }, { "name": "CVE", "description": "Common Vulnerability and Exposure (CVE) entry describing a software vulnerability that can be exploited to engage in malicious activity." 
}] json_normalize(indicator_types) mtime = (datetime.now() - timedelta(days=1)).isoformat() mtime """ Explanation: _id: Unique identifier of the IOC created: Date the IOC was added to the pulse description: Describes the Indicator of Compromise indicator: The IOC indicator_type: Type of indicator The following Indicator Types are supported: End of explanation """ events = otx.getevents_since(mtime) json_normalize(events) """ Explanation: Besides receiving the pulse information, there is another function that can retrieve the different events that are occurring in the OTX system and affect your account. End of explanation """
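As a small worked example of the indicator fields listed above, all indicators of one type can be pulled out of the downloaded pulses into a flat list. This sketch assumes the pulses list returned by otx.getall() earlier and the field names shown above.
# Flatten every IPv4 indicator across all downloaded pulses.
ipv4_indicators = [ioc['indicator']
                   for pulse in pulses
                   for ioc in pulse.get('indicators', [])
                   if ioc.get('indicator_type') == 'IPv4']
print("%d IPv4 indicators found" % len(ipv4_indicators))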
noppanit/social-network-analysis
Centralities.ipynb
mit
%matplotlib inline import networkx as nx import matplotlib.pyplot as plt import operator import timeit g_fb = nx.read_edgelist('facebook_combined.txt', create_using = nx.Graph(), nodetype = int) print nx.info(g_fb) print nx.is_directed(g_fb) """ Explanation: Centralities In this section, I'm going to learn how Centrality works and try to interpret the data based on small real dataset. I'm using Facebook DataSet from SNAP https://snap.stanford.edu/data/egonets-Facebook.html. The data is included in this repository for easier access. The data is in EdgeList format (source, target). I'm going to use Networkx, iGraph and graph_tool to find all the centralities. End of explanation """ dg_centrality = nx.degree_centrality(g_fb) sorted_dg_centrality = sorted(dg_centrality.items(), key=operator.itemgetter(1), reverse=True) sorted_dg_centrality[:10] """ Explanation: Now let's find the celebrities. The most basic centrality is Degree Centrality which is the sum of all in and out nodes (in the case of directed graph). End of explanation """ nx.degree(g_fb, [107]) """ Explanation: We can see that the node 107 has the highest degree centrality which means node 107 has the highest number of connected nodes. We can prove this by getting the degree of node 107 to see how many friends of node 107 has End of explanation """ float(nx.degree(g_fb, [107]).values()[0]) / g_fb.number_of_nodes() """ Explanation: Node 107 has 1045 friends and we can divide that by number of nodes to get the normalized degree centrality End of explanation """ from multiprocessing import Pool import itertools def partitions(nodes, n): "Partitions the nodes into n subsets" nodes_iter = iter(nodes) while True: partition = tuple(itertools.islice(nodes_iter,n)) if not partition: return yield partition def btwn_pool(G_tuple): return nx.betweenness_centrality_source(*G_tuple) def between_parallel(G, processes = None): p = Pool(processes=processes) part_generator = 4*len(p._pool) node_partitions = list(partitions(G.nodes(), int(len(G)/part_generator))) num_partitions = len(node_partitions) bet_map = p.map(btwn_pool, zip([G]*num_partitions, [True]*num_partitions, [None]*num_partitions, node_partitions)) bt_c = bet_map[0] for bt in bet_map[1:]: for n in bt: bt_c[n] += bt[n] return bt_c """ Explanation: Degree centrality might be the easiest number to calculate but it only shows the number of nodes connected which in real social network it might not be very useful as you might have a million followers but if the majority of them is bots then the number is not telling anything new. Now let's try Betweenness which count all of the shortest path going throw each now. This might mean that if you have the highest shortest path going through you, you might be considered as bridge of your entire network. Nodes with high betweenness are important in communication and information diffusion We will be using multiprocessing so we can parallel the computation and distribute the load. End of explanation """ start = timeit.default_timer() bt = between_parallel(g_fb) stop = timeit.default_timer() top = 10 max_nodes = sorted(bt.iteritems(), key = lambda v: -v[1])[:top] bt_values = [5]*len(g_fb.nodes()) bt_colors = [0]*len(g_fb.nodes()) for max_key, max_val in max_nodes: bt_values[max_key] = 150 bt_colors[max_key] = 2 print 'It takes {} seconds to finish'.format(stop - start) print max_nodes """ Explanation: Let's try with multiprocesser. 
End of explanation """ start = timeit.default_timer() bt = nx.betweenness_centrality(g_fb) stop = timeit.default_timer() top = 10 max_nodes = sorted(bt.iteritems(), key = lambda v: -v[1])[:top] bt_values = [5]*len(g_fb.nodes()) bt_colors = [0]*len(g_fb.nodes()) for max_key, max_val in max_nodes: bt_values[max_key] = 150 bt_colors[max_key] = 2 print 'It takes {} seconds to finish'.format(stop - start) print max_nodes """ Explanation: Now let's try with just one processor End of explanation """ g_fb_pr = nx.pagerank(g_fb) top = 10 max_pagerank = sorted(g_fb_pr.iteritems(), key = lambda v: -v[1])[:top] max_pagerank """ Explanation: Page rank We're going to try PageRank algorithm. This is very similar to Google's PageRank which they use incoming links to determine the "popularity" End of explanation """ g_fb_eg = nx.eigenvector_centrality(g_fb) top = 10 max_eg = sorted(g_fb_eg.iteritems(), key = lambda v: -v[1])[:top] max_eg """ Explanation: We can see that now the score is different as node 3437 is more popular than node 107. Who is a "Gray Cardinal" There's another metric that we can measure most influential node. It's called eigenvector centrality. To put it simply it means that if you're well connected to a lot of important people that means you're important or most influential as well. End of explanation """ from igraph import * import timeit igraph_fb = Graph.Read_Edgelist('facebook_combined.txt', directed=False) print igraph_fb.summary() """ Explanation: Now we get quite a different result. This would mean that node 1912 is connected to more important people in the entire network that means that node is more influential than the rest of the network. iGraph with SNAP Facebook Dataset Networkx is easy to install and great to start with. However, as it's written in Python it's quite slow. I'm going to try iGraph which is C based. I'm hoping that this would yield the same result but faster. 
End of explanation """ def betweenness_centralization(G): vnum = G.vcount() if vnum < 3: raise ValueError("graph must have at least three vertices") denom = (vnum-1)*(vnum-2) temparr = [2*i/denom for i in G.betweenness()] return temparr start = timeit.default_timer() igraph_betweenness = betweenness_centralization(igraph_fb) stop = timeit.default_timer() print 'It takes {} seconds to finish'.format(stop - start) igraph_betweenness.sort(reverse=True) print igraph_betweenness[:10] """ Explanation: Betweenness End of explanation """ start = timeit.default_timer() igraph_closeness = igraph_fb.closeness() stop = timeit.default_timer() print 'It takes {} seconds to finish'.format(stop - start) igraph_closeness.sort(reverse=True) print igraph_closeness[:10] """ Explanation: Closeness End of explanation """ start = timeit.default_timer() igraph_eg = igraph_fb.evcent() stop = timeit.default_timer() print 'It takes {} seconds to finish'.format(stop - start) igraph_eg.sort(reverse=True) print igraph_eg[:10] """ Explanation: Eigen Value End of explanation """ start = timeit.default_timer() igraph_pr = igraph_fb.pagerank() stop = timeit.default_timer() print 'It takes {} seconds to finish'.format(stop - start) igraph_pr.sort(reverse=True) print igraph_pr[:10] """ Explanation: PageRank End of explanation """ import sys from graph_tool.all import * import timeit show_config() graph_tool_fb = Graph(directed=False) with open('facebook_combined.txt', 'r') as f: for line in f: edge_list = line.split() source, target = tuple(edge_list) graph_tool_fb.add_edge(source, target) print graph_tool_fb.num_vertices() print graph_tool_fb.num_edges() """ Explanation: We can see that iGraph yields similar result from networkx but it's a lot quicker in the same machine. Graph_tool with SNAP Facebook Dataset I'm going to try another library which is supposed to be the fastest than networkx and igraph. Graph_tool is also C based which it has OpenMP enabled so a lot of algorithms is multiprocessing. 
End of explanation """ start = timeit.default_timer() vertext_betweenness, edge_betweenness = betweenness(graph_tool_fb) stop = timeit.default_timer() print 'It takes {} seconds to finish'.format(stop - start) vertext_betweenness.a[107] """ Explanation: Betweeness End of explanation """ start = timeit.default_timer() v_closeness = closeness(graph_tool_fb) stop = timeit.default_timer() print 'It takes {} seconds to finish'.format(stop - start) v_closeness.a[107] """ Explanation: Closeness End of explanation """ start = timeit.default_timer() v_closeness = eigenvector(graph_tool_fb) stop = timeit.default_timer() print 'It takes {} seconds to finish'.format(stop - start) """ Explanation: Eigenvalue End of explanation """ start = timeit.default_timer() v_closeness = pagerank(graph_tool_fb) stop = timeit.default_timer() print 'It takes {} seconds to finish'.format(stop - start) """ Explanation: Page Rank End of explanation """ %matplotlib inline import random as r import networkx as nx import matplotlib.pyplot as plot class Person(object): def __init__(self, id): #Start with a single initial preference self.id = id self.i = r.random() self.a = self.i # we value initial opinion and subsequent information equally self.alpha = 0.8 def __str__(self): return (str(self.id)) def step(self): # loop through the neighbors and aggregate their preferences neighbors = g[self] # all nodes in the list of neighbors are equally weighted, including self w = 1/float((len(neighbors) + 1 )) s = w * self.a for node in neighbors: s += w * node.a # update my beliefs = initial belief plus sum of all influences self.a = (1 - self.alpha) * self.i + self.alpha * s density = 0.9 g = nx.Graph() ## create a network of Person objects for i in range(10): p = Person(i) g.add_node(p) ## this will be a simple random graph, every pair of nodes has an ## equal probability of connection for x in g.nodes(): for y in g.nodes(): if r.random() <= density: g.add_edge(x,y) ## draw the resulting graph and color the nodes by their value col = [n.a for n in g.nodes()] pos = nx.spring_layout(g) nx.draw_networkx(g, pos=pos, node_color=col) ## repeat for 30 times periods for i in range(30): ## iterate through all nodes in the network and tell them to make a step for node in g.nodes(): node.step() ## collect new attitude data, print it to the terminal and plot it. col = [n.a for n in g.nodes()] print col plot.plot(col) class Influencer(Person): def __ini__(self, id): self.id = id self.i = r.random() self.a = 1 ## opinion is strong and immovable def step(self): pass influencers = 2 connections = 4 ## add the influencers to the network and connect each to 3 other nodes for i in range(influencers): inf = Influencer("Inf" + str(i)) for x in range(connections): g.add_edge(r.choice(g.nodes()), inf) ## repeat for 30 time periods for i in range(30): ## iterate through all nodes in the network and tell them to make a step for node in g.nodes(): node.step() ## collect new attitude data, print it to the terminal and plot it. col = [n.a for n in g.nodes()] #print col plot.plot(col) """ Explanation: Information diffusion modelling I'm going to information diffusion model to simulate how information travels in the graph. 
End of explanation """ import copy import networkx as nx import random def independent_cascade(G, seeds, steps = 0): """ "Return the active nodes of each diffusion step by the independent cascade model Parameters -- -- -- -- -- - G: graph A NetworkX graph seeds: list of nodes The seed nodes for diffusion steps: integer The number of steps to diffuse.If steps <= 0, the diffusion runs until no more nodes can be activated.If steps > 0, the diffusion runs for at most "steps" rounds Returns -- -- -- - layer_i_nodes: list of list of activated nodes layer_i_nodes[0]: the seeds layer_i_nodes[k]: the nodes activated at the kth diffusion step Notes -- -- - When node v in G becomes active, it has a * single * chance of activating each currently inactive neighbor w with probability p_ { vw } Examples -- -- -- -- >>> DG = nx.DiGraph() >>> DG.add_edges_from([(1, 2), (1, 3), (1, 5), (2, 1), (3, 2), (4, 2), (4, 3), \ >>> (4, 6), (5, 3), (5, 4), (5, 6), (6, 4), (6, 5)], act_prob = 0.2) >>> H = nx.independent_cascade(DG, [6]) References -- -- -- -- --[1] David Kempe, Jon Kleinberg, and Eva Tardos. Influential nodes in a diffusion model for social networks. In Automata, Languages and Programming, 2005. """ if type(G) == nx.MultiGraph or type(G) == nx.MultiDiGraph: raise Exception(\ "independent_cascade() is not defined for graphs with multiedges.") # make sure the seeds are in the graph for s in seeds: if s not in G.nodes(): raise Exception("seed", s, "is not in graph") # change to directed graph if not G.is_directed(): DG = G.to_directed() else: DG = copy.deepcopy(G) # init activation probabilities for e in DG.edges(): if 'act_prob' not in DG[e[0]][e[1]]: DG[e[0]][e[1]]['act_prob'] = 0.1 elif DG[e[0]][e[1]]['act_prob'] > 1: raise Exception("edge activation probability:", DG[e[0]][e[1]]['act_prob'], "cannot be larger than 1") # perform diffusion A = copy.deepcopy(seeds)# prevent side effect if steps <= 0: #perform diffusion until no more nodes can be activated return _diffuse_all(DG, A)# perform diffusion for at most "steps" rounds return _diffuse_k_rounds(DG, A, steps) def _diffuse_all(G, A): tried_edges = set() layer_i_nodes = [ ] layer_i_nodes.append([i for i in A]) # prevent side effect while True: len_old = len(A) (A, activated_nodes_of_this_round, cur_tried_edges) = _diffuse_one_round(G, A, tried_edges) layer_i_nodes.append(activated_nodes_of_this_round) tried_edges = tried_edges.union(cur_tried_edges) if len(A) == len_old: break return layer_i_nodes def _diffuse_k_rounds(G, A, steps): tried_edges = set() layer_i_nodes = [ ] layer_i_nodes.append([i for i in A]) while steps > 0 and len(A) < len(G): len_old = len(A) (A, activated_nodes_of_this_round, cur_tried_edges) = _diffuse_one_round(G, A, tried_edges) layer_i_nodes.append(activated_nodes_of_this_round) tried_edges = tried_edges.union(cur_tried_edges) if len(A) == len_old: break steps -= 1 return layer_i_nodes def _diffuse_one_round(G, A, tried_edges): activated_nodes_of_this_round = set() cur_tried_edges = set() for s in A: for nb in G.successors(s): if nb in A or (s, nb) in tried_edges or (s, nb) in cur_tried_edges: continue if _prop_success(G, s, nb): activated_nodes_of_this_round.add(nb) cur_tried_edges.add((s, nb)) activated_nodes_of_this_round = list(activated_nodes_of_this_round) A.extend(activated_nodes_of_this_round) return A, activated_nodes_of_this_round, cur_tried_edges def _prop_success(G, src, dest): return random.random() <= G[src][dest]['act_prob'] run_times = 10 G = nx.DiGraph() G.add_edge(1,2,act_prob=.5) G.add_edge(2,1,act_prob=.5) 
G.add_edge(1,3,act_prob=.2) G.add_edge(3,1,act_prob=.2) G.add_edge(2,3,act_prob=.3) G.add_edge(2,4,act_prob=.5) G.add_edge(3,4,act_prob=.1) G.add_edge(3,5,act_prob=.2) G.add_edge(4,5,act_prob=.2) G.add_edge(5,6,act_prob=.6) G.add_edge(6,5,act_prob=.6) G.add_edge(6,4,act_prob=.3) G.add_edge(6,2,act_prob=.4) nx.draw_networkx(G) independent_cascade(G, [1], steps=0) n_A = 0.0 for i in range(run_times): A = independent_cascade(G, [1], steps=1) print A for layer in A: n_A += len(layer) n_A / run_times #assert_almost_equal(n_A / run_times, 1.7, places=1) """ Explanation: Networkx Independent Cascade Model End of explanation """
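Because every cascade run is random, a single call only gives one realization; the expected spread of a seed set is normally estimated by averaging over many runs. A short sketch using the independent_cascade function and the small directed test graph G built above follows; the number of runs is arbitrary.
def expected_spread(G, seeds, n_runs=1000):
    # Monte Carlo estimate of the mean number of activated nodes (seeds included).
    total = 0.0
    for _ in range(n_runs):
        layers = independent_cascade(G, seeds)
        total += sum(len(layer) for layer in layers)
    return total / n_runs

print("Expected spread from seed [1]: %.2f" % expected_spread(G, [1]))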
kraemerd17/kraemerd17.github.io
courses/python/material/ipynbs/Time Series.ipynb
mit
from __future__ import division from pandas import Series, DataFrame import pandas as pd from numpy.random import randn import numpy as np pd.options.display.max_rows = 12 np.set_printoptions(precision=4, suppress=True) import matplotlib.pyplot as plt plt.rc('figure', figsize=(12, 4)) %matplotlib inline """ Explanation: Time series From Python for Data Analysis: Time series data is an important form of structured data in many different dielfds, such as finance, economics, ecology, neuroscience, and physics. Anything that is observed or measured at many points in time forms a time series. Many time series are fixed frequency, which is to say that data points occur at regular intervals according to some rule, such as every 15 seconds, every 5 minutes, or once per month. Time series can also be irregular without a fixed unit or time or offset between units. How you mark and refer to time series data depends on the application and you may have one of the following: timestamps, specific instants in time fixed periods, such as the month January 2007 or the full year 2010 intervals of time, indicated by a start and end timestamp. Periods can be thought of as special cases of intervals Experiment or elapsed time; each timestamp is a measure of time relative to a particular start time. For example, the diameter of a cookie baking each second since being placed in the oven Pandas provides a standard set of time series tools and data algorithms. With this you can efficiently work with very large time series and easily slice and dice, aggregate, and resample irregular and fixed frequency time series. As you might guess, many of these tools are especially useful for financial and economics applications, but you could certainly use them to analyze server log data, too. End of explanation """ from datetime import datetime now = datetime.now() now """ Explanation: Date and Time Data Types and Tools In general, dealing with date arithmetic is hard. Luckily, Python has a robust library that implements datetime objects, which handle all of the annoying bits of date manipulation in a powerful way. End of explanation """ now.year, now.month, now.day """ Explanation: Every datetime object has a year, month, and day field. End of explanation """ delta = datetime(2011, 1, 7) - datetime(2008, 6, 24, 8, 15) delta """ Explanation: You can do arithmetic on datetime objects, which produce timedelta objects. End of explanation """ delta.days delta.seconds """ Explanation: timedelta objects are very similar to datetime objects, with similar fields: End of explanation """ from datetime import timedelta start = datetime(2011, 1, 7) start + timedelta(12) start - 2 * timedelta(12) """ Explanation: As you expect, arithmetic between datetime and timedelta objects produce datetime objects. End of explanation """ stamp = datetime(2011, 1, 3) str(stamp) """ Explanation: Converting between string and datetime In general, it is easier to format a string from a datetime object than to parse a string date into a datetime object. End of explanation """ stamp.strftime('%Y-%m-%d') """ Explanation: To format a string from a datetime object, use the strftime method. You can use the standard string-formatting delimiters that are used in computing. End of explanation """ value = '2011-01-03' datetime.strptime(value, '%Y-%m-%d') """ Explanation: To parse a string into a datetime object, you can use the strptime method, along with the relevant format. 
End of explanation """ datestrs = ['7/6/2011', '8/6/2011'] [datetime.strptime(x, '%m/%d/%Y') for x in datestrs] """ Explanation: Of course, this being Python, we can easily abstract this process to list form using comprehensions. End of explanation """ from dateutil.parser import parse parse('2011-01-03') """ Explanation: Without question, datetime.strptime is the best way to parse a date, especially when you know the format a priori. However, it can be a bit annoying to have to write a format spec each time, especially for common date formats. In this case, you can use the parser.parse method in the third party dateutil package: End of explanation """ parse('Jan 31, 1997 10:45 PM') """ Explanation: dateutil is capable of parsing almost any human-intelligible date representation: End of explanation """ parse('6/12/2011', dayfirst=True) """ Explanation: In international locales, day appearing before month is very common, so you can pass dayfirst=True to indicate this: End of explanation """ datestrs pd.to_datetime(datestrs) """ Explanation: Pandas is generally oriented toward working with arrays of dates, whether used as an index or a column in a DataFrame. The to_datetime method parses many different kinds of date representations. Standard date formats like ISO8601 can be parsed very quickly. End of explanation """ idx = pd.to_datetime(datestrs + [None]) idx idx[2] pd.isnull(idx) """ Explanation: Notice that the Pandas object at work behind the scenes here is the DatetimeIndex, which is a subclass of Index. More on this later. to_datetime also handles values that should be considered missing (None, empty string, etc.): End of explanation """ from datetime import datetime dates = [datetime(2011, 1, 2), datetime(2011, 1, 5), datetime(2011, 1, 7), datetime(2011, 1, 8), datetime(2011, 1, 10), datetime(2011, 1, 12)] ts = Series(np.random.randn(6), index=dates) ts """ Explanation: datetime objects also have a number of locale-specific formatting options for systems in other countries or languages. For example, the abbreviated month names will be different on German or French systems compared with English systems. Time Series Basics The most basic kind of time series object in Pandas is a Series indexed by timestamps, which is often represented external to Pandas as Python strings or datetime objects. End of explanation """ type(ts) # note: output changed to "pandas.core.series.Series" ts.index """ Explanation: Under the hood, these datetime objects have been put in a DatetimeIndex, and the variable ts is now of type TimeSeries. End of explanation """ ts + ts[::2] """ Explanation: Like other Series, arithmetic operations between differently-indexed time series automatically align on the dates: End of explanation """ ts.index.dtype # note: output changed from dtype('datetime64[ns]') to dtype('<M8[ns]') """ Explanation: Pandas stores timestamps using NumPy's datetime64 date type at the nanosecond resolution: End of explanation """ stamp = ts.index[0] stamp # note: output changed from <Timestamp: 2011-01-02 00:00:00> to Timestamp('2011-01-02 00:00:00') """ Explanation: Scalar values from a DatetimeIndex are Pandas Timestamp objects End of explanation """ stamp = ts.index[2] ts[stamp] """ Explanation: A Timestamp can be substituted anywhere you would use a datetime object. Additionally, it can store frequency information (if any) and understands how to do time zone conversions and other kinds of manipulations. More on both of these things later. 
Indexing, selection, subsetting TimeSeries is a subclass of Series and thus behaves in the same way with regard to indexing and selecting data based on label: End of explanation """ ts['1/10/2011'] ts['20110110'] """ Explanation: As a convenience, you can also pass a string that is interpretable as a date: End of explanation """ longer_ts = Series(np.random.randn(1000), index=pd.date_range('1/1/2000', periods=1000)) longer_ts longer_ts['2001'] longer_ts['2001-05'] """ Explanation: For longer time series, a year or only a year and month can be passed to easily select slices of data: End of explanation """ ts[datetime(2011, 1, 7):] """ Explanation: Slicing with dates works just like with a regular Series End of explanation """ ts ts['1/6/2011':'1/11/2011'] """ Explanation: Because most time series data is ordered chronologically, you can slice with timestamps not contained in a time series to perform a range query: End of explanation """ ts.truncate(after='1/9/2011') """ Explanation: As before you can pass either a string date, datetime, or Timestamp. Remember that slicing in this manner produces views on the source time series just like slicing NumPy arrays. There is an equivalent instance method truncate which slices a TimeSeries between two dates: End of explanation """ dates = pd.date_range('1/1/2000', periods=100, freq='W-WED') long_df = DataFrame(np.random.randn(100, 4), index=dates, columns=['Colorado', 'Texas', 'New York', 'Ohio']) long_df.ix['5-2001'] """ Explanation: All of the above holds true for DataFrame as well, indexing on its rows: End of explanation """ dates = pd.DatetimeIndex(['1/1/2000', '1/2/2000', '1/2/2000', '1/2/2000', '1/3/2000']) dup_ts = Series(np.arange(5), index=dates) dup_ts """ Explanation: Time series with duplicate indices In some applications, there may be multiple data observations falling on a particular timestamp. Here is an example: End of explanation """ dup_ts.index.is_unique """ Explanation: We can tell that the index is not unique by checking its is_unique property: End of explanation """ dup_ts['1/3/2000'] # not duplicated dup_ts['1/2/2000'] # duplicated """ Explanation: Indexing into this time series will now either produce scalar values or slices depending on whether a timestamp is duplicated: End of explanation """ grouped = dup_ts.groupby(level=0) grouped.mean() grouped.count() """ Explanation: Suppose you want to aggregate the data having non-unique timestamps. One way to do this is to use groupby and pass level=0 (the only level of indexing!): End of explanation """ ts ts.resample('D') """ Explanation: Date ranges, Frequencies, and Shifting Generic time series in Pandas are assumed to be irregular; that is, they have no fixed frequency. For many applications this is sufficient. However, it's often desirable to work relative to a fixed frequency, such as daily, monthly, or even 15 minutes, even if that means introducing missing values into a time series. Fortunately Pandas has a full suite of standard time series frequencies and tools for resampling, inferring frequencies, and generating fixed frequency date ranges. For example, in the example time series, converting it to be fixed daily frequency can be accomplished by calling resample: End of explanation """ index = pd.date_range('4/1/2012', '6/1/2012') index """ Explanation: Conversion between frequencies or resampling is a big enough topic to have its own section later. Here, we'll see how to use the base frequencies and multiples thereof. 
Generating date ranges You may have guessed that pandas.date_range is responsible for generating a DatetimeIndex with an indicated length according to a particular frequency: End of explanation """ pd.date_range(start='4/1/2012', periods=20) pd.date_range(end='6/1/2012', periods=20) """ Explanation: By default, date_range generates daily timestamps. If you pass only a start or end date, you must pass a number of periods to generate: End of explanation """ pd.date_range('1/1/2000', '12/1/2000', freq='BM') """ Explanation: The start and end dates define strict boundaries for the generated date index. For example, if you wanted a date index containing the last business day of each month, you would pass the 'BM' frequency (business end of month) and only dates falling on or inside the date interval will be included: End of explanation """ pd.date_range('5/2/2012 12:56:31', periods=5) """ Explanation: date_range by default preserves the time (if any) or the start or end timestamp: End of explanation """ pd.date_range('5/2/2012 12:56:31', periods=5, normalize=True) """ Explanation: Sometimes you will have start or end dates with time information but want to generate a set of timestamps normalized to midnight as a convention. To do this, there is a normalize option: End of explanation """ from pandas.tseries.offsets import Hour, Minute hour = Hour() hour """ Explanation: Frequencies and Date Offsets Frequencies in Pandas are composed of a base frequency and a multiplier. Base frequencies are typically referred to by a string alias, like 'M' for monthly or 'H' for hourly. For each base frequency, there is an object defined generally referred to as a date offset. For each example, hourly frequency can be represented with the Hour class: End of explanation """ four_hours = Hour(4) four_hours """ Explanation: You can define a multiple of an offset by passing an integer: End of explanation """ pd.date_range('1/1/2000', '1/3/2000 23:59', freq='4h') """ Explanation: In most applications, you would never need to explicitly create one of these objects, instead using a string alias like 'H' or '4H'. Putting an integer before the base frequency creates a multiple: End of explanation """ Hour(2) + Minute(30) """ Explanation: Many offsets can be combined together by addition: End of explanation """ pd.date_range('1/1/2000', periods=10, freq='1h30min') """ Explanation: Similarly, you can pass frequency strings like '2h30min' which will effectively be parsed to the same expression. End of explanation """ rng = pd.date_range('1/1/2012', '9/1/2012', freq='WOM-3FRI') list(rng) """ Explanation: Some frequencies describe points in time that are not evenly spaced. For example, 'M' (calendar month end) and 'BM' (last business/weekday of month) depend on the number of days in a month and, in the latter case, whether the month ends on a weekend or not. For lack of a better term, we will call these anchored offsets. Week of month dates One useful frequency class is "week of month", starting with WOM. This enables you to get dates like the third Friday of each month: End of explanation """ ts = Series(np.random.randn(4), index=pd.date_range('1/1/2000', periods=4, freq='M')) ts ts.shift(2) ts.shift(-2) """ Explanation: Traders of US equity options will recognize thse dates as the standard dates of monthly expiry. Shifting (leading and lagging) data "Shifting" refers to moving data backward and forward through time. 
Both Series and DataFrame have a shift method for doing naive shifts forward or backward, leaving the index unmodified: End of explanation """ ts.shift(2, freq='M') """ Explanation: A common use of shift is computing percent changes in a time series or multiple time series as DataFrame columns. This is expressed as Because naive shifts leave the index unmodified, some data is discarded. Thus if the frequency is known, it can be passed to shift to advance the timestamps instead of simply the data End of explanation """ ts.shift(3, freq='D') ts.shift(1, freq='3D') ts.shift(1, freq='90T') """ Explanation: Other frequencies can be passed, too, giving you a lot of flexibility in how to lead and lag the data End of explanation """ from pandas.tseries.offsets import Day, MonthEnd now = datetime(2011, 11, 17) now + 3 * Day() """ Explanation: Shifting dates with offsets The Pandas date offsets can also be used with datetime or Timestamp objects: End of explanation """ now + MonthEnd() now + MonthEnd(2) """ Explanation: If you add an anchored offset like MonthEnd, the first increment will roll forward a date to the next date according to the frequency rule: End of explanation """ offset = MonthEnd() offset.rollforward(now) offset.rollback(now) """ Explanation: Anchored offsets can explicitly "roll" dates forward or backward using their rollforward and rollback methods, respectively: End of explanation """ ts = Series(np.random.randn(20), index=pd.date_range('1/15/2000', periods=20, freq='4d')) ts.groupby(offset.rollforward).mean() """ Explanation: A clever use of date offsets is to use these methods with groupby: End of explanation """ ts.resample('M', how='mean') """ Explanation: Of course, an easier and faster way to do this is using resample (more on this to come). End of explanation """ import pytz pytz.common_timezones[-5:] """ Explanation: Time Zone Handling Working with time zones is a pain. As Americans hold on dearly to daylight savings time, we must pay the price with difficult conversions between time zones. Many time series users choose to work with time series in coordinated universal time (UTC) of which time zones can be expressed as offsets. In Python we can use the pytz library, based off the Olson database of world time zone data. End of explanation """ tz = pytz.timezone('US/Eastern') tz """ Explanation: To get a time zone object from pytz, use pytz.timezone. End of explanation """ rng = pd.date_range('3/9/2012 9:30', periods=6, freq='D') ts = Series(np.random.randn(len(rng)), index=rng) """ Explanation: Methods in Pandas will accept either time zone names or these objects. Using the names is recommended. Localization and Conversion By default, time series in Pandas are time zone naive. Consider the following time series: End of explanation """ print(ts.index.tz) """ Explanation: The index's tz field is None: End of explanation """ pd.date_range('3/9/2012 9:30', periods=10, freq='D', tz='UTC') """ Explanation: Date ranges can be generated with a time zone set: End of explanation """ ts_utc = ts.tz_localize('UTC') ts_utc ts_utc.index """ Explanation: Conversion from naive to localized is handled by the tz_localize method End of explanation """ ts_utc.tz_convert('US/Eastern') """ Explanation: Once a time series has been localized to a particular time zone, it can be converted to another time zone using tz_convert. 
End of explanation """ ts_eastern = ts.tz_localize('US/Eastern') ts_eastern.tz_convert('UTC') ts_eastern.tz_convert('Europe/Berlin') """ Explanation: In this case of the above time series, which straddles a DST transition in the US/Eastern time zone, we could localize to EST and convert to, say, UTC or Berlin time. End of explanation """ ts.index.tz_localize('Asia/Shanghai') """ Explanation: tz_localize and tz_convert are also instance methods on DatetimeIndex. End of explanation """ stamp = pd.Timestamp('2011-03-12 04:00') stamp_utc = stamp.tz_localize('utc') stamp_utc.tz_convert('US/Eastern') """ Explanation: Operations with time zone-aware Timestamp objects Similar to time series and date ranges, individual Timestamp objects similarly can be localized from naive to time zone-aware and converted from one time zone to another: End of explanation """ stamp_moscow = pd.Timestamp('2011-03-12 04:00', tz='Europe/Moscow') stamp_moscow """ Explanation: You can also pass a time zone when creating the Timestamp. End of explanation """ stamp_utc.value stamp_utc.tz_convert('US/Eastern').value """ Explanation: Time zone-aware Timestamp objects internally store a UTC timestamp value as nanoseconds since the UNIX epoch (January 1, 1970); this UTC value is invariant between time zone conversions: End of explanation """ # 30 minutes before DST transition from pandas.tseries.offsets import Hour stamp = pd.Timestamp('2012-03-12 01:30', tz='US/Eastern') stamp stamp + Hour() # 90 minutes before DST transition stamp = pd.Timestamp('2012-11-04 00:30', tz='US/Eastern') stamp stamp + 2 * Hour() """ Explanation: When performing time arithmetic using Pandas' DateOffset objects, daylight savings time transitions are respected where possible End of explanation """ rng = pd.date_range('3/7/2012 9:30', periods=10, freq='B') ts = Series(np.random.randn(len(rng)), index=rng) ts ts1 = ts[:7].tz_localize('Europe/London') ts2 = ts1[2:].tz_convert('Europe/Moscow') result = ts1 + ts2 result.index """ Explanation: Operations between different time zones If two time series with different time zones are combined, the result will be UTC. Since the timestamps are stored under the hood in UTC, this is a straightforward operation and requires no conversion to happen. End of explanation """
eds-uga/csci1360-fa16
assignments/A2/A2_Q2.ipynb
mit
def return_ordinals(numbers): out_list = [] ### BEGIN SOLUTION ### END SOLUTION return out_list inlist = [5, 6, 1, 9, 5, 5, 3, 3, 9, 4] outlist = ["5th", "6th", "1st", "9th", "5th", "5th", "3rd", "3rd", "9th", "4th"] for y_true, y_pred in zip(outlist, return_ordinals(inlist)): assert y_true == y_pred.lower() inlist = [7, 5, 6, 6, 3, 5, 1, 0, 5, 2] outlist = ["7th", "5th", "6th", "6th", "3rd", "5th", "1st", "0th", "5th", "2nd"] for y_true, y_pred in zip(outlist, return_ordinals(inlist)): assert y_true == y_pred.lower() """ Explanation: Q2 In this question, we'll look at using conditionals to change the behavior of code. A In this question, you'll write a method that takes a list of numbers [0-9] and returns a corresponding list with the "ordinal" versions. That is, if you see a 1 in the list, you'll create a string "1st". If you see a 3, you'll create a string "3rd", and so on. For example, if you receive [2, 1, 4, 3, 4] as input, you should create a list of strings that looks like this: ["2nd", "1st", "4th", "3rd", "4th"]. End of explanation """ def median(numbers): med_num = 0 ### BEGIN SOLUTION ### END SOLUTION return med_num inlist = [ 35.20575598, 45.05634995, 45.42573818, 55.07275661, 66.42501038, 66.48337884, 73.59004688, 81.09609177, 87.67779046, 93.90508029] outmed = 66 assert outmed == int(median(inlist)) inlist = [ 12899.59248764, 19792.31177415, 31156.00415682, 31764.93625914, 41443.07238461, 50669.10268086, 55408.34012113, 61352.47232585, 72682.91992934, 86883.37175784] outmed = 46056 assert outmed == int(median(inlist)) """ Explanation: B In this question, you'll write code that computes the median of a sorted list of numbers. You can assume the list you receive is already sorted in ascending order (least to greatest). If your input list is [1, 1, 2, 4, 7, 7, 8], then your output should be 4. Recall the rule about median: if you get a list with an even number of elements, you should return the average of the two middle ones. Store your answer in the variable med_num. End of explanation """ def mode(numbers): mode_num = 0 ### BEGIN SOLUTION ### END SOLUTION return mode_num l1 = [5, 1, 3, 1, 2, 5, 1] a1 = 1 assert mode(l1) == a1 l2 = [1, 2, 3, 1, 2, 3, 1, 2, 3] a2 = mode(l2) assert a2 == 1 or a2 == 2 or a2 == 3 """ Explanation: C In this question, you'll write code to find the mode of a list of numbers. Recall that the mode is the number that occurs most frequently. Ties can be broken arbitrarily (meaning you can pick whichever number among those tied for the most frequent). If your input list is [5, 1, 3, 1, 2, 5, 1], you should return 1. Store your answer in the variable mode_num. End of explanation """
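To make the even-length rule in part B concrete, here is one possible sketch of the tie rule; it only illustrates the definition and is not necessarily the intended solution.
def median_sketch(numbers):
    # `numbers` is assumed to be sorted ascending, as the prompt guarantees.
    n = len(numbers)
    mid = n // 2
    if n % 2 == 1:
        return numbers[mid]                            # odd length: single middle value
    return (numbers[mid - 1] + numbers[mid]) / 2.0     # even length: average the two middle values

print(median_sketch([1, 1, 2, 4, 7, 7, 8]))  # 4
print(median_sketch([1, 2, 3, 4]))           # 2.5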
bradleypallen/fb15k-akbc
FB15K-237 Evaluation.ipynb
mit
import pandas as pd import numpy as np from operator import itemgetter from CFModel import CFModel """ Explanation: Import packages End of explanation """ TEST_CSV_FILE = 'fb15k_test.csv' CVSC_ENTITIES_CSV_FILE = 'fb15k_cvsc_entities.csv' CVSC_PAIRS_CSV_FILE = 'fb15k_cvsc_pairs.csv' MODEL_WEIGHTS_FILE = 'test_weights.h5' K_FACTORS = 20 """ Explanation: Define constants End of explanation """ triples = pd.read_csv(TEST_CSV_FILE, sep='\t', usecols=['subj', 'rel', 'obj', 'pid', 'rid']) entities = pd.read_csv(CVSC_ENTITIES_CSV_FILE, sep='\t', usecols=['entity'])['entity'].values[1:] entity_pairs = pd.read_csv(CVSC_PAIRS_CSV_FILE, sep='\t', usecols=['subj', 'obj', 'pid']) """ Explanation: Load FB215-237 data End of explanation """ n_pairs = triples['pid'].max() + 1 m_relations = triples['rid'].max() + 1 l_entities = len(entities) print n_pairs, 'pairs,', m_relations, 'relations,', l_entities, 'entities' """ Explanation: Print basic dataset statistics End of explanation """ model = CFModel(n_pairs, m_relations, K_FACTORS) model.load_weights(MODEL_WEIGHTS_FILE) """ Explanation: Load model weights into evaluation model End of explanation """ def sp_query_reciprocal_rank(model, subj, rid, obj, entities): objs = [ result[0] for result in sp_query_results(model, subj, rid, entities) ] return reciprocal_rank(obj, objs) def sp_query_hits_at_10(model, subj, rid, obj, entities): objs = [ result[0] for result in sp_query_results(model, subj, rid, entities) ] if obj in objs[:10]: return 1.0 else: return 0.0 def sp_query_results(model, subj, rid, entities): return sorted([ [ subj, model.rank(pid, rid) ] for pid in sp_query_pairs(subj, entities) ], reverse=True, key=itemgetter(1)) def sp_query_pairs(subj, entities): return [ pair_id(subj, obj) for obj in entities if pair_id(subj, obj) > -1 ] def po_query_reciprocal_rank(model, subj, rid, obj, entities): subjs = [ result[0] for result in po_query_results(model, obj, rid, entities) ] return reciprocal_rank(subj, subjs) def po_query_hits_at_10(model, subj, rid, obj, entities): subjs = [ result[0] for result in po_query_results(model, obj, rid, entities) ] if subj in subjs[:10]: return 1.0 else: return 0.0 def po_query_results(model, obj, rid, entities): return sorted([ [ obj, model.rank(pid, rid) ] for pid in sp_query_pairs(subj, entities) ], reverse=True, key=itemgetter(1)) def po_query_pairs(obj, entities): return [ pair_id(subj, obj) for subj in entities if pair_id(subj, obj) > -1] def pair_id(subj, obj): pair = entity_pairs[(entity_pairs['subj'] == subj) & (entity_pairs['obj'] == obj)] if len(pair) > 0: return pair['pid'].values[0] else: return -1 def reciprocal_rank(correct_response, responses): return 1. 
/ np.float(np.where(responses == correct_response)[0][0]) pairs = entity_pairs.to_dict(orient='records') subj_idx = {} obj_idx = {} for pair in pairs: subj = pair['subj'] obj = pair['obj'] pid = pair['pid'] if subj not in subj_idx.keys(): subj_idx[subj] = {} if obj not in obj_idx.keys(): obj_idx[obj] = {} subj_idx[subj][obj] = pid obj_idx[obj][subj] = pid tuples = triples.to_dict(orient='records') np.where(np.array(['fee', 'fi', 'foo', 'fum']) == 'wubba') for tuple in tuples: scores = [] subj = tuple['subj'] rid = tuple['rid'] obj = tuple['obj'] for entity in entities: if subj in subj_idx.keys() and entity in subj_idx[subj].keys(): pid = subj_idx[subj][entity] score = model.rate(pid, rid) scores.append([entity, score]) scores = sorted(scores, reverse=True, key=itemgetter(1)) results = [ x[0] for x in scores ] print obj, np.where(np.array(results) == obj)[0] triples.head(3) entity_pairs.head(5) x = [] for entity in entities: if entity in subj_idx['/m/01sl1q'].keys(): print entity, subj_idx['/m/01sl1q'][entity] len(x) model.rank len(triples) po_query_pairs('/m/027rn', entities) triples['sp_reciprocal_rank'] = sp_query_reciprocal_rank(model, triples['subj'], triples['rid'], triples['obj'], entities) triples['po_reciprocal_rank'] = po_query_reciprocal_rank(model, triples['subj'], triples['rid'], triples['obj'], entities) triples['sp_hits_at_10'] = sp_query_hits_at_10(model, triples['subj'], triples['rid'], triples['obj'], entities) triples['po_hits_at_10'] = po_query_hits_at_10(model, triples['subj'], triples['rid'], triples['obj'], entities) mrr = (triples['sp_reciprocal_rank'].sum() + triples['po_reciprocal_rank'].sum()) / (np.float(len(triples)) * 2.0) hits_at_10 = (triples['sp_hits_at_10'].sum() + triples['po_hits_at_10'].sum()) / (np.float(len(triples)) * 2.0) print 'Mean reciprocal rank:', mrr print 'HITS@10:', hits_at_10 """ Explanation: Execute evaluation protocol From [2]: Given a set of triples in a set disjoint from a training knowledge graph, we test models on predicting the subject or object of each triple, given the relation type and the other argument. We rank all entities in the training knowledge base in order of their likelihood of filling the argument position. We report the mean reciprocal rank of the correct entity, as well as HITS@10 – the percent of test triples for which the correct argument was ranked in the top ten. We use filtered measures following the protocol proposed in Bordes et al. (2013) – that is, when we rank entities for a given position, we remove all other entities that are known to be part of an existing triple in the training, development, or test set. This avoids penalizing the model for ranking other correct fillers higher than the tested argument. We thus report filtered mean reciprocal rank (labeled MRR in the Figures), and filtered HITS@10. In the figures we present MRR values scaled by 100, so that the maximum possible MRR is 100. Note: filtering not yet implemented, code neither complete nor debugged for non-filtering case anyways End of explanation """
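The note above points out that the filtered protocol is not implemented. Filtering only changes the ranking step: before computing the rank of the test answer, every other entity known to complete the same query is removed from the candidate list. A sketch of that step is shown below; the helper names are hypothetical and not part of CFModel.

def filtered_rank(scored_entities, correct_entity, known_true_entities):
    """Rank of correct_entity after removing all *other* known-true answers.

    scored_entities: list of (entity, score) pairs for one query
    correct_entity: the entity being tested
    known_true_entities: entities known to answer this query in train/dev/test
    """
    # keep the test entity itself, drop every other known-correct filler
    kept = [(e, s) for e, s in scored_entities
            if e == correct_entity or e not in known_true_entities]
    kept.sort(key=lambda pair: pair[1], reverse=True)      # best score first
    ranks = [i for i, (e, _) in enumerate(kept, start=1) if e == correct_entity]
    return ranks[0] if ranks else None

def reciprocal_rank_from(rank):
    return 0.0 if rank is None else 1.0 / rank

def hits_at_10_from(rank):
    return 1.0 if rank is not None and rank <= 10 else 0.0

These helpers could then replace the unfiltered np.where ranking in the loop over tuples above, once a lookup of known-true fillers per (subject, relation) query has been built from the train, development, and test triples.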
mne-tools/mne-tools.github.io
0.14/_downloads/plot_xdawn_denoising.ipynb
bsd-3-clause
# Authors: Alexandre Barachant <alexandre.barachant@gmail.com> # # License: BSD (3-clause) from mne import (io, compute_raw_covariance, read_events, pick_types, Epochs) from mne.datasets import sample from mne.preprocessing import Xdawn from mne.viz import plot_epochs_image print(__doc__) data_path = sample.data_path() """ Explanation: XDAWN Denoising XDAWN filters are trained from epochs, signal is projected in the sources space and then projected back in the sensor space using only the first two XDAWN components. The process is similar to an ICA, but is supervised in order to maximize the signal to signal + noise ratio of the evoked response. <div class="alert alert-danger"><h4>Warning</h4><p>As this denoising method exploits the known events to maximize SNR of the contrast between conditions it can lead to overfitting. To avoid a statistical analysis problem you should split epochs used in fit with the ones used in apply method.</p></div> References [1] Rivet, B., Souloumiac, A., Attina, V., & Gibert, G. (2009). xDAWN algorithm to enhance evoked potentials: application to brain-computer interface. Biomedical Engineering, IEEE Transactions on, 56(8), 2035-2043. [2] Rivet, B., Cecotti, H., Souloumiac, A., Maby, E., & Mattout, J. (2011, August). Theoretical analysis of xDAWN algorithm: application to an efficient sensor selection in a P300 BCI. In Signal Processing Conference, 2011 19th European (pp. 1382-1386). IEEE. End of explanation """ raw_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw.fif' event_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw-eve.fif' tmin, tmax = -0.1, 0.3 event_id = dict(vis_r=4) # Setup for reading the raw data raw = io.read_raw_fif(raw_fname, preload=True) raw.filter(1, 20) # replace baselining with high-pass events = read_events(event_fname) raw.info['bads'] = ['MEG 2443'] # set bad channels picks = pick_types(raw.info, meg=True, eeg=False, stim=False, eog=False, exclude='bads') # Epoching epochs = Epochs(raw, events, event_id, tmin, tmax, proj=False, picks=picks, baseline=None, preload=True, verbose=False) # Plot image epoch before xdawn plot_epochs_image(epochs['vis_r'], picks=[230], vmin=-500, vmax=500) # Estimates signal covariance signal_cov = compute_raw_covariance(raw, picks=picks) # Xdawn instance xd = Xdawn(n_components=2, signal_cov=signal_cov) # Fit xdawn xd.fit(epochs) # Denoise epochs epochs_denoised = xd.apply(epochs) # Plot image epoch after Xdawn plot_epochs_image(epochs_denoised['vis_r'], picks=[230], vmin=-500, vmax=500) """ Explanation: Set parameters and read data End of explanation """
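The warning block recommends fitting and applying Xdawn on different epochs; the example above uses the same epochs for both, for brevity. A minimal variation, reusing the objects already defined and simply holding out every other epoch, might look like this:

# split the epochs into two disjoint halves (even / odd indices)
epochs_fit = epochs[::2]       # used only to train the Xdawn filters
epochs_apply = epochs[1::2]    # held out, used only for denoising

xd = Xdawn(n_components=2, signal_cov=signal_cov)
xd.fit(epochs_fit)
epochs_denoised = xd.apply(epochs_apply)

plot_epochs_image(epochs_denoised['vis_r'], picks=[230], vmin=-500, vmax=500)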
mjasher/gac
original_libraries/flopy-master/examples/Notebooks/lake_example.ipynb
gpl-2.0
%matplotlib inline
import os
import numpy as np
import matplotlib.pyplot as plt
import flopy.modflow as mf
import flopy.utils as fu

workspace = os.path.join('data')

#make sure workspace directory exists
if not os.path.exists(workspace):
    os.makedirs(workspace)
"""
Explanation: Lake Example
First set the path and import the required packages. The flopy path doesn't have to be set if you install flopy from a binary installer. If you want to run this notebook, you have to set the path to your own flopy path.
End of explanation
"""
name = 'lake_example'
h1 = 100
h2 = 90
Nlay = 10
N = 101
L = 400.0
H = 50.0
k = 1.0
"""
Explanation: We are creating a square model with a specified head equal to h1 along all boundaries. The head at the cell in the center of the top layer is fixed to h2. First, set the name of the model and the parameters of the model: the number of layers Nlay, the number of rows and columns N, the length of the sides of the model L, the aquifer thickness H, and the hydraulic conductivity k.
End of explanation
"""
ml = mf.Modflow(modelname=name, exe_name='mf2005', version='mf2005', model_ws=workspace)
"""
Explanation: Create a MODFLOW model and store it (in this case in the variable ml, but you can call it whatever you want). The modelname will be the name given to all MODFLOW files (input and output). The exe_name should be the full path to your MODFLOW executable. The version is either 'mf2k' for MODFLOW2000 or 'mf2005' for MODFLOW2005.
End of explanation
"""
bot = np.linspace(-H/Nlay, -H, Nlay)
delrow = delcol = L/(N-1)
dis = mf.ModflowDis(ml, nlay=Nlay, nrow=N, ncol=N, delr=delrow, delc=delcol, top=0.0, botm=bot, laycbd=0)
"""
Explanation: Define the discretization of the model. All layers are given equal thickness. The bot array is built from H and Nlay to mark the top and bottom of each layer, and delrow and delcol are computed from the model size L and the number of cells N. Once these are all computed, the Discretization file is built.
End of explanation
"""
Nhalf = (N-1)//2
ibound = np.ones((Nlay, N, N))
ibound[:, 0, :] = -1
ibound[:, -1, :] = -1
ibound[:, :, 0] = -1
ibound[:, :, -1] = -1
ibound[0, Nhalf, Nhalf] = -1

start = h1 * np.ones((N, N))
start[Nhalf, Nhalf] = h2

bas = mf.ModflowBas(ml, ibound=ibound, strt=start)
"""
Explanation: Next we specify the boundary conditions and starting heads with the Basic package. The ibound array will be 1 in all cells in all layers, except along the boundary and in the cell at the center of the top layer, where it is set to -1 to indicate fixed heads. The starting heads are used to define the heads in the fixed-head cells (this is a steady simulation, so none of the other starting values matter). So we set the starting heads to h1 everywhere, except for the head at the center of the model in the top layer.
End of explanation
"""
lpf = mf.ModflowLpf(ml, hk=k)
"""
Explanation: The aquifer properties (really only the hydraulic conductivity) are defined with the LPF package.
End of explanation
"""
pcg = mf.ModflowPcg(ml)
oc = mf.ModflowOc(ml)
ml.write_input()
ml.run_model()
"""
Explanation: Finally, we need to specify the solver we want to use (PCG with default values), and the output control (using the default values). Then we are ready to write all MODFLOW input files and run MODFLOW.
End of explanation """ hds = fu.HeadFile(os.path.join(workspace, name+'.hds')) h = hds.get_data(kstpkper=(0, 0)) x = y = np.linspace(0, L, N) c = plt.contour(x, y, h[0], np.arange(90,100.1,0.2)) plt.clabel(c, fmt='%2.1f') plt.axis('scaled'); x = y = np.linspace(0, L, N) c = plt.contour(x, y, h[-1], np.arange(90,100.1,0.2)) plt.clabel(c, fmt='%1.1f') plt.axis('scaled'); z = np.linspace(-H/Nlay/2, -H+H/Nlay/2, Nlay) c = plt.contour(x, z, h[:,50,:], np.arange(90,100.1,.2)) plt.axis('scaled'); """ Explanation: Once the model has terminated normally, we can read the heads file. First, a link to the heads file is created with HeadFile. The link can then be accessed with the get_data function, by specifying, in this case, the step number and period number for which we want to retrieve data. A three-dimensional array is returned of size nlay, nrow, ncol. Matplotlib contouring functions are used to make contours of the layers or a cross-section. End of explanation """
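As a quick sanity check on the heads read back above, the fixed-head cells should still hold exactly the values assigned in the Basic package. The snippet below is a small illustrative check that uses only variables already defined in this notebook:

# boundary cells were fixed at h1 and the centre of the top layer at h2
print('boundary head:', h[0, 0, 0])                      # expect 100
print('centre head (top layer):', h[0, Nhalf, Nhalf])    # expect 90
print('min/max head in top layer:', h[0].min(), h[0].max())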
NEONScience/NEON-Data-Skills
tutorials-in-development/Python/neon_api/neon_api_06_stacking_py.ipynb
agpl-3.0
import requests import json import pandas as pd SERVER = 'http://data.neonscience.org/api/v0/' SITECODE = 'TEAK' PRODUCTCODE = 'DP1.10003.001' """ Explanation: syncID: title: "Stacking and Joining NEON Data with Python" description: "" dateCreated: 2020-05-07 authors: Maxwell J. Burner contributors: Donal O'Leary estimatedTime: packagesLibraries: requests, json, pandas topics: api, data management, reshaping data languagesTool: python dataProduct: DP1.10003.001 code1: tutorialSeries: python-neon-api-series urlTitle: python-neon-api-06-stacking In this tutorial we will learn how to stack and join NEON data tables using the Pandas library. <div id="ds-objectives" markdown="1"> ### Objectives After completing this tutorial, you will be able to: * Combine NEON data tables from different sites and months using Pandas *concat* function * Describe the difference between inner joins, left outer joins, right outer joins, and full outer joins * Combine NEON data tables of different types using Pandas *merge* method ### Install Python Packages * **requests** * **json** * **pandas** </div> In this tutorial we will learn how to combine different two or more tables loaded as Pandas dataframes into one. The NEON API returns data in separate tables for each month and site. Furthermore, a data product package for one site and month usually includes mutiple tables, related to each other but storing different variables. As a result, the data we want for a particular study may be spread across multiple tables. In order to effectively manipulate our data in Python, we will usually want to combine these tables into one. When combining data tables of the same type, we often call this process stacking, since the rows of each table are "stacked" on the rows of the others. When combining tables of different types, we call this joining, and the observations in each table become a related set of observations. Fortunately, the Pandas package includes functions and methods for both stacking and joining data frames. Here we will discuss stacking data frames using a fairly simple concatenation method, and the more complex subject of joining dataframes. For our examples, we will again use breeding landbird count data from NEON's lower Teakettle site. End of explanation """ #Define a function that takes a 'data/' endpoint json and downloads a csv file from it based on provided strings #Input: A json object with results of a NEON 'data/' endpoint API call, two strings indicating file name #Output: A pandas dataframe def get_data(data_json, string1, string2 = ''): for file in data_json['data']['files']: if(string1 in file['name']): if(string2 in file['name']): return pd.read_csv(file['url']) print('No files matching name') return(0) #Request information on data june_req = requests.get(SERVER+'data/'+PRODUCTCODE+'/'+SITECODE+'/'+'2019-06') july_req = requests.get(SERVER+'data/'+PRODUCTCODE+'/'+SITECODE+'/'+'2019-07') june_json = june_req.json() july_json = july_req.json() #Read in basic bird count data for June and July 2019 of lower Teakettle, using the function we defined. 
df_june_count = get_data(june_json, 'countdata', 'basic') df_july_count = get_data(july_json, 'countdata', 'basic') #View first three rows of June data df_june_count.head(3) #View first three rows of July count data df_july_count.head(3) #View shape of each count data table print('June rows and columns: ',df_june_count.shape) print('July rows and columns: ',df_july_count.shape) """ Explanation: Stacking tables As we have seen, NEON stores data of the same type in seperate tables by site and month. But what if we want data spanning multiple months, or from multiple sampling sites? If the data tables are of the same 'type' - such as basic package bird count data - they will have the same columns and headers, so we can simply combine rows from multiple tables into one. From our first tutorial, we know that bird count data taken at Lower Teakettle is available for several months, including both June and July of 2019. For our example here, we will merge the basic bird count data tables for those two months into one. End of explanation """ #Combine data frames along index df_concatenated = pd.concat((df_june_count, df_july_count)) print(df_concatenated.shape) """ Explanation: The two dataframes have matching number of columns, column names, and column data types. We can directly combine them into one child table containing the rows of both parent tables using the Pandas concat function. This takes a list, tuple, or other sequence of data frames, and tries to combine them into one. The concat function can combine rows, concatenating along the row indices, or it can combine columns, concatenating along the header. Which approach is used is determined by the axis parameter; 0 for rows, 1 for columns. The default is 0 for rows, so in this case we don't have to specify. End of explanation """ #Import basic package per point data into Python df_june_point = get_data(june_json, 'perpoint', 'basic') #View shape of new dataframe print(df_june_point.shape) df_june_point.head(3) #View columns of count data table print('Count data') print(df_june_count.dtypes) print('\nPoint Data') print(df_june_point.dtypes) """ Explanation: The new dataframe has 1770 rows, the sum of the 1484 rows from the June data and the 286 rows from the July data. Joining Tables At other times, the data we want may be spread across two or more related tables, each with different attributes. In this case we will want to join the tables together. Joining tables is an important concept in data science and analysis. It requires that we provide a join predicate, so Python knows whether a row in one table should be associated with row(s) in the second table. This is often done by specifying a unique ID column (a key) that tie the observations or data, and this key must be found in both tables. Several different kinds of joins exist, depending on the relationship between the tables and how we want to handle unmatched data. The breeding land bird count data comes with several tables, including the 'Count Data' table (which contains all of the different individual bird observations and their taxonomic information) and the 'Point Data' table (which contains information about the point where the birds were observed (spatial information, vegetation type, etc.) as well as the observation conditions for the day (cloud cover, wind speed, relative humidity, etc.). In order to match up every bird observation with the location and conditions in which it was observed, we will need to join these two tables together. 
For this example, we will download the per point data for June as a dataframe, and join it with the count data for June.
End of explanation
"""
#Print first ten rows of eventID column for each table
for i in range(10):
    print('Count: ', df_june_count['eventID'][i], '\tPoint: ', df_june_point['eventID'][i])
"""
Explanation: It appears that it should be possible to match values between the two eventID columns; in fact, the two columns have identical first entries. If we are using just one key pair (e.g. df_june_count['eventID'] paired with df_june_point['eventID']), we ideally want there to be no repeated values in the key column of the table with fewer rows. This way, every row in the longer table matches to no more than one row in the shorter table. Here the shorter table is the per point data; let's see if the eventID attribute has only unique values.
End of explanation
"""
#Compare shape of whole eventID column to shape of eventID column after removing repeated values.
print(df_june_point['eventID'].shape)
print(pd.unique(df_june_point['eventID']).shape)
"""
Explanation: It looks like there are no repeat values in the eventID column of the per point table. Combined with the fact that the eventID values in the count data table correspond to eventID values in the per point table, we can use this as a key. Our join will use every row of the longer count data table only once, and will duplicate rows of the shorter per point data table as necessary so that the number of rows match. Our final table will have the same number of rows as the count data table.
The next question is what kind of join we want; this determines how the join algorithm deals with rows in one table that lack a matching row in the other. If we perform an Inner Join, any rows from either table that lack a match in the other will be left out of the final table; if all but three rows of the longer table have matching rows in the shorter table, then the final table will have three fewer rows than the longer parent table.
<figure>
 <img src = https://d33wubrfki0l68.cloudfront.net/3abea0b730526c3f053a3838953c35a0ccbe8980/7f29b/diagrams/join-inner.png>
<figcaption>Source: R for Data Science. Image licensed under Creative Commons. <a href="https://r4ds.had.co.nz/">Credit: Wickham and Grolemund, R for Data Science</a>
</figcaption>
</figure>
If we instead perform an Outer Join, then rows that lack a match will be allowed to stay in the final table; the added columns will be "padded" with NA values in the output table. If we only keep unmatched rows from the left table it is a Left Outer Join or Left Join; if we only keep unmatched rows from the right table it is a Right Outer Join or Right Join; and if we keep unmatched rows from both tables it is a Full Outer Join or Full Join.
<figure>
 <img src = https://d33wubrfki0l68.cloudfront.net/9c12ca9e12ed26a7c5d2aa08e36d2ac4fb593f1e/79980/diagrams/join-outer.png>
<figcaption>Source: R for Data Science. Image licensed under Creative Commons. <a href="https://r4ds.had.co.nz/">Credit: Wickham and Grolemund, R for Data Science</a></figcaption>
</figure>
In this example we will use an inner join, as we don't want any rows with missing values in the key fields. The final table will have the same number of rows as the count data table, the longer table, minus any rows that didn't have a match in the point data table.
The next question is which columns from each table we want to include in the final table. The per point data table has quite a few columns that are related to or identical to columns in the count data table; bringing these along would add redundant data.
End of explanation
"""
#View columns of June count data table
print(df_june_count.dtypes)

#View columns of June per point data table
df_june_point.dtypes
"""
Explanation: The two dataframes share the columns 'namedLocation', 'domainID', 'siteID', 'plotID', 'plotType', 'pointID', 'startDate', and 'eventID'. Both tables also have a 'uid' column, but these are just values used to index each row in a table; the 'uid' values in the count data table are not related to the 'uid' values in the point data table. Our merge will bring over values from all of the point data table columns that aren't already present in the count data table. We also won't bring over 'uid' values from the point data table; these would not be of any use in the merged table.
End of explanation
"""
#Create a list of columns to keep; columns only present in the point data, plus eventID
good_columns = []
for column in df_june_point:
    if(column not in df_june_count.columns):
        good_columns.append(column)
good_columns.append('eventID')
print(good_columns)
"""
Explanation: The Pandas package offers a couple of different methods for joining data frames. In this case we will use the .merge method of dataframes. This method is called on one 'left' dataframe, and is passed another 'right' dataframe, along with further parameters indicating how the join is to occur: left_on and right_on indicate the column from the left and right dataframes to be used as keys respectively, and the how parameter is given a string 'inner' (default), 'left', 'right', or 'outer' that determines the type of join performed. You can read the Pandas documentation for pandas.DataFrame.merge here (pandas.pydata.org). Pandas also offers a .join method for the same purpose; that method has fewer parameters, but assumes that the key column in the right table is that table's index.
Now we make the join. First we will use the .filter method and the list we prepared to get a version of the point data table that only contains the desired columns. Then we call .merge on the count data, and pass the prepared data table.
End of explanation
"""
#Prepare point dataframe for joining by narrowing to only the desired columns
df_right = df_june_point.filter(items = good_columns, axis = 1)

#Join data frames
df_joined = df_june_count.merge(df_right, how = 'inner', left_on = 'eventID', right_on = 'eventID')
print(df_joined.columns)
"""
Explanation: Finally, let's check the shape of the joined data table against the original count data table.
End of explanation
"""
print(df_june_count.shape)
print(df_joined.shape)
"""
Explanation: The number of rows is the same in each, so it appears that every entry in the count data was successfully mapped to an entry in the point data.
Finally, let's take a look at the new joined dataframe to check our work. End of explanation """
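For an extra check that the inner join behaved as expected, pandas can report where each row came from via the merge indicator, and the uniqueness of the right-hand key can be confirmed directly. This optional check reuses the dataframes built above:

# confirm the right-hand key is unique, so no count rows get duplicated
assert df_right['eventID'].is_unique

# an outer join with indicator=True labels each row as 'both', 'left_only', or 'right_only'
check = df_june_count.merge(df_right, how='outer', on='eventID', indicator=True)
print(check['_merge'].value_counts())

If every row is labelled 'both', the inner join above dropped nothing.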
thushear/MLInAction
kaggle/titanic_sklearn.ipynb
apache-2.0
import numpy as np import pandas as pd import matplotlib.pyplot as plt import seaborn as sns %matplotlib inline data_train = pd.read_csv('./input/titanic/train.csv') data_test = pd.read_csv('./input/titanic/test.csv') data_train.sample(20) """ Explanation: 熟悉Pandas Sklearn CSV to DataFrame End of explanation """ sns.barplot(x='Embarked',y='Survived',hue='Sex',data=data_train) sns.pointplot(x='Pclass',y='Survived',hue='Sex',data=data_train,palette={'male':'blue','female':'pink'},markers=['*','o'],linestyles=['--','-']) """ Explanation: 可视化数据对于识别模型中潜在的模式十分重要 End of explanation """ data_train.Fare.describe() data_train.Sex.describe() data_train.Name.describe() def simplify_ages(df): print('_'*10) print(df.Age.head(10)) df.Age = df.Age.fillna(-0.5) print('_'*10) print(df.Age.head(10)) bins = (-1,0,5,12,18,25,35,60,120) group_names = ['Unknown','Baby','Child','Teenager','Student','Young adult','Adult','Senior'] categories = pd.cut(df.Age,bins,labels=group_names) df.Age = categories print('_'*10) print(df.Age.head(10)) return df def simplify_cabins(df): df.Cabin = df.Cabin.fillna('N') df.Cabin = df.Cabin.apply(lambda x:x[0]) return df def simplify_fares(df): df.Fare = df.Fare.fillna(-0.5) bins = (-1,0,8,15,31,1000) group_names = ['Unknown','1_quartile','2_quartile','3_quartile','4_quartile'] categories = pd.cut(df.Fare,bins,labels=group_names) df.Fare = categories return df def format_name(df): df['Lname'] = df.Name.apply(lambda x:x.split(' ')[0]) df['NamePrefix'] = df.Name.apply(lambda x:x.split(' ')[1]) return df def drop_features(df): return df.drop(['Ticket','Name','Embarked'],axis=1) def transform_features(df): df = simplify_ages(df) df = simplify_cabins(df) df = simplify_fares(df) df = format_name(df) df = drop_features(df) return df data_train = transform_features(data_train) data_test = transform_features(data_test) print('='*20) data_train.sample(20) sns.barplot(x='Age',y='Survived',hue='Sex',data=data_train) sns.barplot(x='Cabin',y='Survived',hue='Sex',data=data_train) sns.barplot(x='Fare',y='Survived',hue='Sex',data=data_train) """ Explanation: 特征转换 除了'sex'特征之外,'age'是其次重要的特征,如果按照数据集中age的原始值来搞显然太离散了容易降低泛化能力导致过拟合,所以需要处理age将people划分到不同的年龄段组成的组中 Cabin特征每行记录都是以一个字母开头,显然第一个字母比后边的数字的更重要,所以把第一个字母单独抽取出来作为特征 Fare是另一个特征值连续的特征,需要简化,通过data_train.Fare.describe()获取特征的分布, 从name特征中抽取信息而不是使用全名,抽取last name 和 name 前缀称谓(Mr Mrs )然后拼起来作为新的特征 最后丢弃掉没有太大用处的特征(比如Ticket Name) End of explanation """ from sklearn import preprocessing def encode_features(df_train,df_test): features = ['Fare'] """ Explanation: 特征处理的最后阶段 特征预处理的最后阶段是对标签型的数据标准化,skLearn里的LabelEncoder可以将唯一的string值转换成number数值,把数据变得对于各种算法来说更灵活可用.结果是对于人类而言不是太友好,但是对于机器刚刚好的一堆数值. End of explanation """
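The encode_features cell above is only a stub. One possible way to complete it with LabelEncoder is sketched below; the feature list is a guess based on the categorical columns created by transform_features, so adjust it as needed.

from sklearn import preprocessing

def encode_features(df_train, df_test):
    # categorical columns produced by the transform step above (assumed list)
    features = ['Fare', 'Cabin', 'Age', 'Sex', 'Lname', 'NamePrefix']
    df_combined = pd.concat([df_train[features], df_test[features]])

    for feature in features:
        le = preprocessing.LabelEncoder()
        # fit on train+test together so both frames share the same integer codes
        le.fit(df_combined[feature])
        df_train[feature] = le.transform(df_train[feature])
        df_test[feature] = le.transform(df_test[feature])
    return df_train, df_test

data_train, data_test = encode_features(data_train, data_test)
data_train.head()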
ddtm/dl-course
Seminar4/bonus/Bonus-advanced-cnn.ipynb
mit
import numpy as np from cifar import load_cifar10 X_train,y_train,X_val,y_val,X_test,y_test = load_cifar10("cifar_data") class_names = np.array(['airplane','automobile ','bird ','cat ','deer ','dog ','frog ','horse ','ship ','truck']) print X_train.shape,y_train.shape import matplotlib.pyplot as plt %matplotlib inline plt.figure(figsize=[12,10]) for i in range(12): plt.subplot(3,4,i+1) plt.xlabel(class_names[y_train[i]]) plt.imshow(np.transpose(X_train[i],[1,2,0])) """ Explanation: Deep learning for computer vision got no lasagne? Install the bleeding edge version from here: http://lasagne.readthedocs.org/en/latest/user/installation.html Main task This week, we shall focus on the image recognition problem on cifar10 dataset * 60k images of shape 3x32x32 * 10 different classes: planes, dogs, cats, trucks, etc. End of explanation """ import lasagne import theano import theano.tensor as T input_X = T.tensor4("X") #input dimention (None means "Arbitrary") input_shape = [None,3,32,32] target_y = T.vector("target Y integer",dtype='int32') """ Explanation: lasagne lasagne is a library for neural network building and training it's a low-level library with almost seamless integration with theano End of explanation """ #Input layer (auxilary) input_layer = lasagne.layers.InputLayer(shape = input_shape,input_var=input_X) #fully connected layer, that takes input layer and applies 50 neurons to it. # nonlinearity here is sigmoid as in logistic regression # you can give a name to each layer (optional) dense_1 = lasagne.layers.DenseLayer(input_layer,num_units=100, nonlinearity = lasagne.nonlinearities.sigmoid, name = "hidden_dense_layer") #fully connected output layer that takes dense_1 as input and has 10 neurons (1 for each digit) #We use softmax nonlinearity to make probabilities add up to 1 dense_output = lasagne.layers.DenseLayer(dense_1,num_units = 10, nonlinearity = lasagne.nonlinearities.softmax, name='output') #network prediction (theano-transformation) y_predicted = lasagne.layers.get_output(dense_output) #all network weights (shared variables) all_weights = lasagne.layers.get_all_params(dense_output,trainable=True) print all_weights """ Explanation: Defining network architecture End of explanation """ #Mean categorical crossentropy as a loss function - similar to logistic loss but for multiclass targets loss = lasagne.objectives.categorical_crossentropy(y_predicted,target_y).mean() #prediction accuracy (WITH dropout) accuracy = lasagne.objectives.categorical_accuracy(y_predicted,target_y).mean() #This function computes gradient AND composes weight updates just like you did earlier updates_sgd = lasagne.updates.sgd(loss, all_weights,learning_rate=0.01) #function that computes loss and updates weights train_fun = theano.function([input_X,target_y],[loss,accuracy],updates= updates_sgd) #deterministic prediciton (without dropout) y_predicted_det = lasagne.layers.get_output(dense_output,deterministic=True) #prediction accuracy (without dropout) accuracy_det = lasagne.objectives.categorical_accuracy(y_predicted_det,target_y).mean() #function that just computes accuracy without dropout/noize -- for evaluation purposes accuracy_fun = theano.function([input_X,target_y],accuracy_det) """ Explanation: Than you could simply define loss function manually compute error gradient over all weights define updates But that's a whole lot of work and life's short not to mention life's too short to wait for SGD to converge Instead, we shall use Lasagne builtins End of explanation """ # An auxilary function that 
returns mini-batches for neural network training #Parameters # X - a tensor of images with shape (many, 3, 32, 32), e.g. X_train # y - a vector of answers for corresponding images e.g. Y_train #batch_size - a single number - the intended size of each batches #What do need to implement # 1) Shuffle data # - Gotta shuffle X and y the same way not to break the correspondence between X_i and y_i # 3) Split data into minibatches of batch_size # - If data size is not a multiple of batch_size, make one last batch smaller. # 4) return a list (or an iterator) of pairs # - (подгруппа картинок, ответы из y на эту подгруппу) def iterate_minibatches(X, y, batchsize): <return an iterable of (X_batch, y_batch) batches of images and answers for them> # # # # # # # # # # # # # # # # # # # # # # # # You feel lost and wish you stayed home tonight? # Go search for a similar function at # https://github.com/Lasagne/Lasagne/blob/master/examples/mnist.py """ Explanation: That's all, now let's train it! We got a lot of data, so it's recommended that you use SGD So let's implement a function that splits the training sample into minibatches End of explanation """ import time num_epochs = 100 #amount of passes through the data batch_size = 50 #number of samples processed at each function call for epoch in range(num_epochs): # In each epoch, we do a full pass over the training data: train_err = 0 train_acc = 0 train_batches = 0 start_time = time.time() for batch in iterate_minibatches(X_train, y_train,batch_size): inputs, targets = batch train_err_batch, train_acc_batch= train_fun(inputs, targets) train_err += train_err_batch train_acc += train_acc_batch train_batches += 1 # And a full pass over the validation data: val_acc = 0 val_batches = 0 for batch in iterate_minibatches(X_val, y_val, batch_size): inputs, targets = batch val_acc += accuracy_fun(inputs, targets) val_batches += 1 # Then we print the results for this epoch: print("Epoch {} of {} took {:.3f}s".format( epoch + 1, num_epochs, time.time() - start_time)) print(" training loss (in-iteration):\t\t{:.6f}".format(train_err / train_batches)) print(" train accuracy:\t\t{:.2f} %".format( train_acc / train_batches * 100)) print(" validation accuracy:\t\t{:.2f} %".format( val_acc / val_batches * 100)) test_acc = 0 test_batches = 0 for batch in iterate_minibatches(X_test, y_test, 500): inputs, targets = batch acc = accuracy_fun(inputs, targets) test_acc += acc test_batches += 1 print("Final results:") print(" test accuracy:\t\t{:.2f} %".format( test_acc / test_batches * 100)) if test_acc / test_batches * 100 > 95: print "Double-check, than consider applying for NIPS'17. SRSly." elif test_acc / test_batches * 100 > 90: print "U'r freakin' amazin'!" elif test_acc / test_batches * 100 > 80: print "Achievement unlocked: 110lvl Warlock!" elif test_acc / test_batches * 100 > 70: print "Achievement unlocked: 80lvl Warlock!" elif test_acc / test_batches * 100 > 50: print "Achievement unlocked: 60lvl Warlock!" else: print "We need more magic!" 
""" Explanation: Training loop End of explanation """ import numpy as np from cifar import load_cifar10 X_train,y_train,X_val,y_val,X_test,y_test = load_cifar10("cifar_data") class_names = np.array(['airplane','automobile ','bird ','cat ','deer ','dog ','frog ','horse ','ship ','truck']) print X_train.shape,y_train.shape import lasagne input_X = T.tensor4("X") #input dimention (None means "Arbitrary" and only works at the first axes [samples]) input_shape = [None,3,32,32] target_y = T.vector("target Y integer",dtype='int32') #Input layer (auxilary) input_layer = lasagne.layers.InputLayer(shape = input_shape,input_var=input_X) <student.code_neural_network_architecture()> dense_output = <your network output> # Network predictions (theano-transformation) y_predicted = lasagne.layers.get_output(dense_output) #All weights (shared-varaibles) # "trainable" flag means not to return auxilary params like batch mean (for batch normalization) all_weights = lasagne.layers.get_all_params(dense_output,trainable=True) print all_weights #loss function loss = <loss function> #<optionally add regularization> #accuracy with dropout/noize accuracy = lasagne.objectives.categorical_accuracy(y_predicted,target_y).mean() #weight updates updates = <try different update methods> #A function that accepts X and y, returns loss functions and performs weight updates train_fun = theano.function([input_X,target_y],[loss,accuracy],updates= updates_sgd) #deterministic prediciton (without dropout) y_predicted_det = lasagne.layers.get_output(dense_output) #prediction accuracy (without dropout) accuracy_det = lasagne.objectives.categorical_accuracy(y_predicted_det,target_y).mean() #function that just computes accuracy without dropout/noize -- for evaluation purposes accuracy_fun = theano.function([input_X,target_y],accuracy_det) #итерации обучения num_epochs = <how many times to iterate over the entire training set> batch_size = <how many samples are processed at a single function call> for epoch in range(num_epochs): # In each epoch, we do a full pass over the training data: train_err = 0 train_acc = 0 train_batches = 0 start_time = time.time() for batch in iterate_minibatches(X_train, y_train,batch_size): inputs, targets = batch train_err_batch, train_acc_batch= train_fun(inputs, targets) train_err += train_err_batch train_acc += train_acc_batch train_batches += 1 # And a full pass over the validation data: val_acc = 0 val_batches = 0 for batch in iterate_minibatches(X_val, y_val, batch_size): inputs, targets = batch val_acc += accuracy_fun(inputs, targets) val_batches += 1 # Then we print the results for this epoch: print("Epoch {} of {} took {:.3f}s".format( epoch + 1, num_epochs, time.time() - start_time)) print(" training loss (in-iteration):\t\t{:.6f}".format(train_err / train_batches)) print(" train accuracy:\t\t{:.2f} %".format( train_acc / train_batches * 100)) print(" validation accuracy:\t\t{:.2f} %".format( val_acc / val_batches * 100)) test_acc = 0 test_batches = 0 for batch in iterate_minibatches(X_test, y_test, 500): inputs, targets = batch acc = accuracy_fun(inputs, targets) test_acc += acc test_batches += 1 print("Final results:") print(" test accuracy:\t\t{:.2f} %".format( test_acc / test_batches * 100)) if test_acc / test_batches * 100 > 80: print "Achievement unlocked: 80lvl Warlock!" else: print "We need more magic!" 
""" Explanation: First step Let's create a mini-convolutional network with roughly such architecture: * Input layer * 3x3 convolution with 10 filters and ReLU activation * 3x3 pooling (or set previous convolution stride to 3) * Dense layer with 100-neurons and ReLU activation * 10% dropout * Output dense layer. Train it with Adam optimizer with default params. Second step Add batch_norm (with default params) between convolution and pooling Re-train the network with the same optimizer Quest For A Better Network (please read it at least diagonally) The ultimate quest is to create a network that has as high accuracy as you can push it. There is a mini-report at the end that you will have to fill in. We recommend reading it first and filling it while you iterate. Grading starting at zero points +2 for describing your iteration path in a report below. +2 for building a network that gets above 20% accuracy +1 for beating each of these milestones on TEST dataset: 50% (5 total) 60% (6 total) 65% (7 total) 70% (8 total) 75% (9 total) 80% (10 total) Bonus points Common ways to get bonus points are: * Get higher score, obviously. * Anything special about your NN. For example "A super-small/fast NN that gets 80%" gets a bonus. * Any detailed analysis of the results. (saliency maps, whatever) Restrictions Please do NOT use pre-trained networks for this assignment until you reach 80%. In other words, base milestones must be beaten without pre-trained nets (and such net must be present in the e-mail). After that, you can use whatever you want. you can use validation data for training, but you can't' do anything with test data apart from running the evaluation procedure. Tips on what can be done: Network size MOAR neurons, MOAR layers, (lasagne docs) Nonlinearities in the hidden layers tanh, relu, leaky relu, etc Larger networks may take more epochs to train, so don't discard your net just because it could didn't beat the baseline in 5 epochs. Ph'nglui mglw'nafh Cthulhu R'lyeh wgah'nagl fhtagn! Convolution layers they are a must unless you have any super-ideas network = lasagne.layers.Conv2DLayer(prev_layer, num_filters = n_neurons, filter_size = (filter width, filter height), nonlinearity = some_nonlinearity) Warning! Training convolutional networks can take long without GPU. That's okay. If you are CPU-only, we still recomment to try a simple convolutional architecture a perfect option is if you can set it up to run at nighttime and check it up at the morning. Make reasonable layer size estimates. A 128-neuron first convolution is likely an overkill. To reduce computation time by a factor in exchange for some accuracy drop, try using stride parameter. A stride=2 convolution should take roughly 1/4 of the default (stride=1) one. Plenty other layers and architectures http://lasagne.readthedocs.org/en/latest/modules/layers.html batch normalization, pooling, etc Early Stopping Training for 100 epochs regardless of anything is probably a bad idea. Some networks converge over 5 epochs, others - over 500. Way to go: stop when validation score is 10 iterations past maximum Faster optimization - rmsprop, nesterov_momentum, adam, adagrad and so on. Converge faster and sometimes reach better optima It might make sense to tweak learning rate/momentum, other learning parameters, batch size and number of epochs BatchNormalization (lasagne.layers.batch_norm) FTW! 
Regularize to prevent overfitting Add some L2 weight norm to the loss function, theano will do the rest Can be done manually or via - http://lasagne.readthedocs.org/en/latest/modules/regularization.html Dropout - to prevent overfitting lasagne.layers.DropoutLayer(prev_layer, p=probability_to_zero_out) Don't overdo it. Check if it actually makes your network better Data augmemntation - getting 5x as large dataset for free is a great deal Zoom-in+slice = move Rotate+zoom(to remove black stripes) any other perturbations Add Noize (easiest: GaussianNoizeLayer) Simple way to do that (if you have PIL/Image): from scipy.misc import imrotate,imresize and a few slicing Stay realistic. There's usually no point in flipping dogs upside down as that is not the way you usually see them. There is a template for your solution below that you can opt to use or throw away and write it your way End of explanation """
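Since iterate_minibatches is left as an exercise above, one possible plain-numpy implementation, in the spirit of the Lasagne MNIST example the comments point to, is sketched here:

import numpy as np

def iterate_minibatches(X, y, batchsize, shuffle=True):
    # shuffle X and y with the same permutation so pairs stay aligned
    indices = np.arange(len(X))
    if shuffle:
        np.random.shuffle(indices)
    # step through the data in chunks of batchsize; the last chunk may be smaller
    for start in range(0, len(X), batchsize):
        batch_idx = indices[start:start + batchsize]
        yield X[batch_idx], y[batch_idx]

The generator form keeps memory use low, and the optional shuffle flag lets the same helper be reused for validation passes where shuffling is unnecessary.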
neurodata/ndmg
tutorials/Qa_skullstrip.ipynb
apache-2.0
#import packages import warnings warnings.simplefilter("ignore") import sys import nibabel as nib import numpy as np import os from PIL import Image, ImageDraw,ImageFont import matplotlib.pyplot as plt from m2g.stats.qa_skullstrip import gen_overlay_pngs """ Explanation: Tutorial for QA of Skull Strip This tutorial is designed to illustrate the QA code used to generate the figure for the stripped brain and skull. This code will do 3 main things: Input original t1w file and the skull-striped brain file Shows the skull-stripped brain (green) overlaid on the original t1w (magenta) save the image into output directory So that we can open the QA image to see if the skull strip result is good or not. Import packages qa_skullstrip uses libraries and functions from nibabel, numpy, and m2g End of explanation """ def gen_overlay_pngs( brain, original, outdir, loc=0, mean=False, minthr=2, maxthr=95, edge=False): """Generate a QA image for skullstrip. will call the function plot_overlays_skullstrip Parameters ---------- brain: nifti file Path to the skull-stripped nifti brain original: nifti file Path to the original t1w brain, with the skull included outdir: str Path to the directory where QA will be saved loc: int which dimension of the 4d brain data to use mean: bool whether to calculate the mean of the 4d brain data If False, the loc=0 dimension of the data (mri_data[:, :, :, loc]) is used minthr: int lower percentile threshold maxthr: int upper percentile threshold edge: bool whether to use normalized luminance data If None, the respective min and max of the color array is used. """ original_name = get_filename(original) brain_data = nb.load(brain).get_data() if brain_data.ndim == 4: # 4d data, so we need to reduce a dimension if mean: brain_data = brain_data.mean(axis=3) else: brain_data = brain_data[:, :, :, loc] fig = plot_overlays_skullstrip(brain_data, original) # name and save the file fig.savefig(f"{outdir}/qa_skullstrip__{original_name}.png", format="png") """ Explanation: gen_overlay_pngs The skullstrip qa images are created using the function gen_overlay_pngs, which will call the function plot_overlays_skullstrip End of explanation """ def plot_overlays_skullstrip(brain, original, cmaps=None, minthr=2, maxthr=95, edge=False): """Shows the skull-stripped brain (green) overlaid on the original t1w (magenta) Parameter --------- brain: str, nifti image, numpy.ndarray an object to open the data for a skull-stripped brain. Can be a string (path to a brain file), nibabel.nifti1.nifti1image, or a numpy.ndarray. original: str, nifti image, numpy.ndarray an object to open the data for t1w brain, with the skull included. Can be a string (path to a brain file), nibabel.nifti1.nifti1image, or a numpy.ndarray. cmaps: matplotlib colormap objects colormap objects based on lookup tables using linear segments. minthr: int lower percentile threshold maxthr: int upper percentile threshold edge: bool whether to use normalized luminance data If None, the respective min and max of the color array is used. 
Returns --------- foverlay: matplotlib.figure.Figure """ plt.rcParams.update({"axes.labelsize": "x-large", "axes.titlesize": "x-large"}) foverlay = plt.figure() original = get_braindata(original) brain_shape = get_braindata(brain).shape brain = get_braindata(brain) if original.shape != brain.shape: raise ValueError("Two files are not the same shape.") brain = pad_im(brain, max(brain_shape[0:3]), pad_val=0, rgb=False) original = pad_im(original,max(brain_shape[0:3]), pad_val=0, rgb=False) if cmaps is None: cmap1 = LinearSegmentedColormap.from_list("mycmap1", ["white", "magenta"]) cmap2 = LinearSegmentedColormap.from_list("mycmap2", ["white", "green"]) cmaps = [cmap1, cmap2] x, y, z = get_true_volume(brain) coords = (x, y, z) labs = [ "Sagittal Slice", "Coronal Slice", "Axial Slice", ] var = ["X", "Y", "Z"] # create subplot for first slice # and customize all labels idx = 0 if edge: min_val = 0 max_val = 1 else: min_val, max_val = get_min_max(brain, minthr, maxthr) for i, coord in enumerate(coords): for pos in coord: idx += 1 ax = foverlay.add_subplot(3, 3, idx) ax.set_title(var[i] + " = " + str(pos)) if i == 0: image = ndimage.rotate(brain[pos, :, :], 90) atl = ndimage.rotate(original[pos, :, :], 90) elif i == 1: image = ndimage.rotate(brain[:, pos, :], 90) atl = ndimage.rotate(original[:, pos, :], 90) else: image = ndimage.rotate(brain[:, :, pos], 0) atl = ndimage.rotate(original[:, :, pos], 0) if idx % 3 == 1: ax.set_ylabel(labs[i]) ax.yaxis.set_ticks([0, image.shape[0] / 2, image.shape[0] - 1]) ax.xaxis.set_ticks([0, image.shape[1] / 2, image.shape[1] - 1]) if edge: image = edge_map(image).data image[image > 0] = max_val image[image == 0] = min_val # Set the axis invisible plt.xticks([]) plt.yticks([]) # Set the frame invisible ax.spines['top'].set_visible(False) ax.spines['right'].set_visible(False) ax.spines['bottom'].set_visible(False) ax.spines['left'].set_visible(False) ax.imshow(atl, interpolation="none", cmap=cmaps[0], alpha=0.9) ax.imshow( opaque_colorscale( cmaps[1], image, alpha=0.9, vmin=min_val, vmax=max_val ) ) if idx ==3: plt.plot(0, 0, "-", c="magenta", label='skull') plt.plot(0, 0, "-", c="green", label='brain') # box = ax.get_position() # ax.set_position([box.x0, box.y0, box.width, box.height*0.8]) plt.legend(loc='best', fontsize=15, frameon=False, bbox_to_anchor=(1.5, 1.5)) # Set title for the whole picture a, b, c = brain_shape title = 'Skullstrip QA. 
Scan Volume : ' + str(a) + '*' + str(b) + '*' + str(c) foverlay.suptitle(title, fontsize=24) foverlay.set_size_inches(12.5, 10.5, forward=True) return foverlay """ Explanation: plot_overlays_skullstrip Skull-strip qa calls plot_overlays_skullstrip which shows the skull-stripped brain (green) overlaid on the original t1w (magenta) End of explanation """ original = r'/mnt/f/JHU/ndd/dataset/part_of_SWU4/sub-0025629_ses-1_T1w.nii.gz' outdir = r'/mnt/f/JHU/ndd/dataset/output1/sub-0025864/ses-1/' """ Explanation: Inputs original is the path to the original t1w brain, with the skull included outdir is the path to the directory where QA image will be saved you can change it to your own path End of explanation """ brainfile = f"{outdir}only_brain.nii.gz" cmd = f"3dSkullStrip -prefix {brainfile} -input {original} -ld 30" os.system(cmd) """ Explanation: Run AFNI 3dSkullStrip to do skull strip Use AFNI 3dSkullStrip to do skull strip for the original t1w brain End of explanation """ %matplotlib inline gen_overlay_pngs(brainfile, original,outdir, loc=0, mean=False, minthr=2, maxthr=95, edge=False) """ Explanation: Run qa_skullstrip.py Call the function gen_overlay_pngs End of explanation """
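To inspect the QA figure that gen_overlay_pngs writes, a short follow-up cell can list and display the saved PNG; the file name pattern follows the f-string used in the function above:

import os
import matplotlib.pyplot as plt
import matplotlib.image as mpimg

# the function saves the figure as qa_skullstrip__<original name>.png in outdir
qa_files = [f for f in os.listdir(outdir) if f.startswith('qa_skullstrip__')]
print(qa_files)

if qa_files:
    img = mpimg.imread(os.path.join(outdir, qa_files[0]))
    plt.figure(figsize=(12, 10))
    plt.imshow(img)
    plt.axis('off')
    plt.show()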
synthicity/activitysim
activitysim/examples/example_estimation/notebooks/09_school_tour_scheduling.ipynb
agpl-3.0
import os import larch # !conda install larch -c conda-forge # for estimation import pandas as pd """ Explanation: Estimating School Tour Scheduling This notebook illustrates how to re-estimate the mandatory tour scheduling component for ActivitySim. This process includes running ActivitySim in estimation mode to read household travel survey files and write out the estimation data bundles used in this notebook. To review how to do so, please visit the other notebooks in this directory. Load libraries End of explanation """ os.chdir('test') """ Explanation: We'll work in our test directory, where ActivitySim has saved the estimation data bundles. End of explanation """ modelname = "mandatory_tour_scheduling_school" from activitysim.estimation.larch import component_model model, data = component_model(modelname, return_data=True) """ Explanation: Load data and prep model for estimation End of explanation """ data.coefficients """ Explanation: Review data loaded from the EDB The next (optional) step is to review the EDB, including the coefficients, utilities specification, and chooser and alternative data. Coefficients End of explanation """ data.spec """ Explanation: Utility specification End of explanation """ data.chooser_data """ Explanation: Chooser data End of explanation """ data.alt_values """ Explanation: Alternatives data End of explanation """ model.estimate() """ Explanation: Estimate With the model setup for estimation, the next step is to estimate the model coefficients. Make sure to use a sufficiently large enough household sample and set of zones to avoid an over-specified model, which does not have a numerically stable likelihood maximizing solution. Larch has a built-in estimation methods including BHHH, and also offers access to more advanced general purpose non-linear optimizers in the scipy package, including SLSQP, which allows for bounds and constraints on parameters. BHHH is the default and typically runs faster, but does not follow constraints on parameters. End of explanation """ model.parameter_summary() """ Explanation: Estimated coefficients End of explanation """ from activitysim.estimation.larch import update_coefficients result_dir = data.edb_directory/"estimated" update_coefficients( model, data, result_dir, output_file=f"{modelname}_coefficients_revised.csv", ); """ Explanation: Output Estimation Results End of explanation """ model.to_xlsx( result_dir/f"{modelname}_model_estimation.xlsx", data_statistics=False, ) """ Explanation: Write the model estimation report, including coefficient t-statistic and log likelihood End of explanation """ pd.read_csv(result_dir/f"{modelname}_coefficients_revised.csv") """ Explanation: Next Steps The final step is to either manually or automatically copy the *_coefficients_revised.csv file to the configs folder, rename it to *_coefficients.csv, and run ActivitySim in simulation mode. End of explanation """
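The copy-and-rename described under Next Steps can also be scripted rather than done by hand. This is a rough sketch only; the configs path is an assumption and should point at your own model's configs folder, and the destination file name must match what the simulation configs expect.

import shutil
from pathlib import Path

# assumed location of the simulation configs; adjust to your setup
configs_dir = Path('..') / 'configs'

src = result_dir / f"{modelname}_coefficients_revised.csv"
dst = configs_dir / f"{modelname}_coefficients.csv"
shutil.copyfile(src, dst)
print(f"copied {src} -> {dst}")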
grcanosa/code-playground
scrum/pandasCSV/csvRedminePandas1.ipynb
mit
from IPython.display import HTML from IPython.display import display HTML('''<script> code_show=true; function code_toggle() { if (code_show){ $('div.input').hide(); } else { $('div.input').show(); } code_show = !code_show } $( document ).ready(code_toggle); </script> <form action="javascript:code_toggle()"><input type="submit" value="Click here to show or hide the code"></form>''') ## CONFIGURATION SPRINT_ACTUAL = "export9.csv" SPRINT_ANTERIOR = "export8.csv" #DATE_START = "20170515" #DATE_END = "20170526" #DATES_HOLIDAYS = [] print("CONFIGURATION") print("CURRENT SPRINT: "+SPRINT_ACTUAL) print("PREVIOUS SPRINT: "+SPRINT_ANTERIOR) ## IMPORT AND LOADING import pandas as pd import math col_names = ["#","Estado","Proyecto","Tipo","% Realizado","Tiempo estimado","Tarjeta","Versión prevista"] new_col_names = ["Issue","Estado","Proyecto","Tipo","Porcentaje","Tiempo","Tarjeta","Version"] col_names2 = ["#","Estado","Proyecto","Tipo","% Realizado","Tiempo estimado","Versión prevista"] new_col_names2 = ["Issue","Estado","Proyecto","Tipo","Porcentaje","Tiempo","Version"] grupos_tarjetas = {"SVV":"SVV", "TC":"TC", "MON":"SVE", "SVD":"SVE", "PC":"PC"} finished_states = ["Pte Validacion","Corregida","Resuelta"] blocked_states = ["Bloqueada"] special_types = ["Soporte","Unexpected"] dat = pd.read_csv(SPRINT_ACTUAL,sep=";",header=0,usecols=col_names,encoding="iso-8859-15",decimal=",") dat = dat.rename(columns=dict(zip(col_names,new_col_names))) datPrev = pd.read_csv(SPRINT_ANTERIOR,sep=";",header=0,usecols=col_names2,encoding="iso-8859-15",decimal=",") datPrev = datPrev.rename(columns=dict(zip(col_names2,new_col_names2))) dat.loc[dat["Issue"] == 14169] """ Explanation: Sprint Report End of explanation """ def check_bad_time(DF): if pd.isnull(DF["Tiempo"]).any() | (DF["Tiempo"] == 0).any(): display("List of issues with an invalid or zero time (removed from the list from now on)") index = DF.loc[(pd.isnull(DF["Tiempo"])) | (DF["Tiempo"] == 0),:].index display(DF.loc[index,:]) DF.drop(index,inplace=True) else: display("All issues have valid times") print("Current sprint: "+SPRINT_ACTUAL) check_bad_time(dat) print("Previous sprint: "+SPRINT_ANTERIOR) check_bad_time(datPrev) """ Explanation: Issues with errors Task time End of explanation """ def check_inconsistent_state(DF): inconsistent = DF[(DF["Estado"].isin(finished_states)) & (DF["Porcentaje"] < 100)] if inconsistent.empty: print("All issues have a consistent state/percentage") else: print("List of issues with State/Percentage inconsistencies (removed)") display(inconsistent) DF.drop(inconsistent.index,inplace=True) print("Current sprint: "+SPRINT_ACTUAL) check_inconsistent_state(dat) print("Previous sprint: "+SPRINT_ANTERIOR) check_inconsistent_state(datPrev) """ Explanation: State/Percentage inconsistencies End of explanation """ def check_soporte(DF): soporte = DF[(DF["Tipo"] == "Soporte") & (~DF["Version"].str.contains("Soporte"))] if soporte.empty: print("All support tasks are in the correct sprint") else: print("List of support-type tasks in the wrong sprint (kept)") display(soporte) print("Current sprint: "+SPRINT_ACTUAL) check_soporte(dat) print("Previous sprint: "+SPRINT_ANTERIOR) check_soporte(datPrev) """ Explanation: Soporte type / Soporte sprint inconsistencies End of explanation """
def check_tarjeta(DF): tarjetaNaN = DF.loc[pd.isnull(DF["Tarjeta"])] if tarjetaNaN.empty: print("All issues have a card assigned") else: print("List of issues with no card assigned (kept)") display(tarjetaNaN) print("Current sprint: "+SPRINT_ACTUAL) check_tarjeta(dat) """ Explanation: Card inconsistencies End of explanation """ #dat.groupby("Proyecto").loc["Tiempo","TiempoReal"].sum().reset_index() # for proy in dat.Proyecto.unique(): # print("Proyecto {:>20s} ==> {:>6.1f} horas".format(proy,dat.loc[dat["Proyecto"] == proy,"Tiempo"].sum())) dat["TiempoFinalizado"] = dat["Tiempo"]*dat["Porcentaje"]/100; dat["TiempoPendiente"] =dat["Tiempo"]*(100-dat["Porcentaje"])/100 datPrev["TiempoFinalizado"] = datPrev["Tiempo"]*datPrev["Porcentaje"]/100; datPrev["TiempoPendiente"] = datPrev["Tiempo"]*(100-datPrev["Porcentaje"])/100 # Assign each task to its card group dat["Tarjeta"] = dat["Tarjeta"].map(grupos_tarjetas).fillna("UNK").astype(str) #display(dat) display(dat.groupby(["Proyecto","Version","Tarjeta"])["Tiempo","TiempoFinalizado","TiempoPendiente"].sum()) """ Explanation: Statistics per project The total hours of each project are computed, excluding tasks of type Soporte or Unexpected. TiempoFinalizado is the time derived from the completion percentage. TiempoPendiente is the difference between the total time and the finished time. End of explanation """ display(dat.groupby(["Proyecto"])["Tiempo","TiempoPendiente"].sum()) """ Explanation: Total pending hours per project The pending times of all the versions (Backlog and Sprint) are added up. Support tasks have not been removed, but they should have TiempoPendiente equal to 0, so they should not affect the result. End of explanation """ dat.loc[dat["Tipo"] == "Soporte",["Tiempo","TiempoFinalizado"]].sum() """ Explanation: Total support hours End of explanation """ # Keep only the project and the times of tasks that are in neither the Backlog nor the Soporte sprint dat["TiempoSprintAnterior"] = dat["Issue"].map(datPrev.set_index("Issue")["TiempoFinalizado"]).fillna(0).astype(int) sprint=dat[~dat["Version"].str.contains('BackLog|Soporte|Backlog')] sprintT=sprint.groupby("Proyecto")["Tiempo","TiempoFinalizado","TiempoSprintAnterior"].sum() sprintT["TiempoFinalizadoReal"] = sprintT["TiempoFinalizado"]-sprintT["TiempoSprintAnterior"] display(sprintT) display("Total times:") display(sprintT.sum(numeric_only=True)) """ Explanation: Current sprint The following figures refer to the current sprint: hours completed, taking into account the hours already completed up to the previous sprint. End of explanation """ print("Total time completed in this sprint:") print("Time from sprint tasks: ") tSprint = sprintT.sum(numeric_only=True)["TiempoFinalizadoReal"] display(tSprint) print("Time from support tasks: ") tSoporte = dat.loc[dat["Tipo"] == "Soporte",["Tiempo","TiempoFinalizado"]].sum()["TiempoFinalizado"] display(tSoporte) print("Total time: ") display(tSprint+tSoporte) """ Explanation: Sprint summary End of explanation """
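To make the finished/pending bookkeeping used throughout this report easy to check, here is a minimal, self-contained sketch on made-up data (the issue numbers and hours below are synthetic, not from the export files):

```python
# Toy illustration of the finished/pending split used above (synthetic data).
import pandas as pd

toy = pd.DataFrame({"Issue": [1, 2], "Tiempo": [10.0, 8.0], "Porcentaje": [50, 100]})
toy["TiempoFinalizado"] = toy["Tiempo"] * toy["Porcentaje"] / 100      # hours already done
toy["TiempoPendiente"] = toy["Tiempo"] * (100 - toy["Porcentaje"]) / 100  # hours still pending
print(toy)
```

A half-done 10-hour issue contributes 5 finished and 5 pending hours, while a completed issue contributes no pending time, which is exactly what the per-project sums above rely on.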
DaveBackus/Data_Bootcamp
Code/IPython/bootcamp_indicators.ipynb
mit
# import packages import pandas as pd # data management import matplotlib.pyplot as plt # graphics import numpy as np # numerical calculations # IPython command, puts plots in notebook %matplotlib inline # check Python version import datetime as dt import sys print('Today is', dt.date.today()) print('What version of Python are we running? \n', sys.version, sep='') """ Explanation: Data Bootcamp: Economic indicators We explore two kinds of economic indicators: Business cycle indicators. Here we look at a number of different measures of current economic activity. We do this for the US, but similar methods are used in other countries. We use monthly indicators, which give us a picture of the current state of the economy, and generally do it more quickly than waiting for quarterly data on aggregate GDP. Country indicators. Here the goal is to assess the economic and business climate in a country for a specific business opportunity. Should we locate a business analytics startup in Barcelona, Paris, or Stockholm? Should we open a new factory in China, Thailand, or Vietnam? Should we expand a meat production and retail operation beyond Zambia to other countries in southern Africa? This IPython notebook was created by Dave Backus for the NYU Stern course Data Bootcamp. Preliminaries Import packages and check code versions. End of explanation """ # get data from FRED import pandas as pd import pandas.io.data as web # web interface with FRED import datetime as dt # handles dates # get data indicators = ['INDPRO', 'PAYEMS', 'AWHMAN', 'PERMIT', 'NAPM', 'RSXFS'] start_date = dt.datetime(1970, 1, 1) inds = web.DataReader(indicators, "fred", start_date) end = inds.index[-1] # yoy growth rates g = inds.pct_change(periods=12).dropna() # standardize gs = (g - g.mean()) / g.std() # correlations gs.corr() # plot fig, ax = plt.subplots() gs.plot(ax=ax) ax.set_title('Economic indicators', fontsize=14, loc='left') ax.set_ylabel('Standard deviations from mean') ax.set_xlabel('') ax.hlines(y=0, xmin=start_date, xmax=end, linestyles='dashed') #ax.legend().set_visible(False) ax.legend(loc='upper left', fontsize=10, handlelength=2, labelspacing=0.15) # focus on recent past recent_date = dt.datetime(2005, 1, 1) grecent= gs[gs.index>=recent_date] fig, ax = plt.subplots() grecent.plot(ax=ax) ax.set_title('Zoom in on recent past', fontsize=14, loc='left') ax.set_ylabel('Standard deviations from mean') ax.set_xlabel('') ax.hlines(y=0, xmin=recent_date, xmax=end, linestyles='dashed') ax.legend(loc='upper left', fontsize=10, handlelength=2, labelspacing=0.15) """ Explanation: Business cycle indicators We assess the state of the US economy with a collection of monthly indicators that (mostly) move up and down with the economy. We get the data from FRED, the St Louis Fed's popular data collection. There are lots of indicators to choose from, but we use INDPRO: industrial production PAYEMS: nonfarm employment AWHMAN: average weekly hours worked in manufacturing PERMIT: premits for new housing NAPM: purchasing managers index RSXFS: retail sales (excluding food services) For each indicator the first term is the FRED code, the second a description. You can find more about this kind of thing in our Global Economy book, chapter 11. Also in bank reports, which review this kind of information constantly. 
End of explanation """ fig, ax = plt.subplots() heatmap = ax.pcolor(gs.T, cmap=plt.cm.Blues) ax.invert_yaxis() ax.xaxis.tick_top() #ax.set_yticks(range(5)+0.5) ax.set_xticklabels(gs.index, minor=False) ax.set_yticklabels(gs.columns, minor=False) """ Explanation: Question. How do things look now? Keep in mind that zero is average, which has been pretty good on the whole. Anything between -1 and +1 is ok: it's within one standard deviation of the average. Heatmap See StackOverflow How would you fix this up? End of explanation """ from pandas.io import wb # World Bank api # read data from World Bank iso = ['ZMB', 'BWA', 'TZA'] # country list (ISO codes) var = ['NY.GDP.PCAP.PP.KD', # GDP per person 'SP.POP.TOTL', # population 'IC.BUS.EASE.XQ', # ease of doing business (rank of 189) 'IS.ROD.PAVE.ZS', # paved roads (percent of total) 'SE.ADT.LITR.ZS'] # adult literacy (15 and up) year = 2014 df = wb.download(indicator=var, country=iso, start=2005, end=2014) df """ Explanation: Radar plot ?? Country indicators: Opportunities in Southern Africa Zambeef is a successful meat distributor located in Zambia. Their leadership wonders whether their operation can be expanded to include neighboring Botswana and Tanzania. We collect a number of economic and institutional indicators to assess the business climates in the three countries and ask: What features of an economy are import to this business? What indicators of these features can we find in the World Bank's data? How do the three countries compare on these features? What locations look the most attractive to you? We start by looking through the World Bank's enormous collection of country indicators and using Pandas' World Bank API to access the numbers. End of explanation """
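As a postscript to the heatmap question above ("How would you fix this up?"), one common fix is to place the ticks at the centre of each cell and label them from the DataFrame itself. The sketch below uses a small synthetic frame as a stand-in for gs, since the idea does not depend on the FRED download:

```python
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

# synthetic stand-in for the standardized indicator DataFrame `gs`
dates = pd.date_range("2014-01-01", periods=12, freq="M")
fake = pd.DataFrame(np.random.randn(12, 3), index=dates,
                    columns=["INDPRO", "PAYEMS", "PERMIT"])

fig, ax = plt.subplots()
ax.pcolor(fake.T.values, cmap=plt.cm.Blues)
ax.set_yticks(np.arange(len(fake.columns)) + 0.5)   # ticks at cell centres
ax.set_yticklabels(fake.columns)
ax.set_xticks(np.arange(len(fake.index)) + 0.5)
ax.set_xticklabels([d.strftime("%Y-%m") for d in fake.index], rotation=90)
ax.invert_yaxis()
ax.xaxis.tick_top()
plt.show()
```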
cshankm/rebound
ipython_examples/ParticleIDsAndRemoval.ipynb
gpl-3.0
import rebound import numpy as np def setupSimulation(Nplanets): sim = rebound.Simulation() sim.integrator = "ias15" # IAS15 is the default integrator, so we don't need this line sim.add(m=1.,id=0) for i in range(1,Nbodies): sim.add(m=1e-5,x=i,vy=i**(-0.5),id=i) sim.move_to_com() return sim Nbodies=10 sim = setupSimulation(Nbodies) print([sim.particles[i].id for i in range(sim.N)]) """ Explanation: Assigning particles unique IDs and removing particles from the simulation For some applications, it is useful to keep track of which particle is which, and this can get jumbled up when particles are added or removed from the simulation. It can thefore be useful for particles to have unique IDs associated with them. Let's set up a simple simulation with 10 bodies, and give them IDs in the order we add the particles: End of explanation """ Noutputs = 1000 xs = np.zeros((Nbodies, Noutputs)) ys = np.zeros((Nbodies, Noutputs)) times = np.linspace(0.,50*2.*np.pi, Noutputs, endpoint=False) for i, time in enumerate(times): sim.integrate(time) xs[:,i] = [sim.particles[j].x for j in range(Nbodies)] ys[:,i] = [sim.particles[j].y for j in range(Nbodies)] %matplotlib inline import matplotlib.pyplot as plt fig,ax = plt.subplots(figsize=(15,5)) for i in range(Nbodies): plt.plot(xs[i,:], ys[i,:]) ax.set_aspect('equal') """ Explanation: Now let's do a simple example where we do a short initial integration to isolate the particles that interest us for a longer simulation: End of explanation """ print("ID\tx") for i in range(Nbodies): print("{0}\t{1}".format(i, xs[i,-1])) """ Explanation: At this stage, we might be interested in particles that remained within some semimajor axis range, particles that were in resonance with a particular planet, etc. Let's imagine a simple (albeit arbitrary) case where we only want to keep particles that had $x > 0$ at the end of the preliminary integration. Let's first print out the particle ID and x position. End of explanation """ for i in reversed(range(1,Nbodies)): if xs[i,-1] < 0: sim.remove(i) print("Number of particles after cut = {0}".format(sim.N)) print("IDs of remaining particles = {0}".format([p.id for p in sim.particles])) """ Explanation: Next, let's use the remove() function to filter out particle. As an argument, we pass the corresponding index in the particles array. End of explanation """ sim.remove(2, keepSorted=0) print("Number of particles after cut = {0}".format(sim.N)) print("IDs of remaining particles = {0}".format([p.id for p in sim.particles])) """ Explanation: By default, the remove() function removes the i-th particle from the particles array, and shifts all particles with higher indices down by 1. This ensures that the original order in the particles array is preserved (e.g., to help with output). By running through the planets in reverse order above, we are guaranteed that when a particle with index i gets removed, the particle replacing it doesn't need to also be removed (we already checked it). If you have many particles and many removals (or you don't care about the ordering), you can save the reshuffling of all particles with higher indices with the flag keepSorted=0: End of explanation """ sim.remove(id=9) print("Number of particles after cut = {0}".format(sim.N)) print("IDs of remaining particles = {0}".format([p.id for p in sim.particles])) """ Explanation: We see that the particles array is no longer sorted by ID. Note that the default keepSorted=1 only keeps things sorted (i.e., if they were sorted by ID to start with). 
If you custom-assign IDs out of order as you add particles, the default simply preserves that original order. You might also have been surprised that the sim.remove(2, keepSorted=0) call above succeeded, even though no particle with id=2 was left in the simulation. That is because remove() takes an index into the particles array, so we removed the third particle (the one with id=4). If you would rather remove a particle by its id, use the id keyword, e.g. End of explanation """
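To make the index-versus-id distinction concrete, here is a minimal sketch against the same (older) REBOUND interface used in this notebook, where particles carry an integer id:

```python
# Remove by array index vs. by particle id (sketch; uses the same API as above).
import rebound

sim = rebound.Simulation()
sim.add(m=1., id=0)                       # central mass
for i in range(1, 4):
    sim.add(m=1e-5, x=i, vy=i**(-0.5), id=i)

sim.remove(1)        # removes the particle at *index* 1 (here the one with id=1)
sim.remove(id=3)     # removes the particle whose *id* is 3, wherever it sits
print([p.id for p in sim.particles])      # expected: [0, 2]
```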
bjackman/lisa
ipynb/releases/ReleaseNotes_v17.03.ipynb
apache-2.0
from test import LisaTest print LisaTest.__doc__ """ Explanation: Documentation Documentation of many LISA modules has got a big improvement with the usage of Sphinx and the refresh of docstrings for many existing methods. You can access documentation either interactively in Notebooks, using the standard TAB completion after a function name, or by printing it in the Notebook itsels. For example: End of explanation """ from energy_model import EnergyModel print EnergyModel.__doc__ # juno_energy provides an instance of EnergyModel for ARM Juno platforms from platforms.juno_energy import juno_energy import pandas as pd import matplotlib.pyplot as plt %matplotlib inline possible_placements = juno_energy.get_optimal_placements({'task1': 10, 'task2': 15}) fig, axs = plt.subplots(1, 4, sharey=True) # fig.set_ylabel('Utilization') for ax, placement in zip(axs, possible_placements): ax.set_ylabel('Utilization') ax.set_xlabel('CPU') pd.DataFrame(list(placement)).plot(kind='bar', figsize=(16, 2), ax=ax, legend=False) """ Explanation: Energy Model Related APIs The EnergyModel class has been added, which provides methods for describing platforms in order to estimate usage of CPU systems under various utilization scenario. The model is aware of frequency (DVFS) domains, power domains and idle states, as well as "cluster" energy. Tests have been added that utulize the EnergyModel - see below for info about the Generic tests. End of explanation """ from android import Workload print Workload.__doc__ """ Explanation: The above example shows how the EnergyModel class can be used to find optimal task placements. Here it is shown that on ARM Juno, if the system is presented with just two small tasks, it should place them on the same CPU, not using the big CPUs (1 and 2). Trace module Improved profiling analysis The trace analysis module has got a more complete support for analysis of tasks properties. Here is an example notebook which shows the new API in use on a relatively simple example: https://gist.github.com/derkling/256256f47bc9daf4883f3cb6e356e26b Android Support API to run Android Workloads A new API has been adde which allows to defined how to execute and Android workload with the additional support: - to collect a trace across its execution - to measure energy consumption across its execution End of explanation """ { k:v for k,v in vars(Workload).items() if not k.startswith('_') } """ Explanation: Public interface: End of explanation """ !tree $LISA_HOME/libs/utils/android/workloads """ Explanation: The run method is the only one which the user is required to implement to specify how to run the specific Android workload. To create a new workload it's required to create a new module under this folder: libs/utils/android/workloads Here is an enample of usage of this class to run a YouTube workload: https://github.com/ARM-software/lisa/blob/master/libs/utils/android/workloads/youtube.py Android Workloads Using the Workload class, some interesting Android workloads have been already integrated: End of explanation """ from android import LisaBenchmark print LisaBenchmark.__doc__ """ Explanation: ... and others are on their way ;-) API to run Android Tests A new API has been added which allows to defined how to run an Android workload to perform a pre-defined set of experiments. End of explanation """ { k:v for k,v in vars(LisaBenchmark).items() if not k.startswith('_') } """ Explanation: Public interface: End of explanation """
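The vars()-based dictionary comprehensions above are a quick way to list an object's public interface interactively; a slightly more readable, purely standard-library variant (nothing LISA-specific, the helper name is made up) is sketched below:

```python
import inspect

def public_interface(cls):
    """Print the public callables of a class with the first line of their docstring."""
    for name, member in inspect.getmembers(cls, callable):
        if name.startswith('_'):
            continue
        doc = (inspect.getdoc(member) or '').split('\n')[0]
        print('{:<25s} {}'.format(name, doc))

# example on a standard-library class; replace with Workload or LisaBenchmark in a LISA notebook
import collections
public_interface(collections.OrderedDict)
```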
akloster/amplicon_classification
notebooks/amplicon_classification.ipynb
isc
%load_ext autoreload %autoreload 2 import numpy as np import pandas as pd import matplotlib.pyplot as plt import re import pysam import random import feather import h5py %matplotlib inline training_data = feather.read_dataframe("amplicon_training_metadata.feather") test_data = feather.read_dataframe("amplicon_test_metadata.feather") h5f = h5py.File("amplicon_dataset.hdf", "r") training_X = h5f["training/X"][:] training_y = h5f["training/y"][:] test_X = h5f["test/X"][:] test_y = h5f["test/y"][:] def evaluate_model(model): y_hat = model.predict(training_X).argmax(axis=1) training_accuracy = (y_hat == training_y.argmax(axis=1)).sum() / len(training_y) y_hat = model.predict(test_X).argmax(axis=1) test_accuracy = (y_hat == test_y.argmax(axis=1)).sum() / len(test_y) return training_accuracy, test_accuracy from keras.models import Sequential from keras.optimizers import SGD from keras.layers.core import Dense, Activation, Dropout, Flatten from keras.layers.convolutional import Convolution1D, MaxPooling1D from keras.regularizers import l2, activity_l2 """ Explanation: Training Classifiers Neural networks are very flexible algorithms which have revolutionized artificial intelligence in the last few years. Their success is due to new tricks in designing and training the networks, but also the availability of large datasets and the computing power to process them. In this notebook we will look at some simple networks and how they deal with the data we prepared earlier. When working with machine learning, it is a good idea to start with simpler models, and only proceed to more sophisticated methods when you know the limitations of the simple models have been reached. I haven't spent much time tweaking the hyperparameters of these models, so there may be a few additional percent of accuracy here and there, which can be found by tweaking the models a little. More so if a more systematic approach to optimize them was to be used. Another problem is that the test data was used to optimize the models and their hyperparameters. This introduces a certain amount of "data snooping" and slightly defies the purpose of the out-of-sample performance validation. I decided to make this tradeoff because there is already very few data, and splitting off another test set would have hampered performance further. End of explanation """ model = Sequential() model.add(Dense(input_dim=200, output_dim=100, init="glorot_uniform")) model.add(Activation("relu")) model.add(Dense(output_dim=11, init="glorot_uniform")) model.add(Activation("softmax")) model.compile(loss='categorical_crossentropy', optimizer=SGD(lr=0.05, momentum=0.05, nesterov=True)) for i in range(10): model.fit(training_X, training_y, nb_epoch=10, batch_size=64,verbose=False) print ( "%.2f %.2f" % evaluate_model(model)) """ Explanation: One Relu Layer The first model consists only of one layer. 
End of explanation """ model = Sequential() model.add(Dense(input_dim=200, output_dim=100, init="glorot_uniform")) model.add(Activation("relu")) model.add(Dropout(0.5)) model.add(Dense(input_dim=200, output_dim=100, init="glorot_uniform")) model.add(Dropout(0.5)) model.add(Dense(output_dim=11, init="glorot_uniform")) model.add(Activation("softmax")) model.compile(loss='categorical_crossentropy', optimizer=SGD(lr=0.05, momentum=0.05, nesterov=True)) for i in range(20): model.fit(training_X, training_y, nb_epoch=40, batch_size=64,verbose=False) print ("%.2f %.2f" % evaluate_model(model)) """ Explanation: The first column shows the accuracy on the training set, the second on the test set. As we can see, this model performs stunningly well on the training set, but quite bad on the test set. This is due to overfitting. But: Because there are 11 amplicons, 56% isn't disheartiningly bad either.. Two Layers, with dropout The second model contains two layers. Dropout is activated for both of them, in order to reduce overfitting. End of explanation """ model = Sequential() model.add(Dense(input_dim=200, output_dim=200, init="glorot_uniform")) model.add(Activation("relu")) model.add(Dropout(0.25)) model.add(Dense(input_dim=200, output_dim=200, init="glorot_uniform")) model.add(Activation("relu")) model.add(Dropout(0.25)) model.add(Dense(output_dim=11, init="glorot_uniform")) model.add(Activation("softmax")) model.compile(loss='categorical_crossentropy', optimizer=SGD(lr=0.05, momentum=0.05, nesterov=True)) def batch_feeder(): nb = 200 while 1: index = np.arange(len(training_X)) np.random.shuffle(index) for i in np.arange(0,len(index),nb): sub = index[i:i+nb] X = training_X[sub,:] n,m = X.shape X += np.random.normal(0,0.5,size=(n,m)) * np.random.normal(1,0.1) + np.random.normal(0,0.1) y = training_y[sub,:] yield X, y for i in range(20): model.fit_generator(batch_feeder(), samples_per_epoch=1000000, nb_epoch=1, verbose=0) print ("%.2f %.2f" % evaluate_model(model)) """ Explanation: This model has a significantly harder time to learn/overfit the training set. In turn the test performance is a bit better, but still not useful. Data augmentation When the training performance is high and the test performance is low, it's often a problem of too little data. But what do we do if we don't have enough data? Well, we can just make it up. The idea is to just add a bit of noise to the event data. 
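To see what "making data up" means concretely, the throwaway sketch below applies the same noise recipe as batch_feeder to a single event trace from training_X; the label is untouched, only the measured values are jittered:

```python
# Visual check of the augmentation noise on one trace (illustration only).
import numpy as np
import matplotlib.pyplot as plt

original = training_X[0]   # one 200-sample event trace loaded above
augmented = (original
             + np.random.normal(0, 0.5, original.shape) * np.random.normal(1, 0.1)
             + np.random.normal(0, 0.1))

plt.plot(original, label="original")
plt.plot(augmented, alpha=0.7, label="augmented copy")
plt.legend()
plt.show()
```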
End of explanation """ # implementational detail: 1D Convolutional layers in keras expect inputs to have three dimensions def reshape3(X): n,m = X.shape Xn = X.copy() Xn.shape = (n,m,1) return Xn def batch_feeder(): nb = 50 while 1: index = np.arange(len(training_X)) np.random.shuffle(index) for i in np.arange(0,len(index),nb): sub = index[i:i+nb] X = training_X[sub,:] n,m = X.shape X += np.random.normal(0,0.5,size=(n,m)) * np.random.normal(1,0.05) + np.random.normal(0,0.1) X = reshape3(X) y = training_y[sub,:] yield (X, y) training_X3 = reshape3(training_X)[:3000,:] test_X3 = reshape3(test_X) def evaluate_model(model): y_hat = model.predict(training_X3).argmax(axis=1) training_accuracy = (y_hat == training_y[:3000,:].argmax(axis=1)).sum() / 3000 y_hat = model.predict(test_X3).argmax(axis=1) test_accuracy = (y_hat == test_y.argmax(axis=1)).sum() / len(test_y) return training_accuracy, test_accuracy model = Sequential() model.add(Convolution1D(nb_filter=20, filter_length=3, border_mode='valid', activation='relu', subsample_length=1, input_shape=(200,1), W_regularizer= l2(0.01), )) model.add(Convolution1D(nb_filter=20, filter_length=3, border_mode='valid', activation='relu', subsample_length=1, input_shape=(200,1), W_regularizer= l2(0.01), )) model.add(Dropout(0.2)) model.add(MaxPooling1D(pool_length=2)) model.add(Flatten()) model.add(Dense(output_dim=50, init="glorot_uniform")) model.add(Activation("relu")) model.add(Dense(output_dim=11, init="glorot_uniform")) model.add(Activation("softmax")) model.compile(loss='categorical_crossentropy', optimizer=SGD(lr=0.05, momentum=0.05, nesterov=True)) for i in range(10): model.fit_generator(batch_feeder(), samples_per_epoch=20000, nb_epoch=1, verbose=0) print ("%.2f %.2f" % evaluate_model(model)) """ Explanation: Test performance approaches 80%. Still not good enough, but we are getting there. Convolutional Neural Networks One of the most important "tricks" that led to the boom of "deep learning" is the idea of Convolutional Neural Networks. These networks train small feature detectors, for example with a width of three elements and apply these on the full length of the input multiple times. This achieves translational invariance, which means it can recognize patterns independently of where they are occurring, whereas a normal network can only recognize patterns at exactly the same place where they were learned. End of explanation """
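The translation-invariance idea can be illustrated without Keras at all: the same length-3 filter responds to a feature no matter where it occurs along the input, which is why convolutional layers need far fewer parameters than the dense layers used earlier. A toy numpy sketch:

```python
# A length-3 "feature detector" slid along two shifted copies of the same bump.
import numpy as np

detector = np.array([-1., 2., -1.])
signal_a = np.zeros(20); signal_a[5] = 1.0     # bump near the start
signal_b = np.zeros(20); signal_b[14] = 1.0    # same bump, shifted

resp_a = np.convolve(signal_a, detector, mode="valid")
resp_b = np.convolve(signal_b, detector, mode="valid")
print(resp_a.argmax(), resp_b.argmax())        # the peak response shifts with the bump...
print(resp_a.max() == resp_b.max())            # ...but its strength is identical
```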
OpenBookProjects/ipynb
XKCD-style/XKCD_plots_zh_cn-by-ZQ.ipynb
mit
from IPython.display import Image Image('http://jakevdp.github.com/figures/xkcd_version.png') """ Explanation: Matplotlib 实现 XKCD 样图表 This notebook originally appeared as a blog post at Pythonic Perambulations by Jake Vanderplas. <!-- PELICAN_BEGIN_SUMMARY --> Update: the matplotlib pull request has been merged! See This post for a description of the XKCD functionality now built-in to matplotlib! One of the problems I've had with typical matplotlib figures is that everything in them is so precise, so perfect. For an example of what I mean, take a look at this figure: End of explanation """ Image('http://jakevdp.github.com/figures/mpl_version.png') """ Explanation: Sometimes when showing schematic plots, this is the type of figure I want to display. But drawing it by hand is a pain: I'd rather just use matplotlib. The problem is, matplotlib is a bit too precise. Attempting to duplicate this figure in matplotlib leads to something like this: <!-- PELICAN_END_SUMMARY --> End of explanation """ """ XKCD plot generator ------------------- Author: Jake Vanderplas This is a script that will take any matplotlib line diagram, and convert it to an XKCD-style plot. It will work for plots with line & text elements, including axes labels and titles (but not axes tick labels). The idea for this comes from work by Damon McDougall http://www.mail-archive.com/matplotlib-users@lists.sourceforge.net/msg25499.html """ import numpy as np import pylab as pl from scipy import interpolate, signal import matplotlib.font_manager as fm # We need a special font for the code below. It can be downloaded this way: import os import urllib2 if not os.path.exists('Humor-Sans.ttf'): fhandle = urllib2.urlopen('http://antiyawn.com/uploads/Humor-Sans-1.0.ttf') open('Humor-Sans.ttf', 'wb').write(fhandle.read()) def xkcd_line(x, y, xlim=None, ylim=None, mag=1.0, f1=30, f2=0.05, f3=15): """ Mimic a hand-drawn line from (x, y) data Parameters ---------- x, y : array_like arrays to be modified xlim, ylim : data range the assumed plot range for the modification. If not specified, they will be guessed from the data mag : float magnitude of distortions f1, f2, f3 : int, float, int filtering parameters. f1 gives the size of the window, f2 gives the high-frequency cutoff, f3 gives the size of the filter Returns ------- x, y : ndarrays The modified lines """ x = np.asarray(x) y = np.asarray(y) # get limits for rescaling if xlim is None: xlim = (x.min(), x.max()) if ylim is None: ylim = (y.min(), y.max()) if xlim[1] == xlim[0]: xlim = ylim if ylim[1] == ylim[0]: ylim = xlim # scale the data x_scaled = (x - xlim[0]) * 1. / (xlim[1] - xlim[0]) y_scaled = (y - ylim[0]) * 1. / (ylim[1] - ylim[0]) # compute the total distance along the path dx = x_scaled[1:] - x_scaled[:-1] dy = y_scaled[1:] - y_scaled[:-1] dist_tot = np.sum(np.sqrt(dx * dx + dy * dy)) # number of interpolated points is proportional to the distance Nu = int(200 * dist_tot) u = np.arange(-1, Nu + 1) * 1. 
/ (Nu - 1) # interpolate curve at sampled points k = min(3, len(x) - 1) res = interpolate.splprep([x_scaled, y_scaled], s=0, k=k) x_int, y_int = interpolate.splev(u, res[0]) # we'll perturb perpendicular to the drawn line dx = x_int[2:] - x_int[:-2] dy = y_int[2:] - y_int[:-2] dist = np.sqrt(dx * dx + dy * dy) # create a filtered perturbation coeffs = mag * np.random.normal(0, 0.01, len(x_int) - 2) b = signal.firwin(f1, f2 * dist_tot, window=('kaiser', f3)) response = signal.lfilter(b, 1, coeffs) x_int[1:-1] += response * dy / dist y_int[1:-1] += response * dx / dist # un-scale data x_int = x_int[1:-1] * (xlim[1] - xlim[0]) + xlim[0] y_int = y_int[1:-1] * (ylim[1] - ylim[0]) + ylim[0] return x_int, y_int def XKCDify(ax, mag=1.0, f1=50, f2=0.01, f3=15, bgcolor='w', xaxis_loc=None, yaxis_loc=None, xaxis_arrow='+', yaxis_arrow='+', ax_extend=0.1, expand_axes=False): """Make axis look hand-drawn This adjusts all lines, text, legends, and axes in the figure to look like xkcd plots. Other plot elements are not modified. Parameters ---------- ax : Axes instance the axes to be modified. mag : float the magnitude of the distortion f1, f2, f3 : int, float, int filtering parameters. f1 gives the size of the window, f2 gives the high-frequency cutoff, f3 gives the size of the filter xaxis_loc, yaxis_log : float The locations to draw the x and y axes. If not specified, they will be drawn from the bottom left of the plot xaxis_arrow, yaxis_arrow : str where to draw arrows on the x/y axes. Options are '+', '-', '+-', or '' ax_extend : float How far (fractionally) to extend the drawn axes beyond the original axes limits expand_axes : bool if True, then expand axes to fill the figure (useful if there is only a single axes in the figure) """ # Get axes aspect ext = ax.get_window_extent().extents aspect = (ext[3] - ext[1]) / (ext[2] - ext[0]) xlim = ax.get_xlim() ylim = ax.get_ylim() xspan = xlim[1] - xlim[0] yspan = ylim[1] - xlim[0] xax_lim = (xlim[0] - ax_extend * xspan, xlim[1] + ax_extend * xspan) yax_lim = (ylim[0] - ax_extend * yspan, ylim[1] + ax_extend * yspan) if xaxis_loc is None: xaxis_loc = ylim[0] if yaxis_loc is None: yaxis_loc = xlim[0] # Draw axes xaxis = pl.Line2D([xax_lim[0], xax_lim[1]], [xaxis_loc, xaxis_loc], linestyle='-', color='k') yaxis = pl.Line2D([yaxis_loc, yaxis_loc], [yax_lim[0], yax_lim[1]], linestyle='-', color='k') # Label axes3, 0.5, 'hello', fontsize=14) ax.text(xax_lim[1], xaxis_loc - 0.02 * yspan, ax.get_xlabel(), fontsize=14, ha='right', va='top', rotation=12) ax.text(yaxis_loc - 0.02 * xspan, yax_lim[1], ax.get_ylabel(), fontsize=14, ha='right', va='top', rotation=78) ax.set_xlabel('') ax.set_ylabel('') # Add title ax.text(0.5 * (xax_lim[1] + xax_lim[0]), yax_lim[1], ax.get_title(), ha='center', va='bottom', fontsize=16) ax.set_title('') Nlines = len(ax.lines) lines = [xaxis, yaxis] + [ax.lines.pop(0) for i in range(Nlines)] for line in lines: x, y = line.get_data() x_int, y_int = xkcd_line(x, y, xlim, ylim, mag, f1, f2, f3) # create foreground and background line lw = line.get_linewidth() line.set_linewidth(2 * lw) line.set_data(x_int, y_int) # don't add background line for axes if (line is not xaxis) and (line is not yaxis): line_bg = pl.Line2D(x_int, y_int, color=bgcolor, linewidth=8 * lw) ax.add_line(line_bg) ax.add_line(line) # Draw arrow-heads at the end of axes lines arr1 = 0.03 * np.array([-1, 0, -1]) arr2 = 0.02 * np.array([-1, 0, 1]) arr1[::2] += np.random.normal(0, 0.005, 2) arr2[::2] += np.random.normal(0, 0.005, 2) x, y = xaxis.get_data() if '+' in 
str(xaxis_arrow): ax.plot(x[-1] + arr1 * xspan * aspect, y[-1] + arr2 * yspan, color='k', lw=2) if '-' in str(xaxis_arrow): ax.plot(x[0] - arr1 * xspan * aspect, y[0] - arr2 * yspan, color='k', lw=2) x, y = yaxis.get_data() if '+' in str(yaxis_arrow): ax.plot(x[-1] + arr2 * xspan * aspect, y[-1] + arr1 * yspan, color='k', lw=2) if '-' in str(yaxis_arrow): ax.plot(x[0] - arr2 * xspan * aspect, y[0] - arr1 * yspan, color='k', lw=2) # Change all the fonts to humor-sans. prop = fm.FontProperties(fname='Humor-Sans.ttf', size=16) for text in ax.texts: text.set_fontproperties(prop) # modify legend leg = ax.get_legend() if leg is not None: leg.set_frame_on(False) for child in leg.get_children(): if isinstance(child, pl.Line2D): x, y = child.get_data() child.set_data(xkcd_line(x, y, mag=10, f1=100, f2=0.001)) child.set_linewidth(2 * child.get_linewidth()) if isinstance(child, pl.Text): child.set_fontproperties(prop) # Set the axis limits ax.set_xlim(xax_lim[0] - 0.1 * xspan, xax_lim[1] + 0.1 * xspan) ax.set_ylim(yax_lim[0] - 0.1 * yspan, yax_lim[1] + 0.1 * yspan) # adjust the axes ax.set_xticks([]) ax.set_yticks([]) if expand_axes: ax.figure.set_facecolor(bgcolor) ax.set_axis_off() ax.set_position([0, 0, 1, 1]) return ax """ Explanation: It just doesn't have the same effect. Matplotlib is great for scientific plots, but sometimes you don't want to be so precise. This subject has recently come up on the matplotlib mailing list, and started some interesting discussions. As near as I can tell, this started with a thread on a mathematica list which prompted a thread on the matplotlib list wondering if the same could be done in matplotlib. Damon McDougall offered a quick solution which was improved by Fernando Perez in this notebook, and within a few days there was a matplotlib pull request offering a very general way to create sketch-style plots in matplotlib. Only a few days from a cool idea to a working implementation: this is one of the most incredible aspects of package development on github. The pull request looks really nice, but will likely not be included in a released version of matplotlib until at least version 1.3. In the mean-time, I wanted a way to play around with these types of plots in a way that is compatible with the current release of matplotlib. To do that, I created the following code: The Code: XKCDify XKCDify will take a matplotlib Axes instance, and modify the plot elements in-place to make them look hand-drawn. First off, we'll need to make sure we have the Humor Sans font. It can be downloaded using the command below. Next we'll create a function xkcd_line to add jitter to lines. We want this to be very general, so we'll normalize the size of the lines, and use a low-pass filter to add correlated noise, perpendicular to the direction of the line. There are a few parameters for this filter that can be tweaked to customize the appearance of the jitter. Finally, we'll create a function which accepts a matplotlib axis, and calls xkcd_line on all lines in the axis. Additionally, we'll switch the font of all text in the axes, and add some background lines for a nice effect where lines cross. We'll also draw axes, and move the axes labels and titles to the appropriate location. 
End of explanation """ %pylab inline np.random.seed(0) ax = pylab.axes() x = np.linspace(0, 10, 100) ax.plot(x, np.sin(x) * np.exp(-0.1 * (x - 5) ** 2), 'b', lw=1, label='damped sine') ax.plot(x, -np.cos(x) * np.exp(-0.1 * (x - 5) ** 2), 'r', lw=1, label='damped cosine') ax.set_title('check it out!') ax.set_xlabel('x label') ax.set_ylabel('y label') ax.legend(loc='lower right') ax.set_xlim(0, 10) ax.set_ylim(-1.0, 1.0) #XKCDify the axes -- this operates in-place XKCDify(ax, xaxis_loc=0.0, yaxis_loc=1.0, xaxis_arrow='+-', yaxis_arrow='+-', expand_axes=True) """ Explanation: Testing it Out Let's test this out with a simple plot. We'll plot two curves, add some labels, and then call XKCDify on the axis. I think the results are pretty nice! End of explanation """ Image('http://imgs.xkcd.com/comics/front_door.png') """ Explanation: Duplicating an XKCD Comic Now let's see if we can use this to replicated an XKCD comic in matplotlib. This is a good one: End of explanation """ # Some helper functions def norm(x, x0, sigma): return np.exp(-0.5 * (x - x0) ** 2 / sigma ** 2) def sigmoid(x, x0, alpha): return 1. / (1. + np.exp(- (x - x0) / alpha)) # define the curves x = np.linspace(0, 1, 100) y1 = np.sqrt(norm(x, 0.7, 0.05)) + 0.2 * (1.5 - sigmoid(x, 0.8, 0.05)) y2 = 0.2 * norm(x, 0.5, 0.2) + np.sqrt(norm(x, 0.6, 0.05)) + 0.1 * (1 - sigmoid(x, 0.75, 0.05)) y3 = 0.05 + 1.4 * norm(x, 0.85, 0.08) y3[x > 0.85] = 0.05 + 1.4 * norm(x[x > 0.85], 0.85, 0.3) # draw the curves ax = pl.axes() ax.plot(x, y1, c='gray') ax.plot(x, y2, c='blue') ax.plot(x, y3, c='red') ax.text(0.3, -0.1, "Yard") ax.text(0.5, -0.1, "Steps") ax.text(0.7, -0.1, "Door") ax.text(0.9, -0.1, "Inside") ax.text(0.05, 1.1, "fear that\nthere's\nsomething\nbehind me") ax.plot([0.15, 0.2], [1.0, 0.2], '-k', lw=0.5) ax.text(0.25, 0.8, "forward\nspeed") ax.plot([0.32, 0.35], [0.75, 0.35], '-k', lw=0.5) ax.text(0.9, 0.4, "embarrassment") ax.plot([1.0, 0.8], [0.55, 1.05], '-k', lw=0.5) ax.set_title("Walking back to my\nfront door at night:") ax.set_xlim(0, 1) ax.set_ylim(0, 1.5) # modify all the axes elements in-place XKCDify(ax, expand_axes=True) """ Explanation: With the new XKCDify function, this is relatively easy to replicate. The results are not exactly identical, but I think it definitely gets the point across! End of explanation """ #%install_ext http://raw.github.com/jrjohansson/version_information/master/version_information.py %load_ext version_information %reload_ext version_information %version_information numpy, scipy, matplotlib """ Explanation: Pretty good for a couple hours's work! I think the possibilities here are pretty limitless: this is going to be a hugely useful and popular feature in matplotlib, especially when the sketch artist PR is mature and part of the main package. I imagine using this style of plot for schematic figures in presentations where the normal crisp matplotlib lines look a bit too "scientific". I'm giving a few talks at the end of the month... maybe I'll even use some of this code there. This post was written entirely in an IPython Notebook: the notebook file is available for download here. For more information on blogging with notebooks in octopress, see my previous post on the subject. End of explanation """
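As a postscript: once the pull request discussed above landed (matplotlib 1.3 and later), the same effect needs none of the custom code here. A minimal sketch using the built-in style (it picks up Humor Sans automatically if the font is installed):

```python
import numpy as np
import matplotlib.pyplot as plt

with plt.xkcd():                      # built-in sketch style in matplotlib >= 1.3
    x = np.linspace(0, 10, 100)
    plt.plot(x, np.sin(x) * np.exp(-0.1 * (x - 5) ** 2), label='damped sine')
    plt.title('check it out!')
    plt.legend()
    plt.show()
```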
zambzamb/zpic
python/Morse and Nielsen 1971.ipynb
agpl-3.0
import em1ds as zpic import numpy as np import matplotlib.pyplot as plt # Thermal velocity uth = [0.05,0.25,0.0] electrons = zpic.Species( "electrons", -1.0, 200, uth = uth ) sim = zpic.Simulation( 150, box = 15.0, dt = 0.08, species = electrons ) """ Explanation: Numerical Simulation of the Weibel Instability in One and Two Dimensions R. L. Morse and C. W. Nielsen Los Alamos Scientific Laboratory, University of California, Los Alamos, New Mexico 87544 The Physics of Fluids, Volume 14, Number 4, April 1971 DOI: 10.1063/1.1693518 In this notebook we reproduce the 1D simulations presented by Morse and Nielsen in their seminal paper of 1971. In this paper they explore the development of the Weibel instability, by initializing a single species with a temperature anisotropy. Specifically, a uniform density electron species is initialized with $u_{thx} = 0.05 \,\rm{c}$ and $u_{thy} = 0.25 \,\rm{c}$. We use the same simulation parameters as the paper, but increase the resolution by a factor of 3 to have more detail on the phase-space plots, which gives 150 cells and a cell size $\Delta x = 0.1 \,c \omega_p^{-1}$, with $3 \times 10^4$ particles. Also note that we End of explanation """ # Disables sorting electrons.n_sort = 0 # Selects particle groups for visualization uy = electrons.particles['uy'] lim = uth[1] * 0.67449 idxa = uy <= -lim idxb = (uy > -lim ) & (uy < lim ) idxc = uy >= lim """ Explanation: Phasespace Analysis The plots in the original paper distiguish particles in 3 groups according to their initial $u_y$ value: the first group holds the 1/4 of particles that is closest to the minimum $u_y$ injected, the second group holds the 1/2 of particles that is closest to $u_y = 0$ and the third group holds the 1/4 of particles that is closest to the maximum $u_y$ injected. To reproduce this, we turn off particle sorting (so the particles will retain their initial indexes) and store the indexes of the particles belonging to each of the groups: End of explanation """ # Routine for generating plots def vis( sim ): f, (ax1,ax2) = plt.subplots(ncols = 2, sharey=True) f.set_size_inches(8,6) # Phasespace plot x = ( electrons.particles['ix'] + electrons.particles['x']) * electrons.dx ux = electrons.particles['ux'] ax1.plot( ux[idxa] - 1, x[idxa], '.', ms=1,alpha=0.4, label = "$u_x^L - 1$") ax1.plot( ux[idxb] , x[idxb], '.', ms=1,alpha=0.4, label = "$u_x^C$") ax1.plot( ux[idxc] + 1, x[idxc], '.', ms=1,alpha=0.4, label = "$u_x^R + 1$") ax1.set_xlabel("$u_x$ [$m_e c$]") ax1.set_title("$u_x - x$ phasespace") ax1.legend() ax1.grid(True) ax1.set_ylabel("$x$ [$c\,\omega_n^{-1}$]") # Magnetic field plot ax2.plot( sim.emf.Bz, np.linspace(0, sim.box, num = sim.nx), label = "$B_z$") ax2.set_xlim(left=-0.15,right=0.15) ax2.grid(True) ax2.set_xlabel("$B_Z$ field []") ax2.set_title("Magnetic Field") ax2.legend() f.suptitle("t = {:g}".format(sim.t) + " $\omega_p^{-1}$") plt.show() """ Explanation: For simplicity we define a routine to generate a side-by-side plot of the particle phasespace, with particles divided into 3 groups as described above, and the magnetic field along the z direction, $B_z$, like the plots in Fig. 
3 of the paper: End of explanation """ sim.run(30.0) vis(sim) sim.run(40.0) vis(sim) sim.run(50.0) vis(sim) sim.run(100.0) vis(sim) sim.run(200.0) vis(sim) sim.run(400.0) vis(sim) """ Explanation: We now run the simulation until the specified times and visualize results for each one: End of explanation """ electrons = zpic.Species( "electrons", -1.0, 1000, uth = [0.05,0.25,0.0] ) sim = zpic.Simulation( 150, box = 15.0, dt = 0.08, species = electrons ) """ Explanation: Field Energy Evolution To recreate the plot showing the evolution of the field energy (Fig.4) we rerun the simulation storing the energy values for all time-steps. The initialization is the same as before: End of explanation """ import math tmax = 400 niter = int(math.ceil(tmax / sim.dt)) EneB = np.zeros(niter) EneE = np.zeros(niter) norm = 0.5 * sim.emf.nx * sim.box / sim.nx print("\nRunning simulation up to t = {:g} ...".format(tmax)) while sim.t < tmax: print('n = {:d}, t = {:g}'.format(sim.n,sim.t), end = '\r') EneB[sim.n] = np.sum(sim.emf.Bx**2+sim.emf.By**2+sim.emf.Bz**2) * norm EneE[sim.n] = np.sum(sim.emf.Ex**2+sim.emf.Ey**2+sim.emf.Ez**2) * norm sim.iter() print("\nDone.") plt.plot(np.linspace(0, sim.t, num = niter),EneB, label = "MAGNETIC") plt.plot(np.linspace(0, sim.t, num = niter),EneE, label = "ELECTRIC") plt.yscale('log') plt.ylim(ymin=0.01) plt.grid(True) plt.xlabel("$t$ [$1/\omega_n$]") plt.ylabel("Field energy [$m_e c^2$]") plt.title("Electro-Magnetic field energy") plt.legend() plt.show() """ Explanation: To store the field energy we run the simulation with a customized loop to store these values at every time-step: End of explanation """
GoogleCloudPlatform/asl-ml-immersion
notebooks/kubeflow_pipelines/pipelines/labs/kfp_pipeline_vertex_automl_online_predictions.ipynb
apache-2.0
from google.cloud import aiplatform REGION = "us-central1" PROJECT = !(gcloud config get-value project) PROJECT = PROJECT[0] # Set `PATH` to include the directory containing KFP CLI PATH = %env PATH %env PATH=/home/jupyter/.local/bin:{PATH} """ Explanation: Continuous Training with AutoML Vertex Pipelines Learning Objectives: 1. Learn how to use Vertex AutoML pre-built components 1. Learn how to build a Vertex AutoML pipeline with these components using BigQuery as a data source 1. Learn how to compile, upload, and run the Vertex AutoML pipeline In this lab, you will build, deploy, and run a Vertex AutoML pipeline that orchestrates the Vertex AutoML AI services to train, tune, and deploy a model. Setup End of explanation """ %%writefile ./pipeline_vertex/pipeline_vertex_automl.py # Copyright 2021 Google LLC # Licensed under the Apache License, Version 2.0 (the "License"); you may not # use this file except in compliance with the License. You may obtain a copy of # the License at # https://www.apache.org/licenses/LICENSE-2.0 # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" # BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either # express or implied. See the License for the specific language governing # permissions and limitations under the License. """Kubeflow Covertype Pipeline.""" import os from google_cloud_pipeline_components.aiplatform import ( AutoMLTabularTrainingJobRunOp, EndpointCreateOp, ModelDeployOp, TabularDatasetCreateOp, ) from kfp.v2 import dsl PIPELINE_ROOT = os.getenv("PIPELINE_ROOT") PROJECT = os.getenv("PROJECT") DATASET_SOURCE = os.getenv("DATASET_SOURCE") PIPELINE_NAME = os.getenv("PIPELINE_NAME", "covertype") DISPLAY_NAME = os.getenv("MODEL_DISPLAY_NAME", PIPELINE_NAME) TARGET_COLUMN = os.getenv("TARGET_COLUMN", "Cover_Type") SERVING_MACHINE_TYPE = os.getenv("SERVING_MACHINE_TYPE", "n1-standard-16") @dsl.pipeline( name=f"{PIPELINE_NAME}-vertex-automl-pipeline", description=f"AutoML Vertex Pipeline for {PIPELINE_NAME}", pipeline_root=PIPELINE_ROOT, ) def create_pipeline(): dataset_create_task = TabularDatasetCreateOp( # TODO ) automl_training_task = AutoMLTabularTrainingJobRunOp( # TODO ) endpoint_create_task = EndpointCreateOp( # TODO ) model_deploy_task = ModelDeployOp( # pylint: disable=unused-variable # TODO ) """ Explanation: Understanding the pipeline design The workflow implemented by the pipeline is defined using a Python based Domain Specific Language (DSL). The pipeline's DSL is in the pipeline_vertex/pipeline_vertex_automl.py file that we will generate below. The pipeline's DSL has been designed to avoid hardcoding any environment specific settings like file paths or connection strings. These settings are provided to the pipeline code through a set of environment variables. 
Building and deploying the pipeline Exercise Complete the pipeline below: End of explanation """ ARTIFACT_STORE = f"gs://{PROJECT}-kfp-artifact-store" PIPELINE_ROOT = f"{ARTIFACT_STORE}/pipeline" DATASET_SOURCE = f"bq://{PROJECT}.covertype_dataset.covertype" %env PIPELINE_ROOT={PIPELINE_ROOT} %env PROJECT={PROJECT} %env REGION={REGION} %env DATASET_SOURCE={DATASET_SOURCE} """ Explanation: Compile the pipeline Let's start by defining the environment variables that will be passed to the pipeline compiler: End of explanation """ !gsutil ls | grep ^{ARTIFACT_STORE}/$ || gsutil mb -l {REGION} {ARTIFACT_STORE} """ Explanation: Let us make sure that the ARTIFACT_STORE has been created, and let us create it if not: End of explanation """ PIPELINE_JSON = "covertype_automl_vertex_pipeline.json" """ Explanation: Use the CLI compiler to compile the pipeline We compile the pipeline from the Python file we generated into a JSON description using the following command: End of explanation """ # TODO """ Explanation: Exercise Compile the pipeline with the dsl-compile-v2 command line: End of explanation """ !head {PIPELINE_JSON} """ Explanation: Note: You can also use the Python SDK to compile the pipeline: ```python from kfp.v2 import compiler compiler.Compiler().compile( pipeline_func=create_pipeline, package_path=PIPELINE_JSON, ) ``` The result is the pipeline file. End of explanation """ # TODO """ Explanation: Deploy the pipeline package Exercise Upload and run the pipeline to Vertex AI using aiplatform.PipelineJob: End of explanation """
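One possible way to complete that last exercise, sketched with the google-cloud-aiplatform SDK (argument names follow that SDK; this is not necessarily the lab's official solution, and it assumes the pipeline was already compiled to PIPELINE_JSON, e.g. with dsl-compile-v2 --py pipeline_vertex/pipeline_vertex_automl.py --output $PIPELINE_JSON):

```python
# Submit the compiled pipeline to Vertex AI Pipelines (sketch).
aiplatform.init(project=PROJECT, location=REGION)

pipeline_job = aiplatform.PipelineJob(
    display_name="covertype_automl_pipeline",
    template_path=PIPELINE_JSON,
    pipeline_root=PIPELINE_ROOT,
    enable_caching=False,
)
pipeline_job.run()   # blocks until the pipeline finishes; use submit() to return immediately
```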
aitatanit/metatlas
4notebooks/ISTD Assessment.ipynb
bsd-3-clause
import sys sys.path.insert(0,'/project/projectdirs/metatlas/projects/ms_monitor_tools' ) import warnings warnings.filterwarnings('ignore') import ms_monitor_util as mtools %matplotlib notebook """ Explanation: Assess and Monitor QCs, Internal Standards, and Common Metabolites This notebook will guide people to Identify their files Specify the LC/MS method used Specify the text-string used to differentiate blanks, QCs, and experimental injections Populate the run log with the pass/fail outcome for each run Run each block below. They will indicate "ok" when completed. Clear all output prior to starting makes it easier to tell when cells are completed. End of explanation """ num_days = raw_input('How many days back to search: ') experiment = mtools.get_recent_experiments(num_days = int(num_days)) """ Explanation: The "FutureWarning" above is normal. Run the next block to select your experiment. End of explanation """ mtools = reload(mtools) files = mtools.get_files_for_experiment(experiment.value) """ Explanation: After specifying an experiment above, run the next block to get files for that experiment. End of explanation """ # mz_tolerance, rt_tolerance = mtools.get_rt_mz_tolerance_from_user() method = mtools.get_method_dropdown() """ Explanation: Run the next block to specify: m/z tolerance (ppm) Retention time tolerance (minutes) 20 ppm and 0.3 minutes are good for most runs End of explanation """ #TODO: have another sheet that creates an Atlas from the google sheet. Get the values from teh atlas instead. qc_hilic_vals, common_hilic_vals, istd_hilic_vals = mtools.filter_istd_qc_by_method(method.value,0.3,5) print "ok" """ Explanation: Get Data from Reference End of explanation """ # optional block to view the atlases selected # istd_hilic_vals # qc_hilic_vals # common_hilic_vals """ Explanation: Uncomment and run the lines below to print the values in the reference table: You can also view the source of these references here. 
End of explanation """ qc_str,blank_str,pos_str,neg_str = mtools.get_blank_qc_pos_neg_string() """ Explanation: Specify the blank, qc and pos-neg string used in your file naming End of explanation """ print "Method = ",method.value print "Experiment = ",experiment.value print len(files), " files queued for assessment" print "filter strings are: ", qc_str.value, blank_str.value, pos_str.value, neg_str.value # print qc_str.value,blank_str.value import pandas as pd sys.path.insert(0,'/global/project/projectdirs/metatlas/anaconda/lib/python2.7/site-packages' ) #sys.path.append('/project/projectdirs/metatlas/projects/ms_monitor_tools') import metatlas_get_data_helper_fun as ma_data from metatlas import metatlas_objects as metob from metatlas import h5_query as h5q from datetime import datetime df = pd.DataFrame() counter = 0 for my_file in files: # finfo = h5q.get_info(my_file.hdf5_file) # num_pos_data = finfo['ms1_pos']['nrows'] + finfo['ms2_pos']['nrows'] # num_neg_data = finfo['ms1_neg']['nrows'] + finfo['ms2_neg']['nrows'] # do_polarity = [] # if num_pos_data > 0: # do_polarity.append('positive') # if num_neg_data > 0: # do_polarity.append('negative') df.loc[counter, 'name has blank'] = blank_str.value in my_file.name.upper() df.loc[counter, 'name has QC'] = qc_str.value in my_file.name.upper() df.loc[counter, 'name has pos'] = pos_str.value in my_file.name.upper() df.loc[counter, 'name has neg'] = neg_str.value in my_file.name.upper() df.loc[counter, 'experiment'] = my_file.experiment df.loc[counter, 'filename'] = my_file.name df.loc[counter, 'datestamp'] = my_file.creation_time df.loc[counter, 'utc time'] = datetime.utcfromtimestamp(my_file.creation_time) df.loc[counter, 'lcms method'] = my_file.method #TODO: get instrument and lcms from the method object df.loc[counter, 'sample'] = my_file.sample df.loc[counter, 'username'] = my_file.username # df.loc[counter, 'num positive data'] = num_pos_data # df.loc[counter, 'num negative data'] = num_neg_data counter = counter + 1 print counter # for compound in atlas.compound_identifications: # if compound.mz_references[0].detected_polarity in do_polarity: # result = ma_data.get_data_for_a_compound(mz_ref, # rt_ref,[ 'ms1_summary' ], # myFile,0.3) #extra_time is not used by ms1_summary # df.loc[counter, 'identification name'] = compound.name # df.loc[counter, 'compound name'] = compound.compound[0].name #TODO: need to get all possible # for k in result['ms1_summary'].keys(): # if result['ms1_summary'][k]: # df.loc[counter, 'measured %s'%k] = result['ms1_summary'][k] # else: # df.loc[counter, 'measured %s'%k] = '' # #{'polarity': [], 'rt_centroid': [], 'mz_peak': [], 'peak_height': [], 'rt_peak': [], 'peak_area': [], 'mz_centroid': []} # df.loc[counter, 'expected mz'] = compound.mz_references[0].mz # df.loc[counter, 'expected rt'] = compound.rt_references[0].rt_peak # df.loc[counter, 'expected polarity'] = compound.mz_references[0].detected_polarity # if result['ms1_summary']['rt_peak']: # df.loc[counter, 'delta rt'] = compound.rt_references[0].rt_peak - result['ms1_summary']['rt_peak'] # df.loc[counter, 'delta mz'] = (compound.mz_references[0].mz - result['ms1_summary']['mz_centroid']) / compound.mz_references[0].mz * 1e6 # counter = counter + 1 df # df.to_excel('bc_istd_neg_assessment_table.xls') datetime.datetime. 
import sys sys.path.insert(0,'/global/project/projectdirs/metatlas/anaconda/lib/python2.7/site-packages' ) from metatlas import h5_query as h5q info = h5q.get_info(files[0]) print info d = h5q.get_data(files[0],ms_level=1,polarity = 0) d.shape myruns = [] for f in files: mf = metob.retrieve('LcmsRun',hdf5_file = f,username = '*')[-1] myruns.append(mf) mtools = reload(mtools) get_exp = mtools.get_files_from_recent_experiment(4) display(get_exp) # import json # print get_exp.get_selected_rows() # print json.loads(get_exp.get_state()['_df_json'])[0] #identify which files are blank, QC, or sample # make the QC figure # A plot for delta-mz, delta-rt, and delta-intensity for each QC file # make the ISTD figure # A plot for delta-mz, delta-rt, and delta-intensity for each not-blank and not-QC file # make the blank figure # Plot the TIC of each blank and a reference blank # are all chromatograms less than some intensity # helper script to let users check specific ions #log the results in a csv file # make a qgrid widget atlases_to_merge = {'20160108_TS_Negative_Hilic_6550_QCs', '20160119_TS_Positive_Hilic_QE_QCs_v1', '20151130_LS_Positive_Hilic_QExactive_Archetypes_ISTDs', '20151130_LS_Negative_Hilic_QExactive_Archetypes_ISTDs'} atlases = metob.retrieve('Atlas',name = '%_istd_%',username='*') for i,a in enumerate(atlases): print i,a.name,a.username,a.creation_time atlas = atlases[1] # atlas = metob.retrieve('Atlas',name = '20160119_TS_Positive_Hilic_QE_QCs_v1',username='*')[-1] import pandas as pd df = pd.DataFrame() counter = 0 for j in range(len(files)): myFile = files[j].hdf5_file finfo = h5q.get_info(myFile) num_pos_data = finfo['ms1_pos']['nrows'] + finfo['ms2_pos']['nrows'] num_neg_data = finfo['ms1_neg']['nrows'] + finfo['ms2_neg']['nrows'] do_polarity = [] if num_pos_data > 0: do_polarity.append('positive') if num_neg_data > 0: do_polarity.append('negative') for compound in atlas.compound_identifications: if compound.mz_references[0].detected_polarity in do_polarity: result = ma_data.get_data_for_a_compound(compound.mz_references[0], compound.rt_references[0],[ 'ms1_summary' ], myFile,0.3) #extra_time is not used by ms1_summary df.loc[counter, 'is blank'] = '_BLANK' in files[j].name.upper() df.loc[counter, 'is QC'] = '_QC_' in files[j].name.upper() df.loc[counter, 'experiment'] = files[j].experiment df.loc[counter, 'filename'] = files[j].name df.loc[counter, 'datestamp'] = files[j].creation_time df.loc[counter, 'utc time'] = datetime.utcfromtimestamp(files[j].creation_time) df.loc[counter, 'lcms method'] = files[j].method #TODO: get instrument and lcms from the method object df.loc[counter, 'sample'] = files[j].sample df.loc[counter, 'identification name'] = compound.name df.loc[counter, 'compound name'] = compound.compound[0].name #TODO: need to get all possible df.loc[counter, 'username'] = files[j].username df.loc[counter, 'num positive data'] = num_pos_data df.loc[counter, 'num negative data'] = num_neg_data for k in result['ms1_summary'].keys(): if result['ms1_summary'][k]: df.loc[counter, 'measured %s'%k] = result['ms1_summary'][k] else: df.loc[counter, 'measured %s'%k] = '' #{'polarity': [], 'rt_centroid': [], 'mz_peak': [], 'peak_height': [], 'rt_peak': [], 'peak_area': [], 'mz_centroid': []} df.loc[counter, 'expected mz'] = compound.mz_references[0].mz df.loc[counter, 'expected rt'] = compound.rt_references[0].rt_peak df.loc[counter, 'expected polarity'] = compound.mz_references[0].detected_polarity if result['ms1_summary']['rt_peak']: df.loc[counter, 'delta rt'] = 
compound.rt_references[0].rt_peak - result['ms1_summary']['rt_peak'] df.loc[counter, 'delta mz'] = (compound.mz_references[0].mz - result['ms1_summary']['mz_centroid']) / compound.mz_references[0].mz * 1e6 counter = counter + 1 df.to_excel('bc_istd_neg_assessment_table.xls') '_BLANK_' in files[j].name.upper() # do_polarity # compound.mz_references[0].detected_polarity summary = data[1][1]['data']['ms1_summary'] expected_mz = data[1][1]['identification'].mz_references[0].mz expected_rt = data[1][1]['identification'].rt_references[0].rt_peak print summary print expected_mz, expected_rt import os print data[1][1]['lcmsrun'].experiment print data[1][1]['lcmsrun'].name print datetime.utcfromtimestamp(data[1][1]['lcmsrun'].creation_time) print data[1][1]['lcmsrun'].method print data[1][1]['lcmsrun'].sample print data[1][1]['lcmsrun'].username # where to store expected peak area? I think it should be in an atlas for now, but should be an intensity reference. # myFile # with open(myFile) as f: # h5q.get_info(myFile) compound.mz_references[0]['detected_polarity'] == 'positive' from matplotlib import pyplot as plt from matplotlib import patches as patches %matplotlib notebook class ClickablePoint: def __init__(self, p,index): self.point = p self.press = None self.index = index def connect(self): self.cidpress = self.point.figure.canvas.mpl_connect('button_press_event', self.button_press_event) self.cidrelease = self.point.figure.canvas.mpl_connect('button_release_event', self.button_release_event) def disconnect(self): self.point.figure.canvas.mpl_disconnect(self.cidpress) self.point.figure.canvas.mpl_disconnect(self.cidrelease) def button_press_event(self,event): if event.inaxes != self.point.axes: return contains = self.point.contains(event)[0] if not contains: return self.press = self.point.center, event.xdata, event.ydata if self.press is None: return if event.inaxes != self.point.axes: return self.point.center, xpress, ypress = self.press plt.title('%d %5.4f %5.4f'%(self.index,self.point.center[0], self.point.center[1])) def button_release_event(self,event): self.press = None self.point.figure.canvas.draw() fig = plt.figure(figsize=(12, 12)) ax = fig.add_subplot(111) ax.set_xlim(-1,2) ax.set_ylim(-1,2) circles = [] circle1 = patches.Circle((0.32,0.3), 0.2, fc='r',alpha=0.5, picker=True) circle = patches.Circle((0.3,0.3), 0.2, fc='b', alpha=0.5, picker=True) circles.append(ax.add_patch(circle1)) circles.append(ax.add_patch(circle)) drs = [] for i,c in enumerate(circles): #print c.center[0] dr = ClickablePoint(c,i) dr.connect() drs.append(dr) plt.show() """ Explanation: Check that everything is correct by running the next cell End of explanation """
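One natural next step with the assessment table built above is the QC figure mentioned in the notes: plot how far each detected compound sits from its reference m/z and retention time, against the 20 ppm / 0.3 minute tolerances suggested earlier. A sketch using the column names defined above:

```python
# Delta-m/z vs delta-RT scatter with the suggested tolerances (sketch).
import matplotlib.pyplot as plt

good = df.dropna(subset=['delta mz', 'delta rt'])
plt.figure()
plt.scatter(good['delta rt'], good['delta mz'])
plt.axhline(20, linestyle='--'); plt.axhline(-20, linestyle='--')     # +/- 20 ppm
plt.axvline(0.3, linestyle='--'); plt.axvline(-0.3, linestyle='--')   # +/- 0.3 min
plt.xlabel('delta RT (min)')
plt.ylabel('delta m/z (ppm)')
plt.show()
```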
IanHawke/Southampton-PV-NumericalMethods-2016
solutions/01-Integration.ipynb
mit
from __future__ import division import numpy data_southampton_2005 = numpy.loadtxt('../data/irradiance/southampton_2005.txt') """ Explanation: Integration How much solar power was available to be collected in Southampton in 2005? To answer this, we need to integrate the solar irradiance data, to get the insolation, \begin{equation} H = \int \text{d}t \, I(t). \end{equation} There's a data file containing the irradiance data (simplified and tidied up) from HelioClim in the repository. Let's load it in. End of explanation """ data_southampton_2005 """ Explanation: If we ask Python what the data is, it will show us the first and last few entries: End of explanation """ %matplotlib notebook from matplotlib import pyplot pyplot.figure(figsize=(10,6)) pyplot.scatter(data_southampton_2005[:,0], data_southampton_2005[:,1], label="Southampton, 2005") pyplot.legend() pyplot.xlabel("Time (hours)") pyplot.ylabel(r"Horizontal diffuse irradiance ($Wh \, m^{-2}$)") pyplot.show() """ Explanation: Try looking at the data file in a text editor. The header comment says that the first column contains the time in hours since midnight, January 1st. We see that the file gives data every quarter of an hour for the year. Let's plot the data to see the trend. End of explanation """ pyplot.figure(figsize=(10,6)) pyplot.scatter(data_southampton_2005[:200,0], data_southampton_2005[:200,1], label="Southampton, 2005") pyplot.legend() pyplot.xlabel("Time (hours)") pyplot.ylabel(r"Horizontal diffuse irradiance ($Wh \, m^{-2}$)") pyplot.show() """ Explanation: You'll need to zoom right in to see individual days. It's pretty noisy, and there's points where the data is corrupted. Still, integration should smooth that out. Let's plot a small segment of the data - the first 200 data points, which is roughly 48 hours. We'll use that to think about integration. End of explanation """ pyplot.figure(figsize=(10,6)) pyplot.bar(data_southampton_2005[:200,0], data_southampton_2005[:200,1], label="Southampton, 2005", width=0.25) pyplot.legend() pyplot.xlabel("Time (hours)") pyplot.ylabel(r"Horizontal diffuse irradiance ($Wh \, m^{-2}$)") pyplot.show() """ Explanation: This is how the data looks as individual points. Let's instead plot it as a bar chart. End of explanation """ dt = 0.25 H_48hours = dt * numpy.sum(data_southampton_2005[:200,1]) print("Two day insolation is {}".format(H_48hours)) """ Explanation: The integral is the area under the curve. In deriving how to integrate, we split the domain into subintervals and approximate the area in that subinterval as the width of the subinterval times the (constant) value in that subinterval. In other words: find the area of each bar, and add them up. The area of each bar is the data value multiplied by the time step (a quarter of an hour, here). So the total integral over these two days is given by: End of explanation """ H = dt * numpy.sum(data_southampton_2005[:,1]) print("Insolation for Southampton in 2005 is {}".format(H)) """ Explanation: Is this a sensible number? There's roughly 8 hours of sun each day. On day 1 the maximum irradiance is around 10; on day 2 it's around 35. So the maximum value would be around $8 \times 10 + 8 \times 35 = 360$: we're in the right ballpark. 
So the total insolation for 2005 is: End of explanation """ H = dt * numpy.sum(data_southampton_2005[:,1]) print("Insolation for Southampton in 2005 is {}".format(H)) """ Explanation: Exercise In the data directory you'll find data files for all the CDT sites for both 2004 and 2005. Compute the insolation for at least one more. If you're feeling inspired, try the glob library to compute them all automatically. Solution End of explanation """ from glob import glob files = glob("../data/irradiance/*.txt") for f in files: end = f.split("/")[-1] place, year_txt = end.split("_") year = year_txt.split(".")[0] data = numpy.loadtxt(f) H = dt * numpy.sum(data[:,1]) print("Insolation for {} in {} is {}".format(place.title(), year, H)) """ Explanation: Improving the integral There's a problem with the simple rule that we've used, which is clear when we take a closer look at the data. Let's go back to our plots: End of explanation """ pyplot.figure(figsize=(10,6)) pyplot.bar(data_southampton_2005[120:170,0], data_southampton_2005[120:170,1], label="Southampton, 2005", width=0.25) pyplot.legend() pyplot.xlabel("Time (hours)") pyplot.ylabel(r"Horizontal diffuse irradiance ($Wh \, m^{-2}$)") pyplot.show() """ Explanation: There's a clear trend in the data, but we're sampling it very coarsely. We could imagine smoothing this data considerably. Even the simplest thing - joining the points with straight lines - would be an improvement: End of explanation """ pyplot.figure(figsize=(10,6)) pyplot.plot(data_southampton_2005[120:170,0], data_southampton_2005[120:170,1], label="Southampton, 2005") pyplot.legend() pyplot.xlabel("Time (hours)") pyplot.ylabel(r"Horizontal diffuse irradiance ($Wh \, m^{-2}$)") pyplot.show() """ Explanation: The integral is still the area under this curve. And the full domain is still split up into quarter hour subintervals. The difference is that the irradiance is now a straight line on each subinterval, not a constant value. So each subinterval is represented by a trapezoid, not by a bar: End of explanation """ pyplot.figure(figsize=(10,6)) pyplot.fill_between(data_southampton_2005[150:152,0], data_southampton_2005[150:152,1]) pyplot.fill_between(data_southampton_2005[151:153,0], data_southampton_2005[151:153,1]) pyplot.fill_between(data_southampton_2005[152:154,0], data_southampton_2005[152:154,1]) pyplot.xlabel("Time (hours)") pyplot.ylabel(r"Horizontal diffuse irradiance ($Wh \, m^{-2}$)") pyplot.show() """ Explanation: The area of a trapezoid is given by its width times the average of the value at the left and right edge. We now need some notation. We'll denote the times at which we have data as $t_j$, where $j$ is an integer, $j = 0, \dots, N$. So we have $N+1$ data points in all, leading to $N$ subintervals. The irradiance $I$ at time $t_j$ will be $I_j = I(t_j)$. The $j^{\text{th}}$ subinterval has $t_j$ at its left edge and $t_{j+1}$ on its right. So the area of this subinterval is \begin{equation} H_j = \frac{1}{2} \Delta t \, \left( I_j + I_{j+1} \right). \end{equation} The total insolation, which is the total integral, is the total area of all the subintervals: the sum of all the $H_j$: \begin{equation} H = \sum_{j=0}^{N-1} \frac{1}{2} \Delta t \, \left( I_j + I_{j+1} \right).
\end{equation} As the right edge of the $j^{\text{th}}$ subinterval is the left edge of the $(j+1)^{\text{th}}$ subinterval (except for the endpoints), every point except the first and last is counted twice. So we can rewrite this as \begin{equation} H = \frac{\Delta t}{2} \, \left( I_{0} + I_{N} + 2 \sum_{j=1}^{N-1} I_j \right). \end{equation} This is the trapezoidal rule. Let's test it: End of explanation """ def I_analytic(t): return numpy.sin(2*numpy.pi*t/47)**4 * (1.0 + 0.1 * numpy.sin(2*numpy.pi*t/(24*365))) t = numpy.linspace(0, 365*24, 10000) pyplot.figure(figsize=(10,6)) pyplot.plot(t, I_analytic(t)) pyplot.show() """ Explanation: In this case the result isn't just close to the previous version - it's identical. We need some evidence that it's actually going to be better in general. Convergence To do this we'll use an integrand given by a specific function rather than a data set. The function \begin{equation} I(t) = \sin^4 \left( \frac{2 \pi t}{47} \right) \times \left( 1 + 0.1 \times \sin \left(\frac{2 \pi t}{365 \times 24} \right) \right) \end{equation} is similar in form to the data. The integral has solution \begin{equation} \int_0^{24 \times 365} \text{d} t \, I(t) \approx 3286.814506640615. \end{equation} End of explanation """ k = numpy.arange(10, 19) Nsubintervals = 2**k dts = 24.0*365.0 / Nsubintervals error_riemann = numpy.zeros_like(dts) error_trapezoidal = numpy.zeros_like(dts) H_exact = 3286.814506640615 for i, dt in enumerate(dts): t = numpy.linspace(0, 24.0*365.0, Nsubintervals[i]+1) It = I_analytic(t) H_riemann = dt * numpy.sum(It) H_trapezoidal = dt/2*(It[0] + It[-1] + 2.0*numpy.sum(It[1:-1])) error_riemann[i] = abs(H_exact-H_riemann) error_trapezoidal[i] = abs(H_exact-H_trapezoidal) pyplot.figure(figsize=(10,6)) pyplot.loglog(dts, error_riemann, 'kx', label="Riemann integral") pyplot.loglog(dts, error_trapezoidal, 'bo', label="Trapezoidal rule") pyplot.loglog(dts, error_riemann[-1]/dts[-1]*dts, 'k--', label=r"$\propto \Delta t$") pyplot.loglog(dts, error_trapezoidal[-1]/dts[-1]**2*dts**2, 'b--', label=r"$\propto (\Delta t)^2$") pyplot.xlabel(r"$\Delta t$") pyplot.ylabel("Error") pyplot.legend(loc="upper left") pyplot.show() """ Explanation: We can compute the numerical solution using $N = 2^k$ subintervals, where $k = 10, \dots, 18$, using both the Riemann integral and the trapezoidal rule: End of explanation """ error_simpson = numpy.zeros_like(dts) for i, dt in enumerate(dts): t = numpy.linspace(0, 24.0*365.0, Nsubintervals[i]+1) It = I_analytic(t) H_simpson = dt/3*(It[0] + It[-1] + 2.0*numpy.sum(It[2:-1:2]) + 4.0*numpy.sum(It[1:-1:2])) error_simpson[i] = abs(H_exact-H_simpson) pyplot.figure(figsize=(10,6)) pyplot.loglog(dts, error_riemann, 'kx', label="Riemann integral") pyplot.loglog(dts, error_trapezoidal, 'bo', label="Trapezoidal rule") pyplot.loglog(dts, error_simpson, 'g^', label="Simpson's rule") pyplot.loglog(dts, error_riemann[-1]/dts[-1]*dts, 'k--', label=r"$\propto \Delta t$") pyplot.loglog(dts, error_trapezoidal[-1]/dts[-1]**2*dts**2, 'b--', label=r"$\propto (\Delta t)^2$") pyplot.loglog(dts, error_simpson[-1]/dts[-1]**4*dts**4, 'g--', label=r"$\propto (\Delta t)^4$") pyplot.xlabel(r"$\Delta t$") pyplot.ylabel("Error") pyplot.legend(loc="lower right") pyplot.show() """ Explanation: As the number of points increases, the timestep decreases, and the error decreases in both cases. But the error decreases much faster for the trapezoidal rule. 
The logarithmic scale shows that the error $E$ is going down as a power law, \begin{equation} \log(E) \sim s \log(\Delta t) \quad \implies \quad E \sim (\Delta t)^s, \end{equation} where $s$ is the slope of the straight line in the log-log plot. This shows that the Riemann integral error goes down by the amount that the timestep is reduced each time; the trapezoidal rule error goes down by this amount squared. When the timestep is reduced by a factor of ten, the Riemann integral error goes down by ten, the trapezoidal rule by a hundred. Algorithms like the Riemann integral are called first order ($s=1$), and like the trapezoidal rule are called second order ($s=2$). Exercise Show that Simpson's rule \begin{equation} H = \frac{\Delta t}{3} \, \left( I_{0} + I_{N} + 2 \sum_{j=1}^{N/2-1} I_{2 j} + 4 \sum_{j=1}^{N/2} I_{2 j - 1} \right) \end{equation} is fourth order ($s=4$). Simpson's rule requires an even number of subintervals $N$, and is the result of fitting a quadratic through each three neighbouring points. Solution End of explanation """ def I_spatial(x, y, t): return (1.0 + numpy.cos(2*numpy.pi*x)**2 * numpy.sin(4*numpy.pi*y)**4) * numpy.sin(2*numpy.pi*t/47)**4 x = numpy.linspace(0, 1) y = numpy.linspace(0, 1) X, Y = numpy.meshgrid(x, y) I10 = I_spatial(X, Y, 10) pyplot.figure(figsize=(10,6)) pyplot.contour(X, Y, I10) pyplot.xlabel(r"$x$") pyplot.xlabel(r"$y$") pyplot.title(r"Irradiance at $t=10$ hours") pyplot.show() """ Explanation: Multi dimensional integrals Let's suppose that irradiance varies with location for some odd location, \begin{equation} I(x, y, t) = \left( 1 + \cos^2 \left( 2 \pi x \right) \sin^4 \left( 4 \pi y \right) \right) \sin^4 \left( \frac{2 \pi t}{47} \right). \end{equation} We want to know the total insolation over a 24 hour period, over the area $x \in [0, 1], y \in [0, 1]$. The solution is $H \approx 10.464842515116615$. End of explanation """ Nintervals = 32 ts = numpy.linspace(0, 24, Nintervals+1) dt = 24 / Nintervals ys = numpy.linspace(0, 1, Nintervals+1) dy = 1 / Nintervals xs = numpy.linspace(0, 1, Nintervals+1) dx = 1 / Nintervals H = 0 for t in ts: integral_yx = 0 for y in ys: integral_x = 0 for x in xs: integral_x += dx * I_spatial(x, y, t) integral_yx += dy * integral_x H += dt * integral_yx print("Total insolation is {}".format(H)) """ Explanation: We can perform the integral along each axis: \begin{equation} H = \int_0^{24} \text{d}t \, \int_0^1 \text{d}y \, \int_0^1 \text{d}x \, I(x, y, t). \end{equation} Let's do that for the Riemann integral: End of explanation """ from numpy.random import rand k = numpy.arange(4, 25) Nsamples = 2**k error_mc = numpy.zeros(len(Nsamples)) H_exact = 3286.814506640615 for i, N in enumerate(Nsamples): t = rand(N) * 24*365 It = I_analytic(t) H_mc = (24*365) * numpy.sum(It) / N error_mc[i] = abs(H_exact-H_mc) pyplot.figure(figsize=(10,6)) pyplot.loglog(Nsamples, error_mc, 'bo', label="Monte Carlo") pyplot.loglog(Nsamples, error_mc[-1]/Nsamples[-1]**(-1/2)*Nsamples**(-1/2), 'b--', label=r"$\propto N^{-1/2}$") pyplot.xlabel(r"$N$") pyplot.ylabel("Error") pyplot.legend(loc="lower left") pyplot.show() """ Explanation: We've only used 32 subintervals in each dimension here, which won't be very accurate. But increasing the accuracy is hard: doubling the number of intervals, in order to double the accuracy, means we have to do eight times as much work. Even using Simpson's rule we'd be in trouble. This integral was over a standard, "real" space. 
If we wanted to work out the probability of something happening we'd want to integrate over parameter space. How many parameters would you typically have? As soon as the dimensions of the integral get above $\sim$5 the computational cost of standard integration rapidly spirals out of control. Monte Carlo integration Monte Carlo methods use random sampling. When integrating in one dimension, they choose $N$ points $t_j$ from the range of integration at random, and then approximate \begin{equation} H = \int_a^b \text{d}t \, I(t) \approx \frac{b - a}{N} \sum_{j=1}^N I(t_j). \end{equation} This may seem like an odd thing to do, but you'd expect it to work given enough points: it's saying that the average value of the integrand multiplied by the width of the interval is the integral. A quick implementation note. Generating (pseudo)-random numbers is hard. You should never do this yourself, but should always use a library. Thankfully, there's lots of good libraries out there. Let's repeat our convergence test of the one dimension case first. End of explanation """ k = numpy.arange(4, 25) Nsamples = 2**k error_mc_3d = numpy.zeros(len(Nsamples)) H_exact = 10.464842515116615 for i, N in enumerate(Nsamples): t = rand(N) * 24 x = rand(N) y = rand(N) It = I_spatial(x,y,t) H_mc = 24 * numpy.sum(It) / N error_mc[i] = abs(H_exact-H_mc) pyplot.figure(figsize=(10,6)) pyplot.loglog(Nsamples, error_mc, 'bo', label="Monte Carlo") pyplot.loglog(Nsamples, error_mc[-1]/Nsamples[-1]**(-1/2)*Nsamples**(-1/2), 'b--', label=r"$\propto N^{-1/2}$") pyplot.xlabel(r"$N$") pyplot.ylabel("Error") pyplot.legend(loc="lower left") pyplot.show() """ Explanation: This shows a number of interesting, but not particularly nice, features. First, you can see the randomness: every time you run this you get a different answer. Second, the convergence is slow: the Riemann integral, our worst method so far, converged as $(\Delta t) \propto N^{-1}$, so to make the error go down by a factor of two you increase the number of points by two. For the Monte Carlo integral, to make the error go down by a factor of two you increase the number of points by a factor of four. So why should we care? The answer is multiple dimensions: the convergence rate of Monte Carlo is independent of the number of dimensions. For our 3d case above, to make the error of the Riemann integral go down by two you increase the number of points by a factor of eight: twice as many as in the Monte Carlo method. As the number of dimensions increases, all the methods get steadily less practical, except Monte Carlo. We can check this with our 3d case above. 
End of explanation """ def P(x, y, t, theta, eta1, eta2): return I_spatial(x, y, t)*(1.0 + numpy.sin(2*numpy.pi*theta)**2*(1.0 + eta1 / 10.0)*(1.0 + eta2 / 5.0)) def Theta(x, y, t, theta, eta1, eta2): return P(x, y, t, theta, eta1, eta2) > 2.0 def Volume(x, y, t, theta, eta1, eta2): return numpy.ones_like(x) k = numpy.arange(4, 25) Nsamples = 2**k probabilities = numpy.zeros(len(Nsamples)) for i, N in enumerate(Nsamples): t = rand(N)*24 theta = rand(N)*numpy.pi x = rand(N) y = rand(N) eta1 = rand(N) eta2 = rand(N) thetas = Theta(x, y, t, theta, eta1, eta2) volumes = Volume(x, y, t, theta, eta1, eta2) probabilities[i] = numpy.sum(thetas) / numpy.sum(volumes) # The phase space volume cancels here, as does the number of samples probabilities """ Explanation: When we did this with the Riemann integral we used $32^3 \approx 3 \times 10^4$ points and got an error or $\approx 0.7$: for that number of points with the Monte Carlo method we've gained nearly an order of magnitude of accuracy. As the number of dimensions increases, the advantages of Monte Carlo increase alongside. Exercise The power absorbed by a PV cell depends on the available irradiation, its angle of incidence, and the efficiency of its components. Suppose that this is given by \begin{equation} P(x, y, t, \theta, \eta_1, \eta_2) = I(x, y, t) \left( 1 + \sin^2 \left( 2 \pi \theta \right) \right) \left( 1 + \frac{\eta_1}{10} \right) \left( 1 + \frac{\eta_2}{5} \right), \end{equation} where the irradiation is again given by \begin{equation} I(x, y, t) = \left( 1 + \cos^2 \left( 2 \pi x \right) \sin^4 \left( 4 \pi y \right) \right) \sin^4 \left( \frac{2 \pi t}{47} \right). \end{equation} The range of values that each parameter can take is $x, y, \eta_1, \eta_2 \in [0, 1], t \in [0, 24], \theta \in [0, \pi]$. Find the probability that the power absorbed is greater than $2$, where \begin{equation} \mathbb{P} \left( P > 2 \right) = \frac{ \int_0^{24} \text{d}t \int_0^{\pi} \text{d}\theta \int_0^1 \text{d}\eta_1 \int_0^1 \text{d}\eta_2 \int_0^1 \text{d}x \int_0^1 \text{d}y \,\, \Theta \left[ P \left( x, y, t, \theta, \eta_1, \eta_2 \right) - 2 \right]}{\int_0^{24} \text{d}t \int_0^{\pi} \text{d}\theta \int_0^1 \text{d}\eta_1 \int_0^1 \text{d}\eta_2 \int_0^1 \text{d}x \int_0^1 \text{d}y} \end{equation} with $\Theta(s)$ being the Heaviside function \begin{equation} \Theta(s) = \begin{cases} 1 & s > 0 \ 0 & s < 0. \end{cases} \end{equation} Solution End of explanation """
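# Optional extra check, an addition to the original solution rather than part of it: a
# rough error bar for the Monte Carlo probability estimate above, using the binomial
# standard error sqrt(p(1-p)/N), which shrinks like N**(-1/2) as discussed earlier.
p_hat = probabilities[-1]
N_last = Nsamples[-1]
std_error = numpy.sqrt(p_hat * (1.0 - p_hat) / N_last)
print("P(P > 2) is approximately {} +/- {}".format(p_hat, std_error))
""" Explanation: A small hedged sketch estimating how accurate the final Monte Carlo probability is: each random sample is a Bernoulli trial, so the standard error of the estimated probability is sqrt(p(1-p)/N), consistent with the N^{-1/2} convergence of Monte Carlo integration seen above. End of explanation """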
vadim-ivlev/STUDY
handson-data-science-python/DataScience-Python3/ConditionalProbabilityExercise.ipynb
mit
from numpy import random random.seed(0) totals = {20:0, 30:0, 40:0, 50:0, 60:0, 70:0} purchases = {20:0, 30:0, 40:0, 50:0, 60:0, 70:0} totalPurchases = 0 for _ in range(100000): ageDecade = random.choice([20, 30, 40, 50, 60, 70]) purchaseProbability = float(ageDecade) / 100.0 totals[ageDecade] += 1 if (random.random() < purchaseProbability): totalPurchases += 1 purchases[ageDecade] += 1 totals purchases totalPurchases """ Explanation: Conditional Probability Activity & Exercise Below is some code to create some fake data on how much stuff people purchase given their age range. It generates 100,000 random "people" and randomly assigns them as being in their 20's, 30's, 40's, 50's, 60's, or 70's. It then assigns a lower probability for young people to buy stuff. In the end, we have two Python dictionaries: "totals" contains the total number of people in each age group. "purchases" contains the total number of things purchased by people in each age group. The grand total of purchases is in totalPurchases, and we know the total number of people is 100,000. Let's run it and have a look: End of explanation """ PEF = float(purchases[30]) / float(totals[30]) print('P(purchase | 30s): ' + str(PEF)) """ Explanation: Let's play with conditional probability. First let's compute P(E|F), where E is "purchase" and F is "you're in your 30's". The probability of someone in their 30's buying something is just the percentage of how many 30-year-olds bought something: End of explanation """ PF = float(totals[30]) / 100000.0 print("P(30's): " + str(PF)) """ Explanation: P(F) is just the probability of being 30 in this data set: End of explanation """ PE = float(totalPurchases) / 100000.0 print("P(Purchase):" + str(PE)) """ Explanation: And P(E) is the overall probability of buying something, regardless of your age: End of explanation """ print("P(30's)P(Purchase)" + str(PE * PF)) """ Explanation: If E and F were independent, then we would expect P(E | F) to be about the same as P(E). But they're not; PE is 0.45, and P(E|F) is 0.3. So, that tells us that E and F are dependent (which we know they are in this example.) What is P(E)P(F)? End of explanation """ print("P(30's, Purchase)" + str(float(purchases[30]) / 100000.0)) """ Explanation: P(E,F) is different from P(E|F). P(E,F) would be the probability of both being in your 30's and buying something, out of the total population - not just the population of people in their 30's: End of explanation """ print((purchases[30] / 100000.0) / PF) """ Explanation: P(E,F) = P(E)P(F), and they are pretty close in this example. But because E and F are actually dependent on each other, and the randomness of the data we're working with, it's not quite the same. We can also check that P(E|F) = P(E,F)/P(F) and sure enough, it is: End of explanation """
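# Optional extra check, not part of the original activity: compute P(purchase | age) for
# every age group at once and compare it with the purchase probability that was used to
# generate the fake data (ageDecade / 100).
for ageDecade in sorted(totals.keys()):
    p_purchase_given_age = float(purchases[ageDecade]) / float(totals[ageDecade])
    print("P(purchase | {}s): {:.3f} (generating probability was {:.2f})".format(ageDecade, p_purchase_given_age, ageDecade / 100.0))
""" Explanation: A short sketch, added as an extra check rather than part of the original exercise, that loops over all of the age groups: the conditional probability P(purchase | age) recovered from the simulated counts should sit close to the age-dependent purchase probability used to generate the data, which is exactly why E and F are dependent in this example. End of explanation """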
AntonelliLab/seqcap_processor
docs/notebook/subdocs/align_contigs.ipynb
mit
%%bash source activate secapr_env secapr align_sequences -h """ Explanation: Align contigs We can use SECAPR to produce Multiple Sequence Alignments (MSAs) from the contig data. The alignment function align_sequences looks as follows: End of explanation """ from IPython.display import Image, display img1 = Image("../../data/processed/plots/contig_alignment_yield_overview.png",width=1000) display(img1) """ Explanation: Let's now create alignments from our contig data. In the example command below we added the flag --no-trim, which prevents the algorithm from cutting the alignment at the ends (= the full contig sequence length is preserved), and the flag --ambiguous, which allows sequences with ambiguous bases ('N') to be included in the alignments. You can decide not to use the --no-trim flag if you want all sequences in the alignments to be of the same length. In that case there are a bunch of additional flags (see above) that you can use to adjust the trimming process. secapr align_sequences --sequences ../../data/processed/target_contigs/extracted_target_contigs_all_samples.fasta --output ../../data/processed/alignments/contig_alignments/ --aligner mafft --output-format fasta --no-trim The align_sequences function by default creates multiple sequence alignments (MSAs) for all loci that are present in at least 3 samples. This leads to alignments for most of the targeted exons. We can use the SECAPR plotting function from the previous step again to create an overview showing which loci we have MSAs for. For this we need to provide the function with the path to the target contigs as well as the path to the alignment folder. We can provide the same output folder as before, since the function will automatically name this plot differently from the previous one, so that it is not overwritten. secapr plot_sequence_yield --extracted_contigs ../../data/processed/target_contigs --alignments ../../data/processed/alignments/contig_alignments/ --output ../../data/processed/plots End of explanation """
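# Optional sanity check, not part of the original SECAPR tutorial: count how many
# alignment files the align_sequences step wrote. This assumes the alignments were
# written as individual .fasta files into the output folder used above (consistent with
# the --output-format fasta flag); adjust the pattern if your file extensions differ.
import glob
alignment_files = glob.glob('../../data/processed/alignments/contig_alignments/*.fasta')
print('Number of alignment files produced: {}'.format(len(alignment_files)))
""" Explanation: A small hedged sketch to verify the output of the alignment step before moving on: it simply counts the files in the alignment output directory that was passed to align_sequences above. End of explanation """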
ES-DOC/esdoc-jupyterhub
notebooks/ncc/cmip6/models/sandbox-2/land.ipynb
gpl-3.0
# DO NOT EDIT ! from pyesdoc.ipython.model_topic import NotebookOutput # DO NOT EDIT ! DOC = NotebookOutput('cmip6', 'ncc', 'sandbox-2', 'land') """ Explanation: ES-DOC CMIP6 Model Properties - Land MIP Era: CMIP6 Institute: NCC Source ID: SANDBOX-2 Topic: Land Sub-Topics: Soil, Snow, Vegetation, Energy Balance, Carbon Cycle, Nitrogen Cycle, River Routing, Lakes. Properties: 154 (96 required) Model descriptions: Model description details Initialized From: -- Notebook Help: Goto notebook help page Notebook Initialised: 2018-02-15 16:54:25 Document Setup IMPORTANT: to be executed each time you run the notebook End of explanation """ # Set as follows: DOC.set_author("name", "email") # TODO - please enter value(s) """ Explanation: Document Authors Set document authors End of explanation """ # Set as follows: DOC.set_contributor("name", "email") # TODO - please enter value(s) """ Explanation: Document Contributors Specify document contributors End of explanation """ # Set publication status: # 0=do not publish, 1=publish. DOC.set_publication_status(0) """ Explanation: Document Publication Specify document publication status End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.key_properties.model_overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: Document Table of Contents 1. Key Properties 2. Key Properties --&gt; Conservation Properties 3. Key Properties --&gt; Timestepping Framework 4. Key Properties --&gt; Software Properties 5. Grid 6. Grid --&gt; Horizontal 7. Grid --&gt; Vertical 8. Soil 9. Soil --&gt; Soil Map 10. Soil --&gt; Snow Free Albedo 11. Soil --&gt; Hydrology 12. Soil --&gt; Hydrology --&gt; Freezing 13. Soil --&gt; Hydrology --&gt; Drainage 14. Soil --&gt; Heat Treatment 15. Snow 16. Snow --&gt; Snow Albedo 17. Vegetation 18. Energy Balance 19. Carbon Cycle 20. Carbon Cycle --&gt; Vegetation 21. Carbon Cycle --&gt; Vegetation --&gt; Photosynthesis 22. Carbon Cycle --&gt; Vegetation --&gt; Autotrophic Respiration 23. Carbon Cycle --&gt; Vegetation --&gt; Allocation 24. Carbon Cycle --&gt; Vegetation --&gt; Phenology 25. Carbon Cycle --&gt; Vegetation --&gt; Mortality 26. Carbon Cycle --&gt; Litter 27. Carbon Cycle --&gt; Soil 28. Carbon Cycle --&gt; Permafrost Carbon 29. Nitrogen Cycle 30. River Routing 31. River Routing --&gt; Oceanic Discharge 32. Lakes 33. Lakes --&gt; Method 34. Lakes --&gt; Wetlands 1. Key Properties Land surface key properties 1.1. Model Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of land surface model. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.key_properties.model_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 1.2. Model Name Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Name of land surface model code (e.g. MOSES2.2) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.key_properties.description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 1.3. Description Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 General description of the processes modelled (e.g. dymanic vegation, prognostic albedo, etc.) End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.land.key_properties.land_atmosphere_flux_exchanges') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "water" # "energy" # "carbon" # "nitrogen" # "phospherous" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 1.4. Land Atmosphere Flux Exchanges Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Fluxes exchanged with the atmopshere. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.key_properties.atmospheric_coupling_treatment') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 1.5. Atmospheric Coupling Treatment Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the treatment of land surface coupling with the Atmosphere model component, which may be different for different quantities (e.g. dust: semi-implicit, water vapour: explicit) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.key_properties.land_cover') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "bare soil" # "urban" # "lake" # "land ice" # "lake ice" # "vegetated" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 1.6. Land Cover Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Types of land cover defined in the land surface model End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.key_properties.land_cover_change') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 1.7. Land Cover Change Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe how land cover change is managed (e.g. the use of net or gross transitions) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.key_properties.tiling') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 1.8. Tiling Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the general tiling procedure used in the land surface (if any). Include treatment of physiography, land/sea, (dynamic) vegetation coverage and orography/roughness End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.key_properties.conservation_properties.energy') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 2. Key Properties --&gt; Conservation Properties TODO 2.1. Energy Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe if/how energy is conserved globally and to what level (e.g. within X [units]/year) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.key_properties.conservation_properties.water') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 2.2. Water Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe if/how water is conserved globally and to what level (e.g. within X [units]/year) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.key_properties.conservation_properties.carbon') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 2.3. 
Carbon Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe if/how carbon is conserved globally and to what level (e.g. within X [units]/year) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.key_properties.timestepping_framework.timestep_dependent_on_atmosphere') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 3. Key Properties --&gt; Timestepping Framework TODO 3.1. Timestep Dependent On Atmosphere Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is a time step dependent on the frequency of atmosphere coupling? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.key_properties.timestepping_framework.time_step') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 3.2. Time Step Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overall timestep of land surface model (i.e. time between calls) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.key_properties.timestepping_framework.timestepping_method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 3.3. Timestepping Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 General description of time stepping method and associated time step(s) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.key_properties.software_properties.repository') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 4. Key Properties --&gt; Software Properties Software properties of land surface code 4.1. Repository Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Location of code for this component. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.key_properties.software_properties.code_version') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 4.2. Code Version Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Code version identifier. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.key_properties.software_properties.code_languages') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 4.3. Code Languages Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Code language(s). End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.grid.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 5. Grid Land surface grid 5.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of the grid in the land surface End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.grid.horizontal.description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 6. Grid --&gt; Horizontal The horizontal grid in the land surface 6.1. 
Description Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the general structure of the horizontal grid (not including any tiling) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.grid.horizontal.matches_atmosphere_grid') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 6.2. Matches Atmosphere Grid Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Does the horizontal grid match the atmosphere? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.grid.vertical.description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 7. Grid --&gt; Vertical The vertical grid in the soil 7.1. Description Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the general structure of the vertical grid in the soil (not including any tiling) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.grid.vertical.total_depth') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 7.2. Total Depth Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 The total depth of the soil (in metres) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 8. Soil Land surface soil 8.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of soil in the land surface End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.heat_water_coupling') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 8.2. Heat Water Coupling Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the coupling between heat and water in the soil End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.number_of_soil layers') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 8.3. Number Of Soil layers Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 The number of soil layers End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.prognostic_variables') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 8.4. Prognostic Variables Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 List the prognostic variables of the soil scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.soil_map.description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 9. Soil --&gt; Soil Map Key properties of the land surface soil map 9.1. Description Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 General description of soil map End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.land.soil.soil_map.structure') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 9.2. Structure Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the soil structure map End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.soil_map.texture') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 9.3. Texture Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the soil texture map End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.soil_map.organic_matter') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 9.4. Organic Matter Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the soil organic matter map End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.soil_map.albedo') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 9.5. Albedo Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the soil albedo map End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.soil_map.water_table') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 9.6. Water Table Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the soil water table map, if any End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.soil_map.continuously_varying_soil_depth') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 9.7. Continuously Varying Soil Depth Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Does the soil properties vary continuously with depth? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.soil_map.soil_depth') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 9.8. Soil Depth Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the soil depth map End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.snow_free_albedo.prognostic') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 10. Soil --&gt; Snow Free Albedo TODO 10.1. Prognostic Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is snow free albedo prognostic? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.snow_free_albedo.functions') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "vegetation type" # "soil humidity" # "vegetation state" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 10.2. Functions Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N If prognostic, describe the dependancies on snow free albedo calculations End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.land.soil.snow_free_albedo.direct_diffuse') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "distinction between direct and diffuse albedo" # "no distinction between direct and diffuse albedo" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 10.3. Direct Diffuse Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If prognostic, describe the distinction between direct and diffuse albedo End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.snow_free_albedo.number_of_wavelength_bands') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 10.4. Number Of Wavelength Bands Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If prognostic, enter the number of wavelength bands used End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.hydrology.description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 11. Soil --&gt; Hydrology Key properties of the land surface soil hydrology 11.1. Description Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 General description of the soil hydrological model End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.hydrology.time_step') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 11.2. Time Step Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Time step of river soil hydrology in seconds End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.hydrology.tiling') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 11.3. Tiling Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the soil hydrology tiling, if any. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.hydrology.vertical_discretisation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 11.4. Vertical Discretisation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the typical vertical discretisation End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.hydrology.number_of_ground_water_layers') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 11.5. Number Of Ground Water Layers Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 The number of soil layers that may contain water End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.hydrology.lateral_connectivity') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "perfect connectivity" # "Darcian flow" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 11.6. Lateral Connectivity Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Describe the lateral connectivity between tiles End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.land.soil.hydrology.method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Bucket" # "Force-restore" # "Choisnel" # "Explicit diffusion" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 11.7. Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 The hydrological dynamics scheme in the land surface model End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.hydrology.freezing.number_of_ground_ice_layers') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 12. Soil --&gt; Hydrology --&gt; Freezing TODO 12.1. Number Of Ground Ice Layers Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 How many soil layers may contain ground ice End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.hydrology.freezing.ice_storage_method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 12.2. Ice Storage Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the method of ice storage End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.hydrology.freezing.permafrost') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 12.3. Permafrost Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the treatment of permafrost, if any, within the land surface scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.hydrology.drainage.description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 13. Soil --&gt; Hydrology --&gt; Drainage TODO 13.1. Description Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 General describe how drainage is included in the land surface scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.hydrology.drainage.types') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Gravity drainage" # "Horton mechanism" # "topmodel-based" # "Dunne mechanism" # "Lateral subsurface flow" # "Baseflow from groundwater" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 13.2. Types Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Different types of runoff represented by the land surface model End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.heat_treatment.description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 14. Soil --&gt; Heat Treatment TODO 14.1. Description Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 General description of how heat treatment properties are defined End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.heat_treatment.time_step') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 14.2. Time Step Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Time step of soil heat scheme in seconds End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.land.soil.heat_treatment.tiling') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 14.3. Tiling Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the soil heat treatment tiling, if any. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.heat_treatment.vertical_discretisation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 14.4. Vertical Discretisation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the typical vertical discretisation End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.heat_treatment.heat_storage') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Force-restore" # "Explicit diffusion" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 14.5. Heat Storage Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Specify the method of heat storage End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.heat_treatment.processes') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "soil moisture freeze-thaw" # "coupling with snow temperature" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 14.6. Processes Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Describe processes included in the treatment of soil heat End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.snow.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 15. Snow Land surface snow 15.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of snow in the land surface End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.snow.tiling') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 15.2. Tiling Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the snow tiling, if any. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.snow.number_of_snow_layers') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 15.3. Number Of Snow Layers Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 The number of snow levels used in the land surface scheme/model End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.snow.density') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "prognostic" # "constant" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 15.4. Density Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Description of the treatment of snow density End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.snow.water_equivalent') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "prognostic" # "diagnostic" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 15.5. 
Water Equivalent Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Description of the treatment of the snow water equivalent End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.snow.heat_content') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "prognostic" # "diagnostic" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 15.6. Heat Content Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Description of the treatment of the heat content of snow End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.snow.temperature') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "prognostic" # "diagnostic" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 15.7. Temperature Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Description of the treatment of snow temperature End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.snow.liquid_water_content') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "prognostic" # "diagnostic" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 15.8. Liquid Water Content Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Description of the treatment of snow liquid water End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.snow.snow_cover_fractions') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "ground snow fraction" # "vegetation snow fraction" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 15.9. Snow Cover Fractions Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Specify cover fractions used in the surface snow scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.snow.processes') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "snow interception" # "snow melting" # "snow freezing" # "blowing snow" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 15.10. Processes Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Snow related processes in the land surface scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.snow.prognostic_variables') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 15.11. Prognostic Variables Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 List the prognostic variables of the snow scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.snow.snow_albedo.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "prognostic" # "prescribed" # "constant" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 16. Snow --&gt; Snow Albedo TODO 16.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the treatment of snow-covered land albedo End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.land.snow.snow_albedo.functions') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "vegetation type" # "snow age" # "snow density" # "snow grain type" # "aerosol deposition" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 16.2. Functions Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N *If prognostic, * End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 17. Vegetation Land surface vegetation 17.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of vegetation in the land surface End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.time_step') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 17.2. Time Step Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Time step of vegetation scheme in seconds End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.dynamic_vegetation') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 17.3. Dynamic Vegetation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is there dynamic evolution of vegetation? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.tiling') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 17.4. Tiling Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the vegetation tiling, if any. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.vegetation_representation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "vegetation types" # "biome types" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 17.5. Vegetation Representation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Vegetation classification used End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.vegetation_types') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "broadleaf tree" # "needleleaf tree" # "C3 grass" # "C4 grass" # "vegetated" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 17.6. Vegetation Types Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N List of vegetation types in the classification, if any End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.biome_types') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "evergreen needleleaf forest" # "evergreen broadleaf forest" # "deciduous needleleaf forest" # "deciduous broadleaf forest" # "mixed forest" # "woodland" # "wooded grassland" # "closed shrubland" # "opne shrubland" # "grassland" # "cropland" # "wetlands" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 17.7. 
Biome Types Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N List of biome types in the classification, if any End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.vegetation_time_variation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "fixed (not varying)" # "prescribed (varying from files)" # "dynamical (varying from simulation)" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 17.8. Vegetation Time Variation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 How the vegetation fractions in each tile are varying with time End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.vegetation_map') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 17.9. Vegetation Map Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If vegetation fractions are not dynamically updated , describe the vegetation map used (common name and reference, if possible) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.interception') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 17.10. Interception Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is vegetation interception of rainwater represented? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.phenology') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "prognostic" # "diagnostic (vegetation map)" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 17.11. Phenology Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Treatment of vegetation phenology End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.phenology_description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 17.12. Phenology Description Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 General description of the treatment of vegetation phenology End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.leaf_area_index') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "prescribed" # "prognostic" # "diagnostic" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 17.13. Leaf Area Index Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Treatment of vegetation leaf area index End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.leaf_area_index_description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 17.14. Leaf Area Index Description Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 General description of the treatment of leaf area index End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.land.vegetation.biomass') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "prognostic" # "diagnostic" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 17.15. Biomass Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 *Treatment of vegetation biomass * End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.biomass_description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 17.16. Biomass Description Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 General description of the treatment of vegetation biomass End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.biogeography') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "prognostic" # "diagnostic" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 17.17. Biogeography Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Treatment of vegetation biogeography End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.biogeography_description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 17.18. Biogeography Description Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 General description of the treatment of vegetation biogeography End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.stomatal_resistance') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "light" # "temperature" # "water availability" # "CO2" # "O3" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 17.19. Stomatal Resistance Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Specify what the vegetation stomatal resistance depends on End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.stomatal_resistance_description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 17.20. Stomatal Resistance Description Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 General description of the treatment of vegetation stomatal resistance End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.prognostic_variables') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 17.21. Prognostic Variables Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 List the prognostic variables of the vegetation scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.energy_balance.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 18. Energy Balance Land surface energy balance 18.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of energy balance in land surface End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.land.energy_balance.tiling') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 18.2. Tiling Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the energy balance tiling, if any. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.energy_balance.number_of_surface_temperatures') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 18.3. Number Of Surface Temperatures Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 The maximum number of distinct surface temperatures in a grid cell (for example, each subgrid tile may have its own temperature) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.energy_balance.evaporation') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "alpha" # "beta" # "combined" # "Monteith potential evaporation" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 18.4. Evaporation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Specify the formulation method for land surface evaporation, from soil and vegetation End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.energy_balance.processes') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "transpiration" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 18.5. Processes Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Describe which processes are included in the energy balance scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 19. Carbon Cycle Land surface carbon cycle 19.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of carbon cycle in land surface End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.tiling') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 19.2. Tiling Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the carbon cycle tiling, if any. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.time_step') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 19.3. Time Step Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Time step of carbon cycle in seconds End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.anthropogenic_carbon') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "grand slam protocol" # "residence time" # "decay time" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 19.4. Anthropogenic Carbon Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Describe the treament of the anthropogenic carbon pool End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.land.carbon_cycle.prognostic_variables') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 19.5. Prognostic Variables Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 List the prognostic variables of the carbon scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.vegetation.number_of_carbon_pools') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 20. Carbon Cycle --&gt; Vegetation TODO 20.1. Number Of Carbon Pools Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Enter the number of carbon pools used End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.vegetation.carbon_pools') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 20.2. Carbon Pools Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List the carbon pools used End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.vegetation.forest_stand_dynamics') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 20.3. Forest Stand Dynamics Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the treatment of forest stand dyanmics End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.vegetation.photosynthesis.method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 21. Carbon Cycle --&gt; Vegetation --&gt; Photosynthesis TODO 21.1. Method Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the general method used for photosynthesis (e.g. type of photosynthesis, distinction between C3 and C4 grasses, Nitrogen depencence, etc.) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.maintainance_respiration') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 22. Carbon Cycle --&gt; Vegetation --&gt; Autotrophic Respiration TODO 22.1. Maintainance Respiration Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the general method used for maintainence respiration End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.growth_respiration') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 22.2. Growth Respiration Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the general method used for growth respiration End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 23. Carbon Cycle --&gt; Vegetation --&gt; Allocation TODO 23.1. Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the general principle behind the allocation scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_bins') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "leaves + stems + roots" # "leaves + stems + roots (leafy + woody)" # "leaves + fine roots + coarse roots + stems" # "whole plant (no distinction)" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 23.2. Allocation Bins Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Specify distinct carbon bins used in allocation End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_fractions') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "fixed" # "function of vegetation type" # "function of plant allometry" # "explicitly calculated" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 23.3. Allocation Fractions Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe how the fractions of allocation are calculated End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.vegetation.phenology.method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 24. Carbon Cycle --&gt; Vegetation --&gt; Phenology TODO 24.1. Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the general principle behind the phenology scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.vegetation.mortality.method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 25. Carbon Cycle --&gt; Vegetation --&gt; Mortality TODO 25.1. Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the general principle behind the mortality scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.litter.number_of_carbon_pools') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 26. Carbon Cycle --&gt; Litter TODO 26.1. Number Of Carbon Pools Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Enter the number of carbon pools used End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.litter.carbon_pools') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 26.2. Carbon Pools Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List the carbon pools used End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.litter.decomposition') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 26.3. Decomposition Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List the decomposition methods used End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.litter.method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 26.4. Method Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List the general method used End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.land.carbon_cycle.soil.number_of_carbon_pools') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 27. Carbon Cycle --&gt; Soil TODO 27.1. Number Of Carbon Pools Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Enter the number of carbon pools used End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.soil.carbon_pools') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 27.2. Carbon Pools Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List the carbon pools used End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.soil.decomposition') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 27.3. Decomposition Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List the decomposition methods used End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.soil.method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 27.4. Method Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List the general method used End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.is_permafrost_included') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 28. Carbon Cycle --&gt; Permafrost Carbon TODO 28.1. Is Permafrost Included Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is permafrost included? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.emitted_greenhouse_gases') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 28.2. Emitted Greenhouse Gases Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List the GHGs emitted End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.decomposition') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 28.3. Decomposition Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List the decomposition methods used End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.impact_on_soil_properties') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 28.4. Impact On Soil Properties Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the impact of permafrost on soil properties End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.nitrogen_cycle.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 29. Nitrogen Cycle Land surface nitrogen cycle 29.1. 
Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of the nitrogen cycle in the land surface End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.nitrogen_cycle.tiling') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 29.2. Tiling Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the notrogen cycle tiling, if any. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.nitrogen_cycle.time_step') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 29.3. Time Step Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Time step of nitrogen cycle in seconds End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.nitrogen_cycle.prognostic_variables') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 29.4. Prognostic Variables Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 List the prognostic variables of the nitrogen scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.river_routing.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 30. River Routing Land surface river routing 30.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of river routing in the land surface End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.river_routing.tiling') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 30.2. Tiling Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the river routing, if any. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.river_routing.time_step') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 30.3. Time Step Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Time step of river routing scheme in seconds End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.river_routing.grid_inherited_from_land_surface') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 30.4. Grid Inherited From Land Surface Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is the grid inherited from land surface? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.river_routing.grid_description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 30.5. Grid Description Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 General description of grid, if not inherited from land surface End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.river_routing.number_of_reservoirs') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 30.6. 
Number Of Reservoirs Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Enter the number of reservoirs End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.river_routing.water_re_evaporation') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "flood plains" # "irrigation" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 30.7. Water Re Evaporation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N TODO End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.river_routing.coupled_to_atmosphere') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 30.8. Coupled To Atmosphere Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Is river routing coupled to the atmosphere model component? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.river_routing.coupled_to_land') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 30.9. Coupled To Land Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the coupling between land and rivers End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.river_routing.quantities_exchanged_with_atmosphere') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "heat" # "water" # "tracers" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 30.10. Quantities Exchanged With Atmosphere Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N If couple to atmosphere, which quantities are exchanged between river routing and the atmosphere model components? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.river_routing.basin_flow_direction_map') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "present day" # "adapted for other periods" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 30.11. Basin Flow Direction Map Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 What type of basin flow direction map is being used? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.river_routing.flooding') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 30.12. Flooding Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the representation of flooding, if any End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.river_routing.prognostic_variables') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 30.13. Prognostic Variables Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 List the prognostic variables of the river routing End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.river_routing.oceanic_discharge.discharge_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "direct (large rivers)" # "diffuse" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 31. 
River Routing --&gt; Oceanic Discharge TODO 31.1. Discharge Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Specify how rivers are discharged to the ocean End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.river_routing.oceanic_discharge.quantities_transported') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "heat" # "water" # "tracers" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 31.2. Quantities Transported Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Quantities that are exchanged from river-routing to the ocean model component End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.lakes.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 32. Lakes Land surface lakes 32.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of lakes in the land surface End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.lakes.coupling_with_rivers') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 32.2. Coupling With Rivers Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Are lakes coupled to the river routing model component? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.lakes.time_step') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 32.3. Time Step Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Time step of lake scheme in seconds End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.lakes.quantities_exchanged_with_rivers') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "heat" # "water" # "tracers" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 32.4. Quantities Exchanged With Rivers Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N If coupling with rivers, which quantities are exchanged between the lakes and rivers End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.lakes.vertical_grid') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 32.5. Vertical Grid Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the vertical grid of lakes End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.lakes.prognostic_variables') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 32.6. Prognostic Variables Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 List the prognostic variables of the lake scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.lakes.method.ice_treatment') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 33. Lakes --&gt; Method TODO 33.1. Ice Treatment Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is lake ice included? 
End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.lakes.method.albedo') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "prognostic" # "diagnostic" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 33.2. Albedo Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the treatment of lake albedo End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.lakes.method.dynamics') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "No lake dynamics" # "vertical" # "horizontal" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 33.3. Dynamics Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Which dynamics of lakes are treated? horizontal, vertical, etc. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.lakes.method.dynamic_lake_extent') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 33.4. Dynamic Lake Extent Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is a dynamic lake extent scheme included? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.lakes.method.endorheic_basins') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 33.5. Endorheic Basins Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Basins not flowing to ocean included? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.lakes.wetlands.description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 34. Lakes --&gt; Wetlands TODO 34.1. Description Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the treatment of wetlands, if any End of explanation """
google/physics-math-tutorials
colabs/Multivariate Calculus for ML, 1 of 2.ipynb
apache-2.0
#@title Python imports import collections import datetime from functools import partial import math import numpy as np import pandas as pd import matplotlib.pyplot as plt import matplotlib.ticker as ticker from scipy import stats import seaborn as sns from sklearn.datasets import make_regression from sklearn.model_selection import train_test_split # Make colab plots larger plt.rcParams['figure.figsize'] = [8, 6] plt.rcParams['figure.dpi'] = 100 """ Explanation: Copyright 2021 Google LLC Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at https://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. Make a copy of this notebook! End of explanation """ import jax import jax.numpy as jnp from jax import jit, grad, vmap, api, random, jacfwd, jacrev from jax.experimental import optimizers, stax """ Explanation: Section 1 In this section we'll cover: Using Jax Computing derivatives with Jax Gradient Descent (single variable) Newton's Method (single variable) Jax Autodiff cookbook Jax is a python library that provides the ability to differentiate many python functions. End of explanation """ def square(x): return x * x # Compute the derivative with grad from Jax dsquare = grad(square) # Plot the function and the derivative domain = np.arange(0, 2.5, 0.1) plt.plot(domain, square(domain), label="$y=x^2$") plt.plot(domain, list(map(dsquare, domain)), label="$y'=\\frac{dy}{dx} = 2x$") # Note can also use JAX's vmap: vmap(dsquare)(domain) instead of list(map(dsquare, domain)) plt.legend() plt.show() """ Explanation: Basic autodiff example Let's take the derivative of $y = f(x) = x^2$, which we know to be $f'(x) = 2x$. End of explanation """ ## Your code here # Compute the sigmoid def sigmoid(x): pass # Compute the derivative (using grad) # Compare derivative values to f(x) * (1 - f(x)) # @title Solution (double-click to show) # Compute the sigmoid def sigmoid(x): return 1. / (1. + jnp.exp(-x)) # Compute the derivative (using grad) deriv = grad(sigmoid) # Compare derivative values to f(x) * (1 - f(x)) xs = np.arange(-3, 3.1, 0.1) ys = [] for x in xs: ys.append(deriv(x) - sigmoid(x) * (1. - sigmoid(x))) plt.scatter(xs, ys) plt.title("Differences between Jax derivative and $f(x)(1-f(x))$\nNote the scale modifier in the upper left") plt.show() """ Explanation: Exercise Write a function to compute the sigmoid $$f(x) = \frac{1}{1 + e^{-x}}$$ using jnp.exp to compute $e^x$ -- need to use the Jax version of numpy for autodiff to work. Then verify that $$ f'(x) = f(x) (1 - f(x))$$ For example, compare some explicit values and plot the differences between the derivative and $f(x) (1 - f(x))$. End of explanation """ def f(x): return (x**2 - x)**2 xs = np.arange(-1, 2.05, 0.05) ys = f(xs) plt.plot(xs, ys) plt.show() """ Explanation: Single variable gradient descent examples In this example we solve $x^2=x$, which we know has two solutions. Different initial starting points yield convergence to different solutions, or non-convergence to either solution at $x_0 = 1/2$. We need to turn the problem into one of finding local extrema. 
So we consider the function $f(x) = \left( x^2 - x\right)^2$, which is differentiable and has local minimum at $x=0$ and $x=1$. We square so that the points where $x^2=x$ are minima instead of zeros of a function like $g(x) = x^2 - x$. Notice in the plot below that the function also has a local maximum at $x=1/2$, which is centered between the two solutions. Intuitively, gradient descent starting at $x_0=1/2$ will not move because there's no reason to favor either local minumum. Let's plot the function first: End of explanation """ def gradient_descent(dfunc, x0, iterations=100, alpha=0.1): """dfunc is the derivative of the function on which we perform descent.""" xs = [x0] for i in range(iterations): x = xs[-1] x = x - alpha * dfunc(float(x)) xs.append(x) return xs # Let's try it on our function f now. # Compute the derivative df = grad(f) # Try some different starting points. for x0 in [0.25, 0.5, 0.50001, 0.85]: xs = gradient_descent(df, x0, iterations=30, alpha=1.) plt.plot(range(len(xs)), xs, label=x0) plt.xlabel("iterations") plt.legend() plt.show() """ Explanation: Let's define a function to compute iterations of gradient descent. $$\begin{eqnarray} x_{n+1} &=& x_n - \alpha f'(x_n) \ \end{eqnarray}$$ End of explanation """ ## Your code here """ Explanation: Check your understanding: Explain what happened with each of the four curves in the plot. Exercise: What happens if we decrease the learning rate $\alpha$? Recreate the plot above using $\alpha=0.1$ instead. End of explanation """ def f2(x): return (x*x - 3)**2 df = grad(f2) x0 = 2 for alpha in [0.08, 0.01]: xs = gradient_descent(df, x0, iterations=40, alpha=alpha) plt.plot(range(len(xs)), xs, label="$\\alpha = {}$".format(alpha)) plt.xlabel("iterations") # Plot the correct value sqrt3 = math.pow(3, 0.5) n = len(xs) plt.plot(range(n), [sqrt3]*n, label="$\\sqrt{3}$", linestyle="--") plt.legend() plt.show() print("Sqrt(3) =", sqrt3) """ Explanation: Example 2 In this example we use gradient descent to approximate $\sqrt{3}$. We use the function $f(x) = \left(x^2 - 3\right)^2$ and construct a sequence converging to the positive solution. In this case notice the impact of the learning rate both on the time to convergence and whether the convergence is monotonic or oscillitory. Larger learning rates such as $\alpha=1$ can cause divergence. End of explanation """ def f3(x): """Define the function f(x) = (x - e^(-x))^2.""" ## Your code here pass # Compute the gradient # Initial guess x0 = 0.4 ## Add code here for gradient descent using the functions above ## Plot the gradient descent values #@title Solution (double click to expand) def f3(x): """Define the function f(x) = (x - e^(-x))^2.""" return (x - jnp.exp(-x))**2 # Compute the gradient df = grad(f3) # Initial guess x0 = 0.4 for alpha in [0.01, 0.1]: xs = gradient_descent(df, x0, iterations=50, alpha=alpha) plt.plot(range(len(xs)), xs, label="$\\alpha = {}$".format(alpha)) plt.xlabel("iterations") plt.legend() plt.show() print("Final iteration:", xs[-1]) """ Explanation: Exercise Solve the equation $x = e^{-x}$, which does not have an easily obtainable solution algebraically. The solution is approximately $x = 0.567$. Again note the impact of the learning rate. Use jnp.exp for the exponential function. 
End of explanation """ def newtons_method(func, x0, iterations=100, alpha=1.): dfunc = grad(func) xs = [x0] for i in range(iterations): x = xs[-1] x = x - alpha * func(x) / dfunc(float(x)) xs.append(x) return xs """ Explanation: Newton's method, single variable We can use Newton's method to find zeros of functions. Since local extrema occur at zeros of the derivative, we can apply Newton's method to the first derivative to obtain a second-order alternative to gradient descent. $$\begin{eqnarray} x_{n+1} &=& x_n - \alpha \frac{f'(x_n)}{f''(x_n)} \ \end{eqnarray}$$ End of explanation """ def f(x): return (x**2 - 3)**2 # Let's make a function we can reuse def compare_gradient_newton(func, x0, alpha=0.01, iterations=50): # Compute the first and second derivatives df = grad(func) # Compute Newton's method iterations xs = newtons_method(df, x0, alpha=alpha, iterations=iterations) # Compute gradient descent with same alpha xs2 = gradient_descent(df, x0, alpha=alpha, iterations=iterations) # Plot it all plt.plot(range(len(xs2)), xs2, label="Gradient Descent") plt.plot(range(len(xs)), xs, label="Newton's method") plt.xlabel("iterations") plt.legend() plt.title("$\\alpha = {}$".format(alpha)) # Plot the solution sqrt3 = math.pow(3, 0.5) n = len(xs) plt.plot(range(n), [sqrt3]*n, label="$\\sqrt{3}$", linestyle="--") plt.show() compare_gradient_newton(f, 2., alpha=0.01, iterations=50) """ Explanation: Let's repeat the example of finding the value of $\sqrt{3}$ with Newton's method and compare to gradient descent. For small $\alpha$, gradient descent seems to perform better: End of explanation """ compare_gradient_newton(f, 2., alpha=0.1, iterations=50) """ Explanation: But for larger $\alpha$, Newton's method is better behaved and gradient descent fails to converge. End of explanation """ def f(x): return x**2 - 3 xs = newtons_method(f, 2., alpha=0.5, iterations=10) plt.plot(range(len(xs)), xs, label="$\\alpha = {}$".format(alpha)) plt.xlabel("iterations") # Plot the solution sqrt3 = math.pow(3, 0.5) n = len(xs) plt.plot(range(n), [sqrt3]*n, label="$\\sqrt{3}$", linestyle="--") print(xs[-1]) """ Explanation: In this case we can also apply Newton's method with just the first derivative to find a zero of $x^2 - 3$, i.e. we don't have to look for a minimum of $(x^2 - 3)^2$ since Newton's method can also find zeros of functions. End of explanation """ def f(x, y): return x * y * y # Compute the partial derivatives with grad from Jax # Use float as arguments, else Jax will complain print("f(3, 1)=", f(3., 1.)) # argnums allows us to specify which variable to take the derivative of, positionally print("Partial x derivative at (3, 1):", grad(f, argnums=0)(3., 1.)) print("Partial y derivative at (3, 1):", grad(f, argnums=1)(3., 1.)) # We can get both partials at the same time print("Gradient vector at (3, 1):", grad(f, (0, 1))(3., 1.)) g = [float(z) for z in grad(f, (0, 1))(3., 1.)] print("Gradient vector at (3, 1):", g) """ Explanation: Section 2 Now we'll look at multivariate derivatives and gradient descent, again using Jax. 
Multivariate derivatives $$f(x, y) = x y^2$$ $$ \nabla f = [y^2, 2 x y]$$ End of explanation """ def f(x, y): return x * x + y * y partial_x = grad(f, argnums=0) partial_y = grad(f, argnums=1) xs = np.arange(-1, 1.25, 0.25) ys = np.arange(-1, 1.25, 0.25) plt.clf() # Compute and plot the gradient vectors for x in xs: for y in ys: u = partial_x(x, y) v = partial_y(x, y) plt.arrow(x, y, u, v, length_includes_head=True, head_width=0.1) plt.xlim(-3, 3) plt.ylim(-3, 3) plt.show() """ Explanation: We can plot some of the vectors of a gradient. Let's consider $$f(x, y) = x^2 + y^2$$ The partial derivatives are $$\frac{\partial f}{\partial x} = 2x$$ $$\frac{\partial f}{\partial y} = 2y$$ So the gradient is $$\nabla f = [2x, 2y]^T$$ End of explanation """ n = 4 def f(x): return sum(x) test_point = [1., 2., 3., 4.] print("x = ", test_point) print("Gradient(x):", [float(x) for x in grad(f)(test_point)]) ## Try other test points, even random ones: test_point = np.random.rand(4) print() print("x = ", test_point) print("Gradient(x):", [float(x) for x in grad(f)(test_point)]) """ Explanation: Jacobians and Hessians with Jax Let's verify some of the example from the slides. Let $f(x) = \sum_i{x_i} = x_1 + \cdots + x_n$. Then the gradient is $\nabla f (x) = [1, \ldots, 1]^T.$ End of explanation """ # @title Solution (double click to show) def sum_squares(x): return jnp.dot(x, x) test_point = np.array([1., 2., 3., 4.]) print("x = ", test_point) print("Gradient(x):", [float(x) for x in grad(sum_squares)(test_point)]) """ Explanation: Exercise Compute the gradient of the function that sums the squares of the elements of a vector. End of explanation """ A = np.array([[1., 2.], [3., 4.]]) def f(x): return jnp.dot(A, x) x = np.array([1., 1.]) jacfwd(f)(x) # Try some other matrices A and a 3x3 matrix # Note that Jax handles larger matrices with the same code # But you'll need a length 3 vector for x """ Explanation: Let's try a Jacobian now. In the slides we saw that Jacobian of $f(\mathbf{x}) = Ax$ is $A$. Let's verify with Jax. End of explanation """ A = np.array([[1., 2.], [3., 4.]]) def f(x): return jnp.dot(A, x) def hessian(f): return jacfwd(jacrev(f)) x = np.array([1., 1.]) hessian(f)(x) # Try some other matrices A and a 3x3 matrix """ Explanation: We can compute the Hessian by taking the Jacobian twice. In this case, $f(\mathbf{x}) = Ax$ is a linear function, so the second derivatives should all be zero. End of explanation """ # Your code here def f(x): """Compute x . A x""" pass # Compute the first and second derivatives at a test point x #@title Solution (double-click to show) A = np.array([[1., 0.], [0., 3.]]) def f(x): return jnp.dot(x, jnp.dot(A, x)) x = np.array([2., -1.]) print(grad(f)(x)) print(hessian(f)(x)) """ Explanation: Exercise Now try to take derivatives of the function $f(\mathbf{x}) = x \cdot A x$. Hints: * Is it scalar or vector-valued? * Does it matter if $A$ is symmetric $(A = A^T)$ or anti-symmetric $(A = -A^T)$? 
End of explanation """ ## Your code here def entropy(x): pass #@title Solution (double-click to show) def entropy(x): return - sum(a * jnp.log(a) for a in x) x = np.array([1./2, 1./2]) print(entropy(x)) print(grad(entropy)(x)) print(hessian(entropy)(x)) """ Explanation: Exercise: Entropy Compute the Gradient and Hessian of the Shannon entropy: $$S = -\sum_{i}{x_i \log x_i}$$ Note that for a test point you'll need to have all the elements positive and summing to 1, so a good choice is $$[1 / n, \ldots, 1 / n]$$ End of explanation """ ## Adapted from JAX docs: https://coax.readthedocs.io/en/latest/examples/linear_regression/jax.html # Generate some data using sklearn X, y = make_regression(n_features=1, noise=10) X, X_test, y, y_test = train_test_split(X, y) # Plot the data plt.scatter([x[0] for x in X], y) plt.title("Randomly generated dataset") plt.show() """ Explanation: Example: Linear Regression with Jax Given some data of the form $(x_i, y_i)$, let's find a best fit line $$ y = m x + b $$ by minimizing the sum of squared errors. $$ S = \sum_{i}{\left(y_i - (m x_i + b) \right)^2}$$ End of explanation """ # In JAX, we can specify our parameters as various kinds of Python objects, # including dictionaries. # Initial model parameters params = { 'w': jnp.zeros(X.shape[1:]), 'b': 0. } # The model function itself, a linear function. def forward(params, X): """y = w x + b""" return jnp.dot(X, params['w']) + params['b'] # The loss function we want to minimize, the sum of squared errors # of the model prediction versus the true values def sse(params, X, y): """Sum of squared errors (mean)""" err = forward(params, X) - y return jnp.mean(jnp.square(err)) # Function to update our parameters in each step of gradient descent def update(params, grads, alpha=0.1): return jax.tree_multimap(lambda p, g: p - alpha * g, params, grads) # We'll define a gradient descent function similarly to as before, # and we'll track the loss function values for plotting # Note also that we compute our gradients on the training data X and y # but our loss function on the test data X_test and y_test def gradient_descent(f, params, X, X_test, y, y_test, alpha=0.1, iterations=30): """ Apply gradient descent to the function f with starting point x_0 and learning rate \alpha. x_{n+1} = x_n - \alpha d_f(x_n) """ grad_fn = grad(f) params_ = [] losses = [] for _ in range(iterations): grads = grad_fn(params, X, y) params = update(params, grads, alpha) params_.append(params) loss = f(params, X_test, y_test) losses.append(loss) return params_, losses # Function to plot our residuals to see how the loss function progresses def plot_residuals(params, model_fn, X, y, color='blue'): res = y - model_fn(params, X) plt.hist(res, bins=10, color=color, alpha=0.5) # Find the best fit line fit_params, losses = gradient_descent(sse, params, X, X_test, y, y_test) # Plot the decrease in the loss function over iterations. 
plt.plot(range(len(losses)), losses) plt.ylabel("SSE") plt.xlabel("Iteration") plt.title("Loss evolution\nMinimizing sum of squared errors") plt.show() # Compare the errors of our initial guess model with the final model # Note the differences in scales plot_residuals(params, forward, X, y) plt.title("Histogram of initial residuals (errors for each point)") plt.show() plot_residuals(fit_params[-1], forward, X, y, color='green') plt.title("Histogram of final residuals (errors for each point)") plt.show() # Let's plot the best fit line # Plot the data xs = [x[0] for x in X] plt.scatter(xs, y) plt.title("Best fit line") # Plot the best fit line params = fit_params[-1] xs = np.arange(min(xs), max(xs), 0.1) m = float(params['w']) b = float(params['b']) ys = [m * x + b for x in xs] plt.plot(xs, ys, color='black') plt.show() """ Explanation: Read through the following code, which minimizes the sum of squared errors for a linear model. End of explanation """ ## Minimize MAE instead of SSE # MAE is the p=1 case of this function. def lp_norm(p=2): def norm(params, X, y): err = forward(params, X) - y return jnp.linalg.norm(err, ord=p) return norm # Generate some noisier data X, y = make_regression(n_features=1, noise=100, bias=5) X, X_test, y, y_test = train_test_split(X, y) fit_params, losses = gradient_descent(lp_norm(p=1.), params, X, X_test, y, y_test) # Plot the data xs = [x[0] for x in X] plt.scatter(xs, y) plt.title("Best fit line") # Plot the best fit line params = fit_params[-1] xs = np.arange(min(xs), max(xs), 0.1) m = float(params['w']) b = float(params['b']) ys = [m*x+b for x in xs] plt.plot(xs, ys, color='black', label="MAE") # Compare to SEE best fit line # Find the best fit line fit_params, losses = gradient_descent(sse, params, X, X_test, y, y_test) # Plot the best fit line params = fit_params[-1] xs = np.arange(min(xs), max(xs), 0.1) m = float(params['w']) b = float(params['b']) ys = [m*x+b for x in xs] plt.plot(xs, ys, color='green', label="SSE") plt.legend() plt.show() """ Explanation: We can easily use another loss function, like the mean absolute error, where use the absolute value of residuals instead of the square, which reduces the impact of outliers. $$ S = \sum_{i}{\left|y_i - (m x_i + b) \right|}$$ This will give us a different best fit line for some data sets. End of explanation """
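"""
Explanation: As a short follow-up sketch (an addition for illustration, not part of the original notebook), we can put a number on the difference between the two fits by evaluating both loss functions on the held-out test split. This reuses the forward, sse, lp_norm and gradient_descent helpers and the most recently generated X/X_test/y/y_test arrays defined above; the zero initial parameters and the default learning rate and iteration count are arbitrary choices.
End of explanation
"""
# Hedged sketch: compare the SSE-trained and MAE-trained models on the test split.
# All names (forward, sse, lp_norm, gradient_descent, X, X_test, y, y_test) come
# from the cells above; init_params simply restarts both fits from zero weights.
init_params = {'w': jnp.zeros(X.shape[1:]), 'b': 0.}

sse_fits, _ = gradient_descent(sse, init_params, X, X_test, y, y_test)
mae_fits, _ = gradient_descent(lp_norm(p=1.), init_params, X, X_test, y, y_test)

for name, fit in [("SSE-trained", sse_fits[-1]), ("MAE-trained", mae_fits[-1])]:
    test_sse = sse(fit, X_test, y_test)                          # mean squared error on test data
    test_mae = jnp.mean(jnp.abs(forward(fit, X_test) - y_test))  # mean absolute error on test data
    print(name, "| test SSE loss:", float(test_sse), "| test MAE:", float(test_mae))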
mwickert/SP-Comm-Tutorial-using-scikit-dsp-comm
hardware_configure/Pyaudio_Test.ipynb
bsd-2-clause
Audio('c_major.wav') """ Explanation: Playback Using the Notebook Audio Widget This interface is used often when developing algorithms that involve processing signal samples that result in audible sounds. You will see this in the tutorial. Processing is done before hand as an analysis task, then the samples are written to a .wav file for playback using the PC audio system. End of explanation """ fs,x = sigsys.from_wav('c_major.wav') """ Explanation: Below I import the .wav file so I can work with the signal samples: End of explanation """ specgram(x,NFFT=2**13,Fs=fs); ylim([0,1000]) title(r'Visualize the 3 Pitches of a C-Major Chord') xlabel(r'Time (s)') ylabel(r'Frequency (Hz)'); """ Explanation: Here I visualize the C-major chord using the spectrogram to see the chord as being composed or the fundamental or root plus two third and fifth harmonics or overtones. End of explanation """ import pyaudio import wave import time import sys """PyAudio Example: Play a wave file (callback version)""" wf = wave.open('Music_Test.wav', 'rb') #wf = wave.open('c_major.wav', 'rb') print('Sample width in bits: %d' % (8*wf.getsampwidth(),)) print('Number of channels: %d' % wf.getnchannels()) print('Sampling rate: %1.1f sps' % wf.getframerate()) p = pyaudio.PyAudio() def callback(in_data, frame_count, time_info, status): data = wf.readframes(frame_count) #In general do some processing before returning data #Here the data is in signed integer format #In Python it is more comfortable to work with float (float64) return (data, pyaudio.paContinue) stream = p.open(format=p.get_format_from_width(wf.getsampwidth()), channels=wf.getnchannels(), rate=wf.getframerate(), output=True, stream_callback=callback) stream.start_stream() while stream.is_active(): time.sleep(0.1) stream.stop_stream() stream.close() wf.close() p.terminate() """ Explanation: Using Pyaudio: Callback with a wav File Source With Pyaudio you set up a real-time interface between the audio source, a processing algorithm in Python, and a playback means. In the test case below the wave file is read into memory then played back frame-by-frame using a callback function. In this case the signals samples read from memory, or perhaps a buffer, are passed directly to the audio interface. In general processing algorithms may be implemented that operate on each frame. We will explore this in the tutorial. End of explanation """
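"""
Explanation: The callback above hands the bytes from the wave file straight back to PyAudio. As a minimal sketch of the frame-by-frame processing mentioned in the text (an illustrative addition, not part of the original notebook), the variant below unpacks each block of samples into float64, applies a simple gain, and repacks the result before returning it. It assumes the source file is 16-bit PCM, as reported by wf.getsampwidth() above, and the 0.25 gain is an arbitrary value. It can be passed to p.open(..., stream_callback=processing_callback) exactly like the original callback.
End of explanation
"""
import numpy as np

def processing_callback(in_data, frame_count, time_info, status):
    # Read the next block of frames from the wave file, as in the callback above
    data = wf.readframes(frame_count)
    # Unpack the 16-bit PCM bytes into float64 samples for processing
    x = np.frombuffer(data, dtype=np.int16).astype(np.float64)
    # Example processing step: a simple gain (0.25 is an arbitrary choice)
    y = 0.25 * x
    # Clip and repack the processed samples into 16-bit PCM bytes for playback
    out = np.clip(y, -32768, 32767).astype(np.int16).tobytes()
    return (out, pyaudio.paContinue)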
davebshow/DH3501
graph_dbs.ipynb
mit
%matplotlib inline %load_ext gremlin import asyncio import aiogremlin import networkx as nx """ Explanation: Graph Databases and the Humanities End of explanation """ g = nx.scale_free_graph(10) nx.draw_networkx(g) """ Explanation: What's a graph? A binary mathematical structure consisting of nodes and edges: $g = \begin{bmatrix}0 & 1\1 & 0\end{bmatrix}$ End of explanation """ @asyncio.coroutine def stream(gc): results = [] resp = yield from gc.submit("x + x", bindings={"x": 1}) while True: result = yield from resp.stream.read() if result is None: break results.append(result) return results loop = asyncio.get_event_loop() gc = aiogremlin.GremlinClient() results = loop.run_until_complete(stream(gc)) results loop.run_until_complete(gc.close()) # Explicitly close client!!! """ Explanation: Graphs are everywhere these days! Facebook Twitter LinkedIn <img src="img/linkedin.jpg" style="width: 800px; float: left;" /><br> But wait...these graphs are more than ones and zeros... <img style="float: left" src="http://i.guim.co.uk/static/w-620/h--/q-95/sys-images/Film/Pix/pictures/2009/5/8/1241793515016/Keanu-Reeves-in-Bill-and--001.jpg" /> Property graph model <img src="img/property_graph.jpg" style="width: 800px; float: left;" /><br> Why graphs? Graphs are very good at representing complex interrelations between entities... <img src="img/flavors.png" style="width: 800px; float: left;" /><br> Ahn, Y. Y., Ahnert, S. E., Bagrow, J. P., & Barabási, A. L. (2011). Flavor network and the principles of food pairing. Scientific reports, 1. <img src="img/cultural.png" style="width: 800px; float: left;" /> Schich, M., Song, C., Ahn, Y. Y., Mirsky, A., Martino, M., Barabási, A. L., & Helbing, D. (2014). A network framework of cultural history. Science, 345(6196), 558-562. The CulturePlex Lab: Our research The production and diffusion of cultural objects. Towards a Digital Geography of Hispanic Baroque Art <img src="img/dig_geo.png" style="width: 800px; float: left;" /> The Art Space of a Global Community <img src="img/global.png" style="width: 800px; float: left;" /> Why GraphDBs? Relational databases: Inflexible Bad at relationships Lacking in semantic richness Neo4j <img src="img/neo4jlogo.png" style="width: 800px; float: left;" /> Neo4jrestclient by versae - 58977 downloads SylvaDB SylvaDB <img src="img/sylva.png" style="width: 800px; float: left;" /> Landscapes of Castas Painting - Masters Thesis/DH2014 Preliminaries Project - DH2013/Congress 2015 Interested in SylvaDB? Check out Javier de la Rosa's talk tomorrow at 11:00 in Colonel By E015 Interested in the Preliminaries Project? Check out my talk on Wednesday at 1:15 in Colonel By C03 projx Preliminaries Projections required a wide variety of schema transformations and projections. A tedious task to be sure. Enter projx - a graph transformation library written in Python with a Cypher based DSL python subgraph = projection.execute(""" MATCH (p1:Person)-(wild)-(p2:Person) PROJECT (p1)-(p2) METHOD NEWMAN Institution, City SET label = wild.label DELETE wild """) aiogremlin Tinkerpop/Gremlin Ecosystem A standard API for graph databases Gremlin traversal language Tinkerpop enabled backends: Titan Neo4j Gremlin-Elastic Hadoop (Spark/Giraph) All accessed using the Gremlin Server <img style="float:left; width: 500px" src="img/gremlin-server.png"> End of explanation """ %%gremlin graph = TinkerFactory.createModern() g = graph.traversal(standard()) g.V().has('name','marko').out('knows').values('name') """ Explanation: ipython-gremlin End of explanation """
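"""
Explanation: One more traversal over the same toy "modern" graph, as a small sketch of the Gremlin traversal language discussed above (an illustrative addition, not part of the original notebook): instead of asking who marko knows, it asks which software vertices marko created. The graph is rebuilt inside the cell so it does not depend on server-side state from the previous %%gremlin cell; "created" is one of the edge labels in TinkerPop's standard toy graph.
End of explanation
"""
%%gremlin
graph = TinkerFactory.createModern()
g = graph.traversal(standard())
g.V().has('name','marko').out('created').values('name')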
vanheck/blog-notes
QuantTrading/creating_trading_strategy_02-backtest.ipynb
mit
NB_VERSION = 1,0 import sys import datetime import numpy as np import pandas as pd print('Verze notebooku:', '.'.join(map(str, NB_VERSION))) print('Verze pythonu:', '.'.join(map(str, sys.version_info[0:3]))) print('---') import pandas_datareader as pdr import pandas_datareader.data as pdr_web from matplotlib import __version__ as matplotlib_version print('NumPy:', np.__version__) print('Pandas:', pd.__version__) print('pandas-datareader:', pdr.__version__) print('Matplotlib:', matplotlib_version) """ Explanation: Tvorba obchodní strategie - backtest Informace o notebooku a modulech End of explanation """ start_date = datetime.datetime(2008, 1, 1) end_date = datetime.datetime.now() short_period = 30 long_period = 90 ohlc_data = pdr_web.DataReader("AAPL", 'google', start=start_date, end=end_date) signals = pd.DataFrame(index=ohlc_data.index) signals['signal'] = 0.0 signals['short_sma'] = ohlc_data['Close'].rolling(window=short_period, min_periods=1, center=False).mean() signals['long_sma'] = ohlc_data['Close'].rolling(window=long_period, min_periods=1, center=False).mean() signals['signal'][short_period:] = np.where(signals['short_sma'][short_period:] > signals['long_sma'][short_period:], 1.0, 0.0) signals['positions'] = signals['signal'].diff() signals.iloc[73:78] # pozice +1.0 signals.iloc[143:147] # pozice -1.0 """ Explanation: Základní komponenty pro automatický obchodní systém Data handler, který funguje jako prostředík, který se stará o data Strategy - strategie, která generuje nákupní signály dle jednotlivých dat z data handleru Portfolio, které generuje příkazy a spravuje zisk nebo ztrátu (známé jako PnL, od Profit & Loss) Execution handler, který odesílá příkazy brokerovi a zpracovává odpovědi, které získá jako odpověď (tzv. fills) Prvními dvěma body jsem se zabýval v minulých příspěvcích. Teď je na řadě Portfolio. Tento článek se bude zabývat jak si sestavím jednoduché portfolio a zároveň ho pustím na historických datech a získám tzv. backtest. Backtest Backtest je testování strategie na relevatních historických datech. Backtest mi může ve velmi krátkém čase poskytnout informaci, zda je vhodné se dál zaměřit na danou myšlenku obchodní strategie a více ji rozvíjet, a nebo se tímto stylem dál vůbec zabývat. 1. Získání dat a signálů strategie Získám data akciového indexu společnosti Apple a vytvořím obchodní signály. Celý kód jsem převzal z minulého příspěvku Tvorby obchodní strategie. End of explanation """ # Počáteční kapitál initial_capital= float(100000.0) print(f'{initial_capital} $') # Příprava DataFramu positions = pd.DataFrame(index=signals.index).fillna(0.0) positions.head(3) """ Explanation: 2. Nastavím počáteční kapitál A připravím DataFrame pro pozice, indexy na něm nastavím stejné, jako ve vygenerovaných signálech. End of explanation """ # definovaný objem lots = 100 # Nákup o objemu 100 positions['AAPL'] = lots * signals['signal'] """ Explanation: 3. Nasimuluji nákup 100 akcií daného indexu Definuji si svůj objem 100 akcií, které chci koupit při překřížení klouzavého průměru. Následně vynásobím tento objem se získanými signály. V positions['AAPL'] pak budu mít držený objem pro daný den. End of explanation """ # vizualizace nákupu positions.iloc[72:76] # signál = +1.0 """ Explanation: Pro názornost - nákup: 100 akcií nakoupím 18.4.2008 a budu je dále držet. End of explanation """ # vizualizace zpětného prodeje positions.iloc[143:147] # signál = 0.0 """ Explanation: Pro názornost - prodej: Následně 30.7.2008 svoji pozici odprodám a nebudu držet žádné akcie. 
End of explanation """ # Výpočet vývoje hodnoty nakoupených akcií portfolio = positions.multiply(ohlc_data['Close'], axis=0) portfolio.iloc[72:80] """ Explanation: 4. Vývoj hodnoty nakoupených akcií Apple Vynásobím drženou pozici akcií, zavírací cenou každého dne, abych získal, jakou hodnotu má moje investice na konci každého dne. Výsledek si uložím jako své portfolio. End of explanation """ # Získám velikost pozice pro nákupní/prodejní signál pro určitý den pos_diff = positions.diff() pos_diff.iloc[143:147] #pos_diff.iloc[72:76] """ Explanation: 5. Získání velikosti pozice pro nákupní signál Záporná hodnota je Sell a kladná Buy. End of explanation """ # Výpočet celkové aktuální hodnoty obchodovaného portfolia portfolio['holdings'] = (positions.multiply(ohlc_data['Close'], axis=0)).sum(axis=1) portfolio.iloc[72:76] """ Explanation: 6. Celková hodnota držených akcií V bodě 3 se jednalo o hodnotu pouze pro koupené akcie firmy Apple, zde se vytvoří celková hodnota pro držené všechny pozice v portfoliu - v tomto jednoduchém případě jde o stejnou hodnotu, protože nic jiného obchodovat nechci. End of explanation """ # Výpočet hodnoty zbývajícího kapitálu portfolio['cash'] = initial_capital - (pos_diff.multiply(ohlc_data['Close'], axis=0)).sum(axis=1).cumsum() portfolio.iloc[143:147] """ Explanation: 7. Výpočet zbývajícího/nezobchodovaného kapitálu End of explanation """ # Celková hodnota kapitálu portfolio['total'] = portfolio['cash'] + portfolio['holdings'] portfolio.iloc[143:147] """ Explanation: 8. Celková hodnota kapitálu Sečtu hodnotu neobchodovaného kapitálu s hodnotou nakoupených akcií a získám tak celkovou hodnotu kapitálu. End of explanation """ # Návratnost -> procentní změna kapitálu pro každý den portfolio['returns'] = portfolio['total'].pct_change() portfolio.iloc[143:147] """ Explanation: 9. Denní změna vývoje kapitálu v procentech End of explanation """ initial_capital= float(100000.0) positions = pd.DataFrame(index=signals.index).fillna(0.0) positions['AAPL'] = 100 * signals['signal'] portfolio = positions.multiply(ohlc_data['Close'], axis=0) pos_diff = positions.diff() portfolio['holdings'] = (positions.multiply(ohlc_data['Close'], axis=0)).sum(axis=1) portfolio['cash'] = initial_capital - (pos_diff.multiply(ohlc_data['Close'], axis=0)).sum(axis=1).cumsum() portfolio['total'] = portfolio['cash'] + portfolio['holdings'] portfolio['returns'] = portfolio['total'].pct_change() portfolio.tail() """ Explanation: Celý kód portfolia End of explanation """ import matplotlib.pyplot as plt fig = plt.figure(figsize=(15,7)) ax1 = fig.add_subplot(111, ylabel='Hodnota portfolia v $') # graf průběhu equity v USD portfolio['total'].plot(ax=ax1, lw=2.) # vložení vstupů do pozic ax1.plot(portfolio.loc[signals.positions == 1.0].index, portfolio.total[signals.positions == 1.0], '^', markersize=10, color='m') ax1.plot(portfolio.loc[signals.positions == -1.0].index, portfolio.total[signals.positions == -1.0], 'v', markersize=10, color='k') # zobrazení připraveného grafu plt.show() """ Explanation: Zobrazení výsledků v grafu End of explanation """
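A natural next step once portfolio['returns'] and portfolio['total'] exist is to summarise the backtest with a few standard statistics. The sketch below assumes the portfolio DataFrame built above and uses the conventional 252 trading days per year for annualisation; the risk-free rate is assumed to be zero.

```python
import numpy as np

returns = portfolio['returns'].dropna()

total_return = portfolio['total'].iloc[-1] / portfolio['total'].iloc[0] - 1.0
sharpe = np.sqrt(252) * returns.mean() / returns.std()  # risk-free rate assumed 0

# maximum drawdown of the equity curve
running_max = portfolio['total'].cummax()
max_drawdown = (portfolio['total'] / running_max - 1.0).min()

print(f'Total return: {total_return:.2%}')
print(f'Annualized Sharpe ratio: {sharpe:.2f}')
print(f'Maximum drawdown: {max_drawdown:.2%}')
```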
pagutierrez/tutorial-sklearn
notebooks-spanish/18-arboles_y_bosques.ipynb
cc0-1.0
%matplotlib widget import numpy as np import matplotlib.pyplot as plt """ Explanation: Árboles de decisión y bosques End of explanation """ from figures import make_dataset x, y = make_dataset() X = x.reshape(-1, 1) plt.figure() plt.xlabel('Característica X') plt.ylabel('Objetivo y') plt.scatter(X, y); from sklearn.tree import DecisionTreeRegressor reg = DecisionTreeRegressor(max_depth=5) reg.fit(X, y) X_fit = np.linspace(-3, 3, 1000).reshape((-1, 1)) y_fit_1 = reg.predict(X_fit) plt.figure() plt.plot(X_fit.ravel(), y_fit_1, color='blue', label="predicción") plt.plot(X.ravel(), y, '.k', label="datos de entrenamiento") plt.legend(loc="best"); """ Explanation: Ahora vamos a ver una serie de modelos basados en árboles de decisión. Los árboles de decisión son modelos muy intuitivos. Codifican una serie de decisiones del tipo "SI" "ENTONCES", de forma similar a cómo las personas tomamos decisiones. Sin embargo, qué pregunta hacer y cómo proceder a cada respuesta es lo que aprenden a partir de los datos. Por ejemplo, si quisiéramos crear una guía para identificar un animal que encontramos en la naturaleza, podríamos hacer una serie de preguntas: ¿El animal mide más o menos de un metro? más: ¿Tiene cuernos? Sí: ¿Son más largos de 10cm? No: ¿Tiene collar? menos: ¿Tiene dos piernas o cuatro? Dos: ¿Tiene alas? Cuatro: ¿Tiene una cola frondosa? Y así... Esta forma de hacer particiones binarias en base a preguntas es la esencia de los árboles de decisión. Una de las ventajas más importantes de los modelos basados en árboles es que requieren poco procesamiento de los datos. Pueden trabajar con variables de distintos tipos (continuas y discretas) y no les afecta la escala de las variables. Otro beneficio es que los modelos basados en árboles son "no paramétricos", lo que significa que no tienen un conjunto fijo de parámetros a aprender. En su lugar, un modelo de árbol puede ser más y más flexible, si le proporcionamos más datos. En otras palabras, el número de parámetros libres aumenta según aumentan los datos disponibles y no es un valor fijo, como pasa en los modelos lineales. Regresión con árboles de decisión Un árbol de decisión funciona de una forma más o menos similar a los predictores basados en el vecino más cercano. Se utiliza de la siguiente forma: End of explanation """ from sklearn.datasets import make_blobs from sklearn.model_selection import train_test_split from sklearn.tree import DecisionTreeClassifier from figures import plot_2d_separator X, y = make_blobs(centers=[[0, 0], [1, 1]], random_state=61526, n_samples=100) X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42) clf = DecisionTreeClassifier(max_depth=5) clf.fit(X_train, y_train) plt.figure() plot_2d_separator(clf, X, fill=True) plt.scatter(X_train[:, 0], X_train[:, 1], c=np.array(['b', 'r'])[y_train], s=60, alpha=.7, edgecolor='k') plt.scatter(X_test[:, 0], X_test[:, 1], c=np.array(['b', 'r'])[y_test], s=60, edgecolor='k'); """ Explanation: Un único árbol de decisión nos permite estimar la señal de una forma no paraḿetrica, pero está claro que tiene algunos problemas. En algunas regiones, el modelo muestra un alto sesgo e infra-aprende los datos (observa las regiones planas, donde no predecimos correctamente los datos), mientras que en otras el modelo muestra varianza muy alta y sobre aprende los datos (observa los picos pequeños de la superficie obtenida, guiados por puntos de entrenamiento "ruidosos"). 
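To make that bias/variance behaviour more concrete, here is a rough sketch that regenerates the same toy regression data and overlays the predictions of trees with different values of max_depth: a very shallow tree underfits, while an unconstrained tree chases individual noisy points.

```python
from figures import make_dataset
from sklearn.tree import DecisionTreeRegressor

x, y = make_dataset()
X = x.reshape(-1, 1)
X_fit = np.linspace(-3, 3, 1000).reshape((-1, 1))

plt.figure()
plt.plot(X.ravel(), y, '.k', label='training data')
for depth in [1, 5, None]:  # shallow, moderate, fully grown
    tree = DecisionTreeRegressor(max_depth=depth)
    tree.fit(X, y)
    plt.plot(X_fit.ravel(), tree.predict(X_fit), label='max_depth=%s' % depth)
plt.legend(loc='best');
```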
Clasificación con árboles de decisión Los árboles de decisión para clasificación actúan de una forma muy similar, asignando todos los ejemplos de una hoja a la etiqueta mayoritaria en esa hoja: End of explanation """ # %matplotlib inline from figures import plot_tree_interactive plot_tree_interactive() """ Explanation: Hay varios parámetros que controla la complejidad de un árbol, pero uno que es bastante fácil de entender es la máxima profundidad. Esto limita hasta que nivel se puede afinar particionando el espacio, o, lo que es lo mismo, cuantas reglas del tipo "Si-Entonces" podemos preguntar antes de decidir la clase de un patrón. Es importante ajustar este parámetro de la mejor forma posible para árboles y modelos basados en árboles. El gráfico interactivo que encontramos a continuación muestra como se produce infra-ajuste y sobre-ajuste para este modelo. Tener un max_depth=1 es claramente un caso de infra-ajuste, mientras que profundidades de 7 u 8 claramente sobre-ajustan. La máxima profundidad para un árbol en este dataset es 8, ya que, a partir de ahí, todas las ramas tienen ejemplos de un única clase. Es decir, todas las ramas son puras. En el gráfico interactivo, las regiones a las que se les asignan colores azules o rojos indican que la clase predicha para ese región es una o la otra. El grado del color indica la probabilidad para esa clase (más oscuro, mayor probabilidad), mientras que las regiones amarillas tienen la misma probabilidad para las dos clases. Las probabilidades se asocian a la cantidad de ejemplos que hay de cada clase en la región evaluada. End of explanation """ from figures import plot_forest_interactive plot_forest_interactive() """ Explanation: Los árboles de decisión son rápidos de entrenar, fáciles de entender y suele llevar a modelos interpretables. Sin embargo, un solo árbol de decisión a veces tiende al sobre-aprendizaje. Jugando con el gráfico anterior, puedes ver como el modelo empieza a sobre-entrenar antes incluso de que consiga una buena separación de los datos. Por tanto, en la práctica, es más común combinar varios árboles para producir modelos que generalizan mejor. El método más común es el uso de bosques aleatorios y gradient boosted trees. Bosques aleatorios Los bosques aleatorios son simplemente conjuntos de varios árboles, que han sido construidos usando subconjuntos aleatorios diferentes de los datos (muestreados con reemplazamiento) y subconjuntos aleatorios distintos de características (sin reemplazamiento). Esto hace que los árboles sean distintos entre si, y que cada uno aprenda aspectos distintos de los datos. Al final, las predicciones se promedian, llegando a una predicción suavizada que tiende a sobre-entrenar menos. 
End of explanation """ from sklearn.model_selection import GridSearchCV from sklearn.datasets import load_digits from sklearn.ensemble import RandomForestClassifier digits = load_digits() X, y = digits.data, digits.target X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42) rf = RandomForestClassifier(n_estimators=200) parameters = {'max_features':['sqrt', 'log2', 10], 'max_depth':[5, 7, 9]} clf_grid = GridSearchCV(rf, parameters, n_jobs=-1) clf_grid.fit(X_train, y_train) clf_grid.score(X_train, y_train) clf_grid.score(X_test, y_test) clf_grid.best_params_ """ Explanation: Elegir el estimador óptimo usando validación cruzada End of explanation """ from sklearn.ensemble import GradientBoostingRegressor clf = GradientBoostingRegressor(n_estimators=100, max_depth=5, learning_rate=.2) clf.fit(X_train, y_train) print(clf.score(X_train, y_train)) print(clf.score(X_test, y_test)) """ Explanation: Gradient Boosting Otro método útil tipo ensemble es el Boosting. En lugar de utilizar digamos 200 estimadores en paralelo, construimos uno por uno los 200 estimadores, de forma que cada uno refina los resultados del anterior. La idea es que aplicando un conjunto de modelos muy simples, se obtiene al final un modelo final mejor que los modelos individuales. End of explanation """ from sklearn.datasets import load_digits from sklearn.ensemble import GradientBoostingClassifier digits = load_digits() X_digits, y_digits = digits.data, digits.target # divide el dataset y aplica búsqueda grid """ Explanation: <div class="alert alert-success"> <b>Ejercicio: Validación cruzada para Gradient Boosting</b>: <ul> <li> Utiliza una búsqueda *grid* para optimizar los parámetros `learning_rate` y `max_depth` de un *Gradient Boosted Decision tree* para el dataset de los dígitos manuscritos. </li> </ul> </div> End of explanation """ X, y = X_digits[y_digits < 2], y_digits[y_digits < 2] rf = RandomForestClassifier(n_estimators=300, n_jobs=1) rf.fit(X, y) print(rf.feature_importances_) # un valor por característica plt.figure() plt.imshow(rf.feature_importances_.reshape(8, 8), cmap=plt.cm.viridis, interpolation='nearest') """ Explanation: Importancia de las características Las clases RandomForest y GradientBoosting tienen un atributo feature_importances_ una vez que han sido entrenados. Este atributo es muy importante e interesante. Básicamente, cuantifica la contribución de cada característica al rendimiento del árbol. End of explanation """
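One possible way to tackle the Gradient Boosting exercise above; this is only a sketch, not the reference solution shipped with the tutorial, and the parameter grid is an arbitrary illustrative choice.

```python
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.ensemble import GradientBoostingClassifier

X_train, X_test, y_train, y_test = train_test_split(X_digits, y_digits, random_state=42)

param_grid = {'learning_rate': [0.01, 0.1, 0.5],
              'max_depth': [1, 3, 5]}
gb_grid = GridSearchCV(GradientBoostingClassifier(n_estimators=100),
                       param_grid, n_jobs=-1)
gb_grid.fit(X_train, y_train)

print(gb_grid.best_params_)
print(gb_grid.score(X_test, y_test))
```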
kimkipyo/dss_git_kkp
통계, 머신러닝 복습/160502월_1일차_분석 환경, 소개/13.pandas 패키지의 소개.ipynb
mit
s = pd.Series([4, 7, -5, 3]) s s.values type(s.values) s.index type(s.index) """ Explanation: pandas 패키지의 소개 pandas 패키지 Index를 가진 자료형인 R의 data.frame 자료형을 Python에서 구현 참고 자료 http://pandas.pydata.org/ http://pandas.pydata.org/pandas-docs/stable/10min.html http://pandas.pydata.org/pandas-docs/stable/tutorials.html pandas 자료형 Series 시계열 데이터 Index를 가지는 1차원 NumPy Array DataFrame 복수 필드 시계열 데이터 또는 테이블 데이터 Index를 가지는 2차원 NumPy Array Index Label: 각각의 Row/Column에 대한 이름 Name: 인덱스 자체에 대한 이름 <img src="https://docs.google.com/drawings/d/12FKb94RlpNp7hZNndpnLxmdMJn3FoLfGwkUAh33OmOw/pub?w=602&h=446" style="width:60%; margin:0 auto 0 auto;"> Series Row Index를 가지는 자료열 생성 추가/삭제 Indexing 명시적인 Index를 가지지 않는 Series End of explanation """ s * 2 np.exp(s) """ Explanation: Vectorized Operation End of explanation """ s2 = pd.Series([4, 7, -5, 3], index=["d", "b", "a", "c"]) s2 s2.index """ Explanation: 명시적인 Index를 가지는 Series 생성시 index 인수로 Index 지정 Index 원소는 각 데이터에 대한 key 역할을 하는 Label dict End of explanation """ s2['a'] s2['b':'c'] s2[["a", "b"]] """ Explanation: Series Indexing 1: Label Indexing Single Label Label Slicing 마지막 원소 포함 Label을 원소로 가지는 Label (Label을 사용한 List Fancy Indexing) 주어진 순서대로 재배열 End of explanation """ s2[2] s2[1:4] s2[[2, 1]] s2[s2 > 0] """ Explanation: Series Indexing 2: Integer Indexing Single Integer Integer Slicing 마지막 원소를 포함하지 않는 일반적인 Slicing Integer List Indexing (List Fancy Indexing) Boolearn Fancy Indexing End of explanation """ "a" in s2, "e" in s2 for i, j in s2.iteritems(): print(i, j) s2["d":"a"] """ Explanation: dict 연산 End of explanation """ sdata = {'Ohio': 35000, 'Texas': 71000, 'Oregon': 16000, 'Utah': 5000} s3 = pd.Series(sdata) s3 states = ['Califonia', 'Ohio', 'Oregon', 'Texas'] s4 = pd.Series(sdata, index=states) s4 pd.isnull(s) pd.notnull(s4) s4.isnull() s4.notnull() """ Explanation: dict 데이터를 이용한 Series 생성 별도의 index를 지정하면 지정한 자료만으로 생성 End of explanation """ print(s3.values, s4.values) s3.values + s4.values s3 + s4 #Utah가 NaN인 것을 보아하니 값이 둘 다 있을 때만 연산이 되고 하나라도 없으면 NaN으로 처리되나보네 """ Explanation: Index 기준 연산 End of explanation """ s4 s4.name = "population" s4 s4.index.name = "state" s4 """ Explanation: Index 이름 End of explanation """ s s.index s.index = ['Bob', 'Steve', 'Jeff', 'Ryan'] s s.index """ Explanation: Index 변경 End of explanation """ data = { 'state': ['Ohio', 'Ohio', 'Ohio', 'Nevada', 'Nevada'], 'year': [2001, 2001, 2002, 2001, 2002], 'pop': [1.5, 1.7, 3.6, 2.4, 2.9] } df = pd.DataFrame(data) df pd.DataFrame(data, columns=['year', 'state', 'pop']) df.dtypes """ Explanation: DataFrame Multi-Series 동일한 Row 인덱스를 사용하는 복수 Series Series를 value로 가지는 dict 2차원 행렬 DataFrame을 행렬로 생각하면 각 Series는 행렬의 Column의 역할 NumPy Array와 차이점 각 Column(Series)마다 type이 달라도 된다. Column Index (Row) Index와 Column Index를 가진다. 
각 Column(Series)에 Label 지정 가능 (Row) Index와 Column Label을 동시에 사용하여 자료 접근 가능 End of explanation """ df2 = pd.DataFrame(data, columns=['year', 'state', 'pop', 'debt'], index=['one', 'two', 'three', 'four', 'five']) df2 """ Explanation: 명시적인 Column/Row Index를 가지는 DataFrame End of explanation """ df["state"] type(df["state"]), type([df["state"]]) [df["state"]] df.state """ Explanation: Single Column Access End of explanation """ df2['debt'] = 16.5, 16.2, 16.3, 16.7, 16.2 df2 df2['debt'] = 16.5 df2 df2['debt'] = np.arange(5) df2 df2['debt'] = pd.DataFrame([-1.2, -1.5, -1.7], index=['two', 'four', 'five']) df2 """ Explanation: Cloumn Data Update End of explanation """ df2['eastern'] = df2.state == 'Ohio' df2 """ Explanation: Add Column End of explanation """ del df2["eastern"] df2 """ Explanation: Delete Column End of explanation """ x = [3, 6, 1, 4] sorted(x) x x.sort() x """ Explanation: inplace 옵션 함수/메소드는 두 가지 종류 그 객체 자체를 변형 해당 객체는 그대로 두고 변형된 새로운 객체를 출력 DataFrame 메소드 대부분은 inplace 옵션을 가짐 inplace=True이면 출력을 None으로 하고 객체 자체를 변형 inplace=False이면 객체 자체는 보존하고 변형된 새로운 객체를 출력 End of explanation """ s = pd.Series(np.arange(5.), index=['a', 'b', 'c', 'd', 'e']) s s2 = s.drop('c') s2 s s.drop(["b", "c"]) df = pd.DataFrame(np.arange(16).reshape((4, 4)), index=['Ohio', 'Colorado', 'Utah', 'New York'], columns=['one', 'two', 'three', 'four']) df df.drop(['Colorado', 'Ohio']) df.drop('two', axis=1) df.drop(['two', 'four'], axis=1) """ Explanation: drop 메소드를 사용한 Row/Column 삭제 del 함수 inplace 연산 drop 메소드 삭제된 Series/DataFrame 출력 Series는 Row 삭제 DataFrame은 axis 인수로 Row/Column 선택 axis=0(디폴트): Row axis=1: Column End of explanation """ pop = { 'Nevada': { 2001: 2.4, 2002: 2.9 }, 'Ohio': { 2000: 1.5, 2001: 1.7, 2002: 3.6 } } df3 = pd.DataFrame(pop) df3 """ Explanation: Nested dict를 사용한 DataFrame 생성 End of explanation """ pdata = { 'Ohio': df3['Ohio'][:-1], 'Nevada': df3['Nevada'][:3] } pd.DataFrame(pdata) """ Explanation: Series dict를 사용한 DataFrame 생성 End of explanation """ df3.values df2.values df3.values df2.values """ Explanation: NumPy array로 변환 End of explanation """ df2 df2["year"] df2.year df2[["state", "debt", "year"]] df2[["year"]] """ Explanation: DataFrame의 Column Indexing Single Label key Single Label attribute Label List Fancy Indexing End of explanation """
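To complement the column indexing above, row selection on the same DataFrame can be sketched with .loc, .iloc and boolean masks; the index labels and column names below are the ones already defined for df2.

```python
# label-based row selection
df2.loc['one']
df2.loc['two':'four', ['state', 'pop']]

# position-based row selection
df2.iloc[0]
df2.iloc[1:3, 0:2]

# boolean row selection
df2[df2['pop'] > 2.0]
```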
mne-tools/mne-tools.github.io
stable/_downloads/d12911920e4d160c9fd8c97cffdda6b7/time_frequency_erds.ipynb
bsd-3-clause
# Authors: Clemens Brunner <clemens.brunner@gmail.com> # Felix Klotzsche <klotzsche@cbs.mpg.de> # # License: BSD-3-Clause """ Explanation: Compute and visualize ERDS maps This example calculates and displays ERDS maps of event-related EEG data. ERDS (sometimes also written as ERD/ERS) is short for event-related desynchronization (ERD) and event-related synchronization (ERS) :footcite:PfurtschellerLopesdaSilva1999. Conceptually, ERD corresponds to a decrease in power in a specific frequency band relative to a baseline. Similarly, ERS corresponds to an increase in power. An ERDS map is a time/frequency representation of ERD/ERS over a range of frequencies :footcite:GraimannEtAl2002. ERDS maps are also known as ERSP (event-related spectral perturbation) :footcite:Makeig1993. In this example, we use an EEG BCI data set containing two different motor imagery tasks (imagined hand and feet movement). Our goal is to generate ERDS maps for each of the two tasks. First, we load the data and create epochs of 5s length. The data set contains multiple channels, but we will only consider C3, Cz, and C4. We compute maps containing frequencies ranging from 2 to 35Hz. We map ERD to red color and ERS to blue color, which is customary in many ERDS publications. Finally, we perform cluster-based permutation tests to estimate significant ERDS values (corrected for multiple comparisons within channels). End of explanation """ import numpy as np import matplotlib.pyplot as plt from matplotlib.colors import TwoSlopeNorm import pandas as pd import seaborn as sns import mne from mne.datasets import eegbci from mne.io import concatenate_raws, read_raw_edf from mne.time_frequency import tfr_multitaper from mne.stats import permutation_cluster_1samp_test as pcluster_test """ Explanation: As usual, we import everything we need. End of explanation """ fnames = eegbci.load_data(subject=1, runs=(6, 10, 14)) raw = concatenate_raws([read_raw_edf(f, preload=True) for f in fnames]) raw.rename_channels(lambda x: x.strip('.')) # remove dots from channel names events, _ = mne.events_from_annotations(raw, event_id=dict(T1=2, T2=3)) """ Explanation: First, we load and preprocess the data. We use runs 6, 10, and 14 from subject 1 (these runs contains hand and feet motor imagery). End of explanation """ tmin, tmax = -1, 4 event_ids = dict(hands=2, feet=3) # map event IDs to tasks epochs = mne.Epochs(raw, events, event_ids, tmin - 0.5, tmax + 0.5, picks=('C3', 'Cz', 'C4'), baseline=None, preload=True) """ Explanation: Now we can create 5s epochs around events of interest. End of explanation """ freqs = np.arange(2, 36) # frequencies from 2-35Hz vmin, vmax = -1, 1.5 # set min and max ERDS values in plot baseline = [-1, 0] # baseline interval (in s) cnorm = TwoSlopeNorm(vmin=vmin, vcenter=0, vmax=vmax) # min, center & max ERDS kwargs = dict(n_permutations=100, step_down_p=0.05, seed=1, buffer_size=None, out_type='mask') # for cluster test """ Explanation: Here we set suitable values for computing ERDS maps. 
End of explanation """ tfr = tfr_multitaper(epochs, freqs=freqs, n_cycles=freqs, use_fft=True, return_itc=False, average=False, decim=2) tfr.crop(tmin, tmax).apply_baseline(baseline, mode="percent") for event in event_ids: # select desired epochs for visualization tfr_ev = tfr[event] fig, axes = plt.subplots(1, 4, figsize=(12, 4), gridspec_kw={"width_ratios": [10, 10, 10, 1]}) for ch, ax in enumerate(axes[:-1]): # for each channel # positive clusters _, c1, p1, _ = pcluster_test(tfr_ev.data[:, ch], tail=1, **kwargs) # negative clusters _, c2, p2, _ = pcluster_test(tfr_ev.data[:, ch], tail=-1, **kwargs) # note that we keep clusters with p <= 0.05 from the combined clusters # of two independent tests; in this example, we do not correct for # these two comparisons c = np.stack(c1 + c2, axis=2) # combined clusters p = np.concatenate((p1, p2)) # combined p-values mask = c[..., p <= 0.05].any(axis=-1) # plot TFR (ERDS map with masking) tfr_ev.average().plot([ch], cmap="RdBu", cnorm=cnorm, axes=ax, colorbar=False, show=False, mask=mask, mask_style="mask") ax.set_title(epochs.ch_names[ch], fontsize=10) ax.axvline(0, linewidth=1, color="black", linestyle=":") # event if ch != 0: ax.set_ylabel("") ax.set_yticklabels("") fig.colorbar(axes[0].images[-1], cax=axes[-1]).ax.set_yscale("linear") fig.suptitle(f"ERDS ({event})") plt.show() """ Explanation: Finally, we perform time/frequency decomposition over all epochs. End of explanation """ df = tfr.to_data_frame(time_format=None) df.head() """ Explanation: Similar to ~mne.Epochs objects, we can also export data from ~mne.time_frequency.EpochsTFR and ~mne.time_frequency.AverageTFR objects to a :class:Pandas DataFrame &lt;pandas.DataFrame&gt;. By default, the time column of the exported data frame is in milliseconds. 
Here, to be consistent with the time-frequency plots, we want to keep it in seconds, which we can achieve by setting time_format=None: End of explanation """ df = tfr.to_data_frame(time_format=None, long_format=True) # Map to frequency bands: freq_bounds = {'_': 0, 'delta': 3, 'theta': 7, 'alpha': 13, 'beta': 35, 'gamma': 140} df['band'] = pd.cut(df['freq'], list(freq_bounds.values()), labels=list(freq_bounds)[1:]) # Filter to retain only relevant frequency bands: freq_bands_of_interest = ['delta', 'theta', 'alpha', 'beta'] df = df[df.band.isin(freq_bands_of_interest)] df['band'] = df['band'].cat.remove_unused_categories() # Order channels for plotting: df['channel'] = df['channel'].cat.reorder_categories(('C3', 'Cz', 'C4'), ordered=True) g = sns.FacetGrid(df, row='band', col='channel', margin_titles=True) g.map(sns.lineplot, 'time', 'value', 'condition', n_boot=10) axline_kw = dict(color='black', linestyle='dashed', linewidth=0.5, alpha=0.5) g.map(plt.axhline, y=0, **axline_kw) g.map(plt.axvline, x=0, **axline_kw) g.set(ylim=(None, 1.5)) g.set_axis_labels("Time (s)", "ERDS (%)") g.set_titles(col_template="{col_name}", row_template="{row_name}") g.add_legend(ncol=2, loc='lower center') g.fig.subplots_adjust(left=0.1, right=0.9, top=0.9, bottom=0.08) """ Explanation: This allows us to use additional plotting functions like :func:seaborn.lineplot to plot confidence bands: End of explanation """ df_mean = (df.query('time > 1') .groupby(['condition', 'epoch', 'band', 'channel'])[['value']] .mean() .reset_index()) g = sns.FacetGrid(df_mean, col='condition', col_order=['hands', 'feet'], margin_titles=True) g = (g.map(sns.violinplot, 'channel', 'value', 'band', n_boot=10, palette='deep', order=['C3', 'Cz', 'C4'], hue_order=freq_bands_of_interest, linewidth=0.5).add_legend(ncol=4, loc='lower center')) g.map(plt.axhline, **axline_kw) g.set_axis_labels("", "ERDS (%)") g.set_titles(col_template="{col_name}", row_template="{row_name}") g.fig.subplots_adjust(left=0.1, right=0.9, top=0.9, bottom=0.3) """ Explanation: Having the data as a DataFrame also facilitates subsetting, grouping, and other transforms. Here, we use seaborn to plot the average ERDS in the motor imagery interval as a function of frequency band and imagery condition: End of explanation """
danijel3/ASRDemos
notebooks/MLP_TIMIT_ctx10.ipynb
apache-2.0
import os os.environ['CUDA_VISIBLE_DEVICES']='1' import numpy as np from keras.models import Sequential from keras.layers.core import Dense, Activation, Reshape from keras.optimizers import Adam, SGD from IPython.display import clear_output from tqdm import * """ Explanation: Using the frame context in the TIMIT MLP model This notebook is an extension of the MLP_TIMIT demo which takes a context of many frames at the input to model the same output. So if we have a phoneme, say 'a', instead of just using one vector of 26 features to recognize it, we provide several frames of 26 features before and after the one we are looking at, in order to capture its context. This technique helps greatly improve the quality of the solution, but isn't as scalable as some other solutions. First of all, the greater the context, the more parameters we need to determine. The bigger the model, the more data is required to accurately estimate all the parameters. One solution would be to use tied weights, rather than a classical dense layer, in such a way that different frames (within the context) use the same set of weights, so the number of weights is kept constant even though we use a larger context. Furthermore, the model assumes a context of a specific size. It would be nice if the size were unlimited. Again, this would probably make the model impractical if we use a standard dense layer, but could work with the tied weights technique. Another way of looking at the tied weights solution that has an unlimited context is simply as an RNN. In fact, most implementations of BPTT (used to train RNNs) simply unroll the training loop in time and treat the model as a simple MLP with tied weights. This works quite well, but has other issues that are solved using more advanced topologies (LSTM, GRU) which will be discussed in other notebooks. In this notebook, we will take an MLP which has an input context of 10 frames on the left and the right side of the analyzed frame. This is done in order to reproduce the results from the same paper and thesis as in the MLP_TIMIT notebook.
We begin with the same introductory code as in the previous notebook: End of explanation """ import sys sys.path.append('../python') from data import Corpus, History train=Corpus('../data/TIMIT_train.hdf5',load_normalized=True,merge_utts=False) dev=Corpus('../data/TIMIT_dev.hdf5',load_normalized=True,merge_utts=False) test=Corpus('../data/TIMIT_test.hdf5',load_normalized=True,merge_utts=False) tr_in,tr_out_dec=train.get() dev_in,dev_out_dec=dev.get() tst_in,tst_out_dec=test.get() for u in range(tr_in.shape[0]): tr_in[u]=tr_in[u][:,:26] for u in range(dev_in.shape[0]): dev_in[u]=dev_in[u][:,:26] for u in range(tst_in.shape[0]): tst_in[u]=tst_in[u][:,:26] """ Explanation: Loading the data End of explanation """ input_dim=tr_in[0].shape[1] output_dim=61 hidden_num=250 epoch_num=1000 """ Explanation: Global training parameters End of explanation """ def dec2onehot(dec): ret=[] for u in dec: assert np.all(u<output_dim) num=u.shape[0] r=np.zeros((num,output_dim)) r[range(0,num),u]=1 ret.append(r) return np.array(ret) tr_out=dec2onehot(tr_out_dec) dev_out=dec2onehot(dev_out_dec) tst_out=dec2onehot(tst_out_dec) """ Explanation: 1-hot output End of explanation """ #adds context to data ctx_fr=10 ctx_size=2*ctx_fr+1 def ctx(data): ret=[] for utt in data: l=utt.shape[0] ur=[] for t in range(l): f=[] for s in range(t-ctx_fr,t+ctx_fr+1): if(s<0): s=0 if(s>=l): s=l-1 f.append(utt[s,:]) ur.append(f) ret.append(np.array(ur)) return np.array(ret) tr_in=ctx(tr_in) dev_in=ctx(dev_in) tst_in=ctx(tst_in) print tr_in.shape print tr_in[0].shape """ Explanation: Adding frame context Here we add the frame context. The number 10 is taken from the paper thesis as: symmetrical time-windows from 0 to 10 frames. Now I'm not 100% sure (and it's not explained anywhere), but I assume this means 10 frames on the left and 10 on the right (i.e. symmetrical), which gives 21 frames alltogether. It's written elsewhere that 0 means no context and uses one frame. In Keras/Python we implement this in a slightly roundabout way: instead of duplicating the data explicitly, we merely make a 3D array that contains the references to the same data ranges in different cells. In other words, if we make an array where each utterance has a shape $(time_steps, context*frame_size)$, I think it would take more memory than by using the shape $(time_steps,context,frame_size)$, because in the latter case the same vector (located somewhere in the memory) can be reused in different cotenxts and time steps. End of explanation """ model = Sequential() model.add(Reshape(input_shape=(ctx_size,input_dim),target_shape=(ctx_size*input_dim,))) model.add(Dense(output_dim=hidden_num)) model.add(Activation('sigmoid')) model.add(Dense(output_dim=output_dim)) model.add(Activation('softmax')) optimizer= SGD(lr=1e-3,momentum=0.9,nesterov=False) loss='categorical_crossentropy' metrics=['accuracy'] model.compile(loss=loss, optimizer=optimizer,metrics=['accuracy']) """ Explanation: Model definition Since we have an input as a 3D shape, we use a Reshape layer at the start of the model to convert the input frames into a flat vector. Again, this is to save a little memory at the cost of time it takes to reshape the input. Not sure if its worth it or if it even works as intended (ie. saving memory). Evertyhing else here is the same as with the standard MLP except for the learning rate which has to be lower in order to reproduce the same results as in the thesis. 
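The introduction mentioned tied weights as a way to keep the number of input-layer parameters independent of the context size. A rough sketch of that idea is shown below: the same Dense layer is applied to every frame of the context via TimeDistributed, so the input-to-hidden weights are shared across frames (the layer after flattening still grows with the context, so this is only a partial realisation of the idea). The exact import paths depend on the Keras version, and this model is not trained or used for the results below.

```python
from keras.models import Sequential
from keras.layers import TimeDistributed, Dense, Flatten, Activation

tied_model = Sequential()
# one shared set of Dense weights, applied to each of the ctx_size frames
tied_model.add(TimeDistributed(Dense(hidden_num), input_shape=(ctx_size, input_dim)))
tied_model.add(Activation('sigmoid'))
tied_model.add(Flatten())
tied_model.add(Dense(output_dim))
tied_model.add(Activation('softmax'))
tied_model.compile(loss='categorical_crossentropy', optimizer='sgd', metrics=['accuracy'])
```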
End of explanation """ from random import shuffle tr_hist=History('Train') dev_hist=History('Dev') tst_hist=History('Test') tr_it=range(tr_in.shape[0]) for e in range(epoch_num): print 'Epoch #{}/{}'.format(e+1,epoch_num) sys.stdout.flush() shuffle(tr_it) for u in tqdm(tr_it): l,a=model.train_on_batch(tr_in[u],tr_out[u]) tr_hist.r.addLA(l,a,tr_out[u].shape[0]) clear_output() tr_hist.log() for u in range(dev_in.shape[0]): l,a=model.test_on_batch(dev_in[u],dev_out[u]) dev_hist.r.addLA(l,a,dev_out[u].shape[0]) dev_hist.log() for u in range(tst_in.shape[0]): l,a=model.test_on_batch(tst_in[u],tst_out[u]) tst_hist.r.addLA(l,a,tst_out[u].shape[0]) tst_hist.log() print 'Done!' """ Explanation: Training End of explanation """ import matplotlib.pyplot as P %matplotlib inline fig,ax=P.subplots(2,sharex=True,figsize=(12,10)) ax[0].set_title('Loss') ax[0].plot(tr_hist.loss,label='Train') ax[0].plot(dev_hist.loss,label='Dev') ax[0].plot(tst_hist.loss,label='Test') ax[0].legend() ax[0].set_ylim((0.8,2)) ax[1].set_title('PER %') ax[1].plot(100*(1-np.array(tr_hist.acc)),label='Train') ax[1].plot(100*(1-np.array(dev_hist.acc)),label='Dev') ax[1].plot(100*(1-np.array(tst_hist.acc)),label='Test') ax[1].legend() ax[1].set_ylim((32,42)) """ Explanation: Plotting progress These can be handy for debugging. If you draw the graph with using different hyperparameters you can establish if it underfits (i.e. the values are still decreasing at the end of the training) or overfits (the minimum is reached earlier and dev/test values begin increasing as train continues to decrease). In this case, you can see hoe the graph changes with different learning rate values. It's impossible to achieve a single optimal value, but this one seems to be fairly good. End of explanation """ print 'Min test PER: {:%}'.format(1-np.max(tst_hist.acc)) print 'Min dev PER epoch: #{}'.format((np.argmax(dev_hist.acc)+1)) print 'Test PER on min dev: {:%}'.format(1-tst_hist.acc[np.argmax(dev_hist.acc)]) """ Explanation: Final result Here we reached the value from the thesis just fine, but we used a different learning rate. For some reason, the value from the thesis underfits by a great margin. Not sure if it's a mistake in the thesis or a consequence End of explanation """ wer=0.36999999 print 'Epoch where PER reached {:%}: #{}'.format(wer,np.where((1-np.array(tst_hist.acc))<wer)[0][0]) """ Explanation: Just as before we can check what epoch we reached the optimum. End of explanation """ err=0 cnt=0 for u in range(tst_in.shape[0]): p=model.predict_on_batch(tst_in[u]) c=np.argmax(p,axis=-1) err+=np.sum(c!=tst_out_dec[u]) cnt+=tst_out[u].shape[0] print 'Manual PER: {:%}'.format(err/float(cnt)) print 'PER using average: {:%}'.format(1-tst_hist.acc[-1]) """ Explanation: Checking the accuracy calculation When computing the final loss value, we simply measure the mean of the consecutive batch loss values, because we assume that weight updates are performed once per batch and the mean loss of the whole batch is used in the cross entropy to asses the model (just like in MSE). With accuracy, however, it's not that simple as using the mean of all the batch accuracies. What we use instad is a weighted average where the weights are determined by the length of each batch/uterance. To make sure this is correct, I do a simple experiment here where I manually count the errors and sample amounts using the predict method. We can see that the values are identical, so using the weighted average is fine. End of explanation """
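Since the test PER is reported at the epoch with the best dev accuracy, in practice it also helps to keep the weights from that epoch. A minimal sketch of how the training loop above could be extended; the file name is hypothetical and saving weights requires h5py.

```python
best_dev_acc = -1.0

# ...inside the epoch loop, right after the dev set has been evaluated:
dev_acc = dev_hist.acc[-1]
if dev_acc > best_dev_acc:
    best_dev_acc = dev_acc
    model.save_weights('mlp_ctx10_best.h5', overwrite=True)  # hypothetical file name

# after training, restore the best weights before the final test evaluation
model.load_weights('mlp_ctx10_best.h5')
```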
obust/Pandas-Tutorial
Case Study - MovieLens.ipynb
mit
import pandas as pd import numpy as np import matplotlib.pyplot as plt pd.set_option('max_columns', 50) # pass in column names for each CSV u_cols = ['user_id', 'age', 'sex', 'occupation', 'zip_code'] users = pd.read_csv('data/ml-100k/u.user', sep='|', names=u_cols) r_cols = ['user_id', 'movie_id', 'rating', 'unix_timestamp'] ratings = pd.read_csv('data/ml-100k/u.data', sep='\t', names=r_cols) # the movies file contains columns indicating the movie's genres # let's only load the first five columns of the file with usecols m_cols = ['movie_id', 'title', 'release_date', 'video_release_date', 'imdb_url'] movies = pd.read_csv('data/ml-100k/u.item', sep='|', names=m_cols, usecols=range(5)) # create one merged DataFrame movie_ratings = pd.merge(movies, ratings) lens = pd.merge(movie_ratings, users) """ Explanation: Case Study - MovieLens To show pandas in a more "applied" sense, let's use it to answer some questions about the MovieLens dataset.<br> The dataset contains 100,000 ratings made by 943 users on 1,682 movies. The MovieLens data is a good example for this because it has a lot of inter-relationships that will require joining/grouping: - the datasets users and ratings are linked together by a key (in this case, the user_id and movie_id). - a rating requires both a user and a movie. - a user can be associated with zero or many ratings and movies. - a movie can be rated zero or many times, by a number of different users.
End of explanation """ most_50 = lens.groupby('movie_id').size().order(ascending=False)[:50] """ Explanation: Which movies are most controversial amongst different age ? Bin users into age groups using pandas.cut() Split the DataFrame into groups by age groups Apply the size and mean to each group using .agg()method Order the results by average rating in descending order Limiting our population going forward<br> Going forward, let's only look at the 50 most rated movies. Let's make a Series of movies that meet this threshold so we can use it for filtering later. End of explanation """ users.age.hist(bins=30) plt.title("Distribution of users' ages") plt.ylabel('count of users') plt.xlabel('age'); labels = ['0-9', '10-19', '20-29', '30-39', '40-49', '50-59', '60-69', '70-79'] bins = range(0, 81, 10) # [0, 10, 20, 30, 40, 50, 60, 70, 80] lens['age_group'] = pd.cut(lens.age, bins, right=False, labels=labels) print lens[['age', 'age_group']].drop_duplicates()[:5] # preview of age bin """ Explanation: Let's look at how these movies are viewed across different age groups. First, let's look at how age is distributed amongst our users. End of explanation """ print lens.groupby('age_group').agg({'rating': [np.size, np.mean]}) """ Explanation: Now we can now compare ratings across age groups. End of explanation """ lens.set_index('movie_id', inplace=True) by_age = lens.ix[most_50.index].groupby(['title', 'age_group']) by_age.rating.mean().head(15) """ Explanation: Young users seem a bit more critical than other age groups. Let's look at how the 50 most rated movies are viewed across each age group. We can use the most_50 Series we created earlier for filtering. End of explanation """ by_age.rating.mean().unstack(1)[10:20] """ Explanation: Notice that both the title and age group are indexes here, with the average rating value being a Series. This is going to produce a really long list of values. Wouldn't it be nice to see the data as a table? Each title as a row, each age group as a column, and the average rating in each cell. Behold! The magic of unstack! End of explanation """ lens.reset_index('movie_id', inplace=True) pivoted = pd.pivot_table(lens, values='rating', index=['movie_id', 'title'], ... columns=['sex'], fill_value=0) print pivoted.head() pivoted['diff'] = pivoted.M - pivoted.F print pivoted.head() pivoted.reset_index('movie_id', inplace=True) disagreements = pivoted[pivoted.movie_id.isin(most_50.index)]['diff'] disagreements.order().plot(kind='barh', figsize=[9, 15]) plt.title('Male vs. Female Avg. Ratings\n(Difference > 0 = Favored by Men)') plt.ylabel('Title') plt.xlabel('Average Rating Difference'); """ Explanation: unstack, well, unstacks the specified level of a MultiIndex (by default, groupby turns the grouped field into an index - since we grouped by two fields, it became a MultiIndex). We unstacked the second index (remember that Python uses 0-based indexes), and then filled in NULL values with 0. If we would have used: python by_age.rating.mean().unstack(0).fillna(0) We would have had our age groups as rows and movie titles as columns. Which movies do men and women most disagree on? DataFrame's have a pivot_table method that makes these kinds of operations much easier (and less verbose). End of explanation """
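The summary at the top also asked which movies are most controversial amongst different age groups, and that question is never answered explicitly. One rough way to sketch an answer, reusing the by_age grouping from above, is to measure how much the per-age-group mean ratings spread out, for example via their standard deviation across age groups.

```python
# mean rating per (title, age_group) as a table: titles as rows, age groups as columns
by_age_means = by_age.rating.mean().unstack(1)

# movies whose average rating varies the most across age groups
age_disagreement = by_age_means.std(axis=1).order(ascending=False)
print(age_disagreement[:10])
```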
dtamayo/reboundx
ipython_examples/ModifyMass.ipynb
gpl-3.0
import rebound import reboundx import numpy as np M0 = 1. # initial mass of star def makesim(): sim = rebound.Simulation() sim.G = 4*np.pi**2 # use units of AU, yrs and solar masses sim.add(m=M0) sim.add(a=1.) sim.add(a=2.) sim.add(a=3.) sim.move_to_com() return sim %matplotlib inline sim = makesim() ps = sim.particles fig = rebound.OrbitPlot(sim) """ Explanation: Adding exponential mass loss/growth You can always modify the mass of particles between calls to sim.integrate. However, if you want to apply the mass/loss growth every timestep within calls to sim.integrate, you should use this. We begin by setting up a system with 3 planets. End of explanation """ rebx = reboundx.Extras(sim) modifymass = rebx.load_operator("modify_mass") rebx.add_operator(modifymass) """ Explanation: We now add mass loss through REBOUNDx: End of explanation """ ps[0].params["tau_mass"] = -1.e4 """ Explanation: Now we set the e-folding mass loss/growth rate. Positive timescales give growth, negative timescales loss. Here we have the star lose mass with an e-folding timescale of $10^4$ yrs. End of explanation """ Nout = 1000 mass = np.zeros(Nout) times = np.linspace(0., 1.e4, Nout) for i, time in enumerate(times): sim.integrate(time) mass[i] = sim.particles[0].m fig = rebound.OrbitPlot(sim) """ Explanation: Now we integrate for one e-folding timescale, and plot the resulting system: End of explanation """ pred = M0*np.e**(times/ps[0].params["tau_mass"]) import matplotlib.pyplot as plt fig = plt.figure(figsize=(15,5)) ax = plt.subplot(111) ax.plot(times,mass, label='simulation') ax.plot(times,pred, label='predicted') ax.set_xlabel("Time (yrs)", fontsize=24) ax.set_ylabel("Star's Mass (MSun)", fontsize=24) ax.legend(fontsize=24) """ Explanation: We see that after the mass of the star has decayed by a factor of e, the scale of the system has expanded by the corresponding factor, as one would expect. If we plot the mass of the star vs time, compared to an exponential decay, the two overlap. End of explanation """
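The claim that the system expands by the same factor as the mass lost can be checked numerically: for mass loss that is slow compared to the orbital periods, the product of semi-major axis and stellar mass should stay roughly constant, so each semi-major axis should grow by about M0/M(t). A small sketch of that check, run right after the integration above:

```python
a_initial = np.array([1., 2., 3.])  # the semi-major axes the planets were added with
a_final = np.array([sim.particles[i].a for i in range(1, sim.N)])

# slow (adiabatic) mass loss predicts a * M_star ~ const
a_expected = a_initial * M0 / sim.particles[0].m
print(a_final)
print(a_expected)
```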
mne-tools/mne-tools.github.io
0.24/_downloads/b99fcf919e5d2f612fcfee22adcfc330/40_autogenerate_metadata.ipynb
bsd-3-clause
from pathlib import Path import matplotlib.pyplot as plt import mne data_dir = Path(mne.datasets.erp_core.data_path()) infile = data_dir / 'ERP-CORE_Subject-001_Task-Flankers_eeg.fif' raw = mne.io.read_raw(infile, preload=True) raw.filter(l_freq=0.1, h_freq=40) raw.plot(start=60) # extract events all_events, all_event_id = mne.events_from_annotations(raw) """ Explanation: Auto-generating Epochs metadata This tutorial shows how to auto-generate metadata for ~mne.Epochs, based on events via mne.epochs.make_metadata. We are going to use data from the erp-core-dataset (derived from :footcite:Kappenman2021). This is EEG data from a single participant performing an active visual task (Eriksen flanker task). <div class="alert alert-info"><h4>Note</h4><p>If you wish to skip the introductory parts of this tutorial, you may jump straight to `tut-autogenerate-metadata-ern` after completing the data import and event creation in the `tut-autogenerate-metadata-preparation` section.</p></div> This tutorial is loosely divided into two parts: We will first focus on producing ERP time-locked to the visual stimulation, conditional on response correctness and response time in order to familiarize ourselves with the ~mne.epochs.make_metadata function. After that, we will calculate ERPs time-locked to the responses – again, conditional on response correctness – to visualize the error-related negativity (ERN), i.e. the ERP component associated with incorrect behavioral responses. Preparation Let's start by reading, filtering, and producing a simple visualization of the raw data. The data is pretty clean and contains very few blinks, so there's no need to apply sophisticated preprocessing and data cleaning procedures. We will also convert the ~mne.Annotations contained in this dataset to events by calling mne.events_from_annotations. End of explanation """ # metadata for each epoch shall include events from the range: [0.0, 1.5] s, # i.e. starting with stimulus onset and expanding beyond the end of the epoch metadata_tmin, metadata_tmax = 0.0, 1.5 # auto-create metadata # this also returns a new events array and an event_id dictionary. we'll see # later why this is important metadata, events, event_id = mne.epochs.make_metadata( events=all_events, event_id=all_event_id, tmin=metadata_tmin, tmax=metadata_tmax, sfreq=raw.info['sfreq']) # let's look at what we got! metadata """ Explanation: Creating metadata from events The basics of make_metadata Now it's time to think about the time windows to use for epoching and metadata generation. It is important to understand that these time windows need not be the same! That is, the automatically generated metadata might include information about events from only a fraction of the epochs duration; or it might include events that occurred well outside a given epoch. Let us look at a concrete example. In the Flankers task of the ERP CORE dataset, participants were required to respond to visual stimuli by pressing a button. We're interested in looking at the visual evoked responses (ERPs) of trials with correct responses. Assume that based on literature studies, we decide that responses later than 1500 ms after stimulus onset are to be considered invalid, because they don't capture the neuronal processes of interest here. 
We can approach this in the following way with the help of mne.epochs.make_metadata: End of explanation """ row_events = ['stimulus/compatible/target_left', 'stimulus/compatible/target_right', 'stimulus/incompatible/target_left', 'stimulus/incompatible/target_right'] metadata, events, event_id = mne.epochs.make_metadata( events=all_events, event_id=all_event_id, tmin=metadata_tmin, tmax=metadata_tmax, sfreq=raw.info['sfreq'], row_events=row_events) metadata """ Explanation: Specifying time-locked events We can see that the generated table has 802 rows, each one corresponding to an individual event in all_events. The first column, event_name, contains the name of the respective event around which the metadata of that specific column was generated – we'll call that the "time-locked event", because we'll assign it time point zero. The names of the remaining columns correspond to the event names specified in the all_event_id dictionary. These columns contain floats; the values represent the latency of that specific event in seconds, relative to the time-locked event (the one mentioned in the event_name column). For events that didn't occur within the given time window, you'll see a value of NaN, simply indicating that no event latency could be extracted. Now, there's a problem here. We want investigate the visual ERPs only, conditional on responses. But the metadata that was just created contains one row for every event, including responses. While we could create epochs for all events, allowing us to pass those metadata, and later subset the created events, there's a more elegant way to handle things: ~mne.epochs.make_metadata has a row_events parameter that allows us to specify for which events to create metadata rows, while still creating columns for all events in the event_id dictionary. Because the metadata, then, only pertains to a subset of our original events, it's important to keep the returned events and event_id around for later use when we're actually going to create our epochs, to ensure that metadata, events, and event descriptions stay in sync. End of explanation """ keep_first = 'response' metadata, events, event_id = mne.epochs.make_metadata( events=all_events, event_id=all_event_id, tmin=metadata_tmin, tmax=metadata_tmax, sfreq=raw.info['sfreq'], row_events=row_events, keep_first=keep_first) # visualize response times regardless of side metadata['response'].plot.hist(bins=50, title='Response Times') # the "first_response" column contains only "left" and "right" entries, derived # from the initial event named "response/left" and "response/right" print(metadata['first_response']) """ Explanation: Keeping only the first events of a group The metadata now contains 400 rows – one per stimulation – and the same number of columns as before. Great! We have two types of responses in our data: response/left and response/right. We would like to map those to "correct" and "incorrect". To make this easier, we can ask ~mne.epochs.make_metadata to generate an entirely new column that refers to the first response observed during the given time interval. This works by passing a subset of the :term:hierarchical event descriptors (HEDs, inspired by :footcite:BigdelyShamloEtAl2013) used to name events via the keep_first parameter. For example, in the case of the HEDs response/left and response/right, we could pass keep_first='response' to generate a new column, response, containing the latency of the respective event. 
This value pertains only the first (or, in this specific example: the only) response, regardless of side (left or right). To indicate which event type (here: response side) was matched, a second column is added: first_response. The values in this column are the event types without the string used for matching, as it is already encoded as the column name, i.e. in our example, we expect it to only contain 'left' and 'right'. End of explanation """ metadata.loc[metadata['stimulus/compatible/target_left'].notna() & metadata['stimulus/compatible/target_right'].notna(), :] """ Explanation: We're facing a similar issue with the stimulus events, and now there are not only two, but four different types: stimulus/compatible/target_left, stimulus/compatible/target_right, stimulus/incompatible/target_left, and stimulus/incompatible/target_right. Even more, because in the present paradigm stimuli were presented in rapid succession, sometimes multiple stimulus events occurred within the 1.5 second time window we're using to generate our metadata. See for example: End of explanation """ keep_first = ['stimulus', 'response'] metadata, events, event_id = mne.epochs.make_metadata( events=all_events, event_id=all_event_id, tmin=metadata_tmin, tmax=metadata_tmax, sfreq=raw.info['sfreq'], row_events=row_events, keep_first=keep_first) # all times of the time-locked events should be zero assert all(metadata['stimulus'] == 0) # the values in the new "first_stimulus" and "first_response" columns indicate # which events were selected via "keep_first" metadata[['first_stimulus', 'first_response']] """ Explanation: This can easily lead to confusion during later stages of processing, so let's create a column for the first stimulus – which will always be the time-locked stimulus, as our time interval starts at 0 seconds. We can pass a list of strings to keep_first. End of explanation """ # left-side stimulation metadata.loc[metadata['first_stimulus'].isin(['compatible/target_left', 'incompatible/target_left']), 'stimulus_side'] = 'left' # right-side stimulation metadata.loc[metadata['first_stimulus'].isin(['compatible/target_right', 'incompatible/target_right']), 'stimulus_side'] = 'right' # first assume all responses were incorrect, then mark those as correct where # the stimulation side matches the response side metadata['response_correct'] = False metadata.loc[metadata['stimulus_side'] == metadata['first_response'], 'response_correct'] = True correct_response_count = metadata['response_correct'].sum() print(f'Correct responses: {correct_response_count}\n' f'Incorrect responses: {len(metadata) - correct_response_count}') """ Explanation: Adding new columns to describe stimulation side and response correctness Perfect! Now it's time to define which responses were correct and incorrect. We first add a column encoding the side of stimulation, and then simply check whether the response matches the stimulation side, and add this result to another column. End of explanation """ epochs_tmin, epochs_tmax = -0.1, 0.4 # epochs range: [-0.1, 0.4] s reject = {'eeg': 250e-6} # exclude epochs with strong artifacts epochs = mne.Epochs(raw=raw, tmin=epochs_tmin, tmax=epochs_tmax, events=events, event_id=event_id, metadata=metadata, reject=reject, preload=True) """ Explanation: Creating Epochs with metadata, and visualizing ERPs It's finally time to create our epochs! We set the metadata directly on instantiation via the metadata parameter. 
Also it is important to remember to pass events and event_id as returned from ~mne.epochs.make_metadata, as we only created metadata for a subset of our original events by passing row_events. Otherwise, the length of the metadata and the number of epochs would not match and MNE-Python would raise an error. End of explanation """ vis_erp = epochs['response_correct'].average() vis_erp_slow = epochs['(not response_correct) & ' '(response > 0.3)'].average() fig, ax = plt.subplots(2, figsize=(6, 6)) vis_erp.plot(gfp=True, spatial_colors=True, axes=ax[0]) vis_erp_slow.plot(gfp=True, spatial_colors=True, axes=ax[1]) ax[0].set_title('Visual ERPs – All Correct Responses') ax[1].set_title('Visual ERPs – Slow Correct Responses') fig.tight_layout() fig """ Explanation: Lastly, let's visualize the ERPs evoked by the visual stimulation, once for all trials with correct responses, and once for all trials with correct responses and a response time greater than 0.5 seconds (i.e., slow responses). End of explanation """ metadata_tmin, metadata_tmax = -1.5, 0 row_events = ['response/left', 'response/right'] keep_last = ['stimulus', 'response'] metadata, events, event_id = mne.epochs.make_metadata( events=all_events, event_id=all_event_id, tmin=metadata_tmin, tmax=metadata_tmax, sfreq=raw.info['sfreq'], row_events=row_events, keep_last=keep_last) """ Explanation: Aside from the fact that the data for the (much fewer) slow responses looks noisier – which is entirely to be expected – not much of an ERP difference can be seen. Applying the knowledge: visualizing the ERN component In the following analysis, we will use the same dataset as above, but we'll time-lock our epochs to the response events, not to the stimulus onset. Comparing ERPs associated with correct and incorrect behavioral responses, we should be able to see the error-related negativity (ERN) in the difference wave. Since we want to time-lock our analysis to responses, for the automated metadata generation we'll consider events occurring up to 1500 ms before the response trigger. We only wish to consider the last stimulus and response in each time window: Remember that we're dealing with rapid stimulus presentations in this paradigm; taking the last response – at time point zero – and the last stimulus – the one closest to the response – ensures we actually create the right stimulus-response pairings. We can achieve this by passing the keep_last parameter, which works exactly like keep_first we got to know above, only that it keeps the last occurrences of the specified events and stores them in columns whose names start with last_. End of explanation """ # left-side stimulation metadata.loc[metadata['last_stimulus'].isin(['compatible/target_left', 'incompatible/target_left']), 'stimulus_side'] = 'left' # right-side stimulation metadata.loc[metadata['last_stimulus'].isin(['compatible/target_right', 'incompatible/target_right']), 'stimulus_side'] = 'right' # first assume all responses were incorrect, then mark those as correct where # the stimulation side matches the response side metadata['response_correct'] = False metadata.loc[metadata['stimulus_side'] == metadata['last_response'], 'response_correct'] = True metadata """ Explanation: Exactly like in the previous example, create new columns stimulus_side and response_correct. 
End of explanation """ epochs_tmin, epochs_tmax = -0.6, 0.4 baseline = (-0.4, -0.2) reject = {'eeg': 250e-6} epochs = mne.Epochs(raw=raw, tmin=epochs_tmin, tmax=epochs_tmax, baseline=baseline, reject=reject, events=events, event_id=event_id, metadata=metadata, preload=True) """ Explanation: Now it's already time to epoch the data! When deciding upon the epochs duration for this specific analysis, we need to ensure we see quite a bit of signal from before and after the motor response. We also must be aware of the fact that motor-/muscle-related signals will most likely be present before the response button trigger pulse appears in our data, so the time period close to the response event should not be used for baseline correction. But at the same time, we don't want to use a baseline period that extends too far away from the button event. The following values seem to work quite well. End of explanation """ epochs.metadata.loc[epochs.metadata['last_stimulus'].isna(), :] """ Explanation: Let's do a final sanity check: we want to make sure that in every row, we actually have a stimulus. We use epochs.metadata (and not metadata) because when creating the epochs, we passed the reject parameter, and MNE-Python always ensures that epochs.metadata stays in sync with the available epochs. End of explanation """ epochs = epochs['last_stimulus.notna()'] """ Explanation: Bummer! It seems the very first two responses were recorded before the first stimulus appeared: the values in the stimulus column are None. There is a very simple way to select only those epochs that do have a stimulus (i.e., are not None): End of explanation """ resp_erp_correct = epochs['response_correct'].average() resp_erp_incorrect = epochs['not response_correct'].average() mne.viz.plot_compare_evokeds({'Correct Response': resp_erp_correct, 'Incorrect Response': resp_erp_incorrect}, picks='FCz', show_sensors=True, title='ERPs at FCz, time-locked to response') # topoplot of average field from time 0.0-0.1 s resp_erp_incorrect.plot_topomap(times=0.05, average=0.05, size=3, title='Avg. topography 0–100 ms after ' 'incorrect responses') """ Explanation: Time to calculate the ERPs for correct and incorrect responses. For visualization, we'll only look at sensor FCz, which is known to show the ERN nicely in the given paradigm. We'll also create a topoplot to get an impression of the average scalp potentials measured in the first 100 ms after an incorrect response. End of explanation """ # difference wave: incorrect minus correct responses resp_erp_diff = mne.combine_evoked([resp_erp_incorrect, resp_erp_correct], weights=[1, -1]) fig, ax = plt.subplots() resp_erp_diff.plot(picks='FCz', axes=ax, selectable=False, show=False) # make ERP trace bolder ax.lines[0].set_linewidth(1.5) # add lines through origin ax.axhline(0, ls='dotted', lw=0.75, color='gray') ax.axvline(0, ls=(0, (10, 10)), lw=0.75, color='gray', label='response trigger') # mark trough trough_time_idx = resp_erp_diff.copy().pick('FCz').data.argmin() trough_time = resp_erp_diff.times[trough_time_idx] ax.axvline(trough_time, ls=(0, (10, 10)), lw=0.75, color='red', label='max. negativity') # legend, axis labels, title ax.legend(loc='lower left') ax.set_xlabel('Time (s)', fontweight='bold') ax.set_ylabel('Amplitude (µV)', fontweight='bold') ax.set_title('Channel: FCz') fig.suptitle('ERN (Difference Wave)', fontweight='bold') fig """ Explanation: We can see a strong negative deflection immediately after incorrect responses, compared to correct responses. 
The topoplot, too, leaves no doubt: what we're looking at is, in fact, the ERN. Some researchers suggest constructing the difference wave between ERPs for correct and incorrect responses, as it more clearly reveals signal differences, while ideally also improving the signal-to-noise ratio (under the assumption that the noise level in "correct" and "incorrect" trials is similar). Let's do just that and put it into a publication-ready visualization.
End of explanation
"""
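As an optional follow-up (not part of the original tutorial), the ERN can also be reported numerically rather than only graphically. The sketch below reuses the resp_erp_diff difference wave and the FCz channel from the cells above; the 0 to 0.2 s search window is an assumption chosen for illustration, not a value prescribed by the tutorial.

import numpy as np

# Restrict the difference wave to FCz so the channel dimension is unambiguous
ern_fcz = resp_erp_diff.copy().pick('FCz')

# Look for the most negative deflection in an assumed 0 to 0.2 s post-response window
mask = (ern_fcz.times >= 0) & (ern_fcz.times <= 0.2)
window_data = ern_fcz.data[0, mask]
window_times = ern_fcz.times[mask]

trough_idx = window_data.argmin()
print(f'ERN trough at {window_times[trough_idx]:.3f} s, '
      f'amplitude {window_data[trough_idx] * 1e6:.2f} µV')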
GoogleCloudPlatform/training-data-analyst
courses/machine_learning/deepdive2/launching_into_ml/labs/first_model.ipynb
apache-2.0
!pip install --user google-cloud-bigquery==1.25.0 """ Explanation: First BigQuery ML models for Taxifare Prediction In this notebook, we will use BigQuery ML to build our first models for taxifare prediction. BigQuery ML provides a fast way to build ML models on large structured and semi-structured datasets. Learning objectives Choose the correct BigQuery ML model type and specify options. Evaluate the performance of your ML model. Improve model performance through data quality cleanup. Create a Deep Neural Network (DNN) using SQL. Each learning objective will correspond to a #TODO in the notebook where you will complete the notebook cell's code before running. Refer to the solution notebook for reference. We'll start by creating a dataset to hold all the models we create in BigQuery. Import libraries End of explanation """ import os """ Explanation: Restart the kernel before proceeding further (On the Notebook menu - Kernel - Restart Kernel). End of explanation """ %%bash export PROJECT=$(gcloud config list project --format "value(core.project)") echo "Your current GCP Project Name is: "$PROJECT PROJECT = "your-gcp-project-here" # REPLACE WITH YOUR PROJECT NAME REGION = "us-central1" # REPLACE WITH YOUR BUCKET REGION e.g. us-central1 # Do not change these os.environ["BUCKET"] = PROJECT # DEFAULT BUCKET WILL BE PROJECT ID os.environ["REGION"] = REGION if PROJECT == "your-gcp-project-here": print("Don't forget to update your PROJECT name! Currently:", PROJECT) """ Explanation: Set environment variables End of explanation """ %%bash ## Create a BigQuery dataset for serverlessml if it doesn't exist datasetexists=$(bq ls -d | grep -w serverlessml) if [ -n "$datasetexists" ]; then echo -e "BigQuery dataset already exists, let's not recreate it." else echo "Creating BigQuery dataset titled: serverlessml" bq --location=US mk --dataset \ --description 'Taxi Fare' \ $PROJECT:serverlessml echo "\nHere are your current datasets:" bq ls fi ## Create GCS bucket if it doesn't exist already... exists=$(gsutil ls -d | grep -w gs://${BUCKET}/) if [ -n "$exists" ]; then echo -e "Bucket exists, let's not recreate it." else echo "Creating a new GCS bucket." gsutil mb -l ${REGION} gs://${BUCKET} echo "\nHere are your current buckets:" gsutil ls fi """ Explanation: Create a BigQuery Dataset and Google Cloud Storage Bucket A BigQuery dataset is a container for tables, views, and models built with BigQuery ML. Let's create one called serverlessml if we have not already done so in an earlier lab. We'll do the same for a GCS bucket for our project too. End of explanation """ %%bigquery CREATE OR REPLACE MODEL serverlessml.model1_rawdata # TODO 1: Choose the correct ML model_type for forecasting: # i.e. Linear Regression (linear_reg) or Logistic Regression (logistic_reg) # Enter in the appropriate ML OPTIONS() in the line below: SELECT (tolls_amount + fare_amount) AS fare_amount, pickup_longitude AS pickuplon, pickup_latitude AS pickuplat, dropoff_longitude AS dropofflon, dropoff_latitude AS dropofflat, passenger_count * 1.0 AS passengers FROM `nyc-tlc.yellow.trips` WHERE ABS(MOD(FARM_FINGERPRINT(CAST(pickup_datetime AS STRING)), 100000)) = 1 """ Explanation: Model 1: Raw data Let's build a model using just the raw data. It's not going to be very good, but sometimes it is good to actually experience this. The model will take a minute or so to train. When it comes to ML, this is blazing fast. 
End of explanation """ %%bigquery # TODO 2: Specify the command to evaluate your newly trained model SELECT * FROM """ Explanation: Once the training is done, visit the BigQuery Cloud Console and look at the model that has been trained. Then, come back to this notebook. Note that BigQuery automatically split the data we gave it, and trained on only a part of the data and used the rest for evaluation. We can look at eval statistics on that held-out data: End of explanation """ %%bigquery SELECT SQRT(mean_squared_error) AS rmse FROM ML.EVALUATE(MODEL serverlessml.model1_rawdata) """ Explanation: Let's report just the error we care about, the Root Mean Squared Error (RMSE) End of explanation """ %%bigquery SELECT SQRT(mean_squared_error) AS rmse FROM ML.EVALUATE(MODEL serverlessml.model1_rawdata, ( SELECT (tolls_amount + fare_amount) AS fare_amount, pickup_longitude AS pickuplon, pickup_latitude AS pickuplat, dropoff_longitude AS dropofflon, dropoff_latitude AS dropofflat, passenger_count * 1.0 AS passengers # treat as decimal FROM `nyc-tlc.yellow.trips` WHERE ABS(MOD(FARM_FINGERPRINT(CAST(pickup_datetime AS STRING)), 100000)) = 2 # Placeholder for additional filters as part of TODO 3 later )) """ Explanation: We told you it was not going to be good! Recall that our heuristic got 8.13, and our target is $6. Note that the error is going to depend on the dataset that we evaluate it on. We can also evaluate the model on our own held-out benchmark/test dataset, but we shouldn't make a habit of this (we want to keep our benchmark dataset as the final evaluation, not make decisions using it all along the way. If we do that, our test dataset won't be truly independent). End of explanation """ %%bigquery CREATE OR REPLACE TABLE serverlessml.cleaned_training_data AS SELECT (tolls_amount + fare_amount) AS fare_amount, pickup_longitude AS pickuplon, pickup_latitude AS pickuplat, dropoff_longitude AS dropofflon, dropoff_latitude AS dropofflat, passenger_count * 1.0 AS passengers FROM `nyc-tlc.yellow.trips` WHERE ABS(MOD(FARM_FINGERPRINT(CAST(pickup_datetime AS STRING)), 100000)) = 1 AND trip_distance > 0 AND fare_amount >= 2.5 AND pickup_longitude > -78 AND pickup_longitude < -70 AND dropoff_longitude > -78 AND dropoff_longitude < -70 AND pickup_latitude > 37 AND pickup_latitude < 45 AND dropoff_latitude > 37 AND dropoff_latitude < 45 AND passenger_count > 0 %%bigquery -- LIMIT 0 is a free query; this allows us to check that the table exists. SELECT * FROM serverlessml.cleaned_training_data LIMIT 0 %%bigquery CREATE OR REPLACE MODEL serverlessml.model2_cleanup OPTIONS(input_label_cols=['fare_amount'], model_type='linear_reg') AS SELECT * FROM serverlessml.cleaned_training_data %%bigquery SELECT SQRT(mean_squared_error) AS rmse FROM ML.EVALUATE(MODEL serverlessml.model2_cleanup) """ Explanation: What was the RMSE from the above? TODO 3: Now apply the below filters to the previous query inside the WHERE clause. Does the performance improve? Why or why not? sql AND trip_distance &gt; 0 AND fare_amount &gt;= 2.5 AND pickup_longitude &gt; -78 AND pickup_longitude &lt; -70 AND dropoff_longitude &gt; -78 AND dropoff_longitude &lt; -70 AND pickup_latitude &gt; 37 AND pickup_latitude &lt; 45 AND dropoff_latitude &gt; 37 AND dropoff_latitude &lt; 45 AND passenger_count &gt; 0 Model 2: Apply data cleanup Recall that we did some data cleanup in the previous lab. Let's do those before training. This is a dataset that we will need quite frequently in this notebook, so let's extract it first. 
End of explanation """ %%bigquery -- This training takes on the order of 15 minutes. CREATE OR REPLACE MODEL serverlessml.model3b_dnn # TODO 4a: Choose correct BigQuery ML model type for DNN and label field # Options: dnn_regressor, linear_reg, logistic_reg OPTIONS() AS SELECT * FROM serverlessml.cleaned_training_data %%bigquery SELECT SQRT(mean_squared_error) AS rmse FROM ML.EVALUATE(MODEL serverlessml.model3b_dnn) """ Explanation: Model 3: More sophisticated models What if we try a more sophisticated model? Let's try Deep Neural Networks (DNNs) in BigQuery: DNN To create a DNN, simply specify dnn_regressor for the model_type and add your hidden layers. End of explanation """ %%bigquery SELECT SQRT(mean_squared_error) AS rmse # TODO 4b: What is the command to see how well a # ML model performed? ML.What? FROM ML.WHATCOMMAND(MODEL serverlessml.model3b_dnn, ( SELECT (tolls_amount + fare_amount) AS fare_amount, pickup_datetime, pickup_longitude AS pickuplon, pickup_latitude AS pickuplat, dropoff_longitude AS dropofflon, dropoff_latitude AS dropofflat, passenger_count * 1.0 AS passengers, 'unused' AS key FROM `nyc-tlc.yellow.trips` WHERE ABS(MOD(FARM_FINGERPRINT(CAST(pickup_datetime AS STRING)), 10000)) = 2 AND trip_distance > 0 AND fare_amount >= 2.5 AND pickup_longitude > -78 AND pickup_longitude < -70 AND dropoff_longitude > -78 AND dropoff_longitude < -70 AND pickup_latitude > 37 AND pickup_latitude < 45 AND dropoff_latitude > 37 AND dropoff_latitude < 45 AND passenger_count > 0 )) """ Explanation: Nice! Evaluate DNN on benchmark dataset Let's use the same validation dataset to evaluate -- remember that evaluation metrics depend on the dataset. You can not compare two models unless you have run them on the same withheld data. End of explanation """
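If you prefer plain Python cells to the %%bigquery magic, the same RMSE query can be issued with the google-cloud-bigquery client that was installed at the top of this notebook. This is only a sketch: it assumes the serverlessml.model3b_dnn model above has finished training and that the notebook environment already has a default project and credentials configured.

from google.cloud import bigquery

client = bigquery.Client()  # picks up the notebook's default project and credentials

sql = (
    "SELECT SQRT(mean_squared_error) AS rmse "
    "FROM ML.EVALUATE(MODEL serverlessml.model3b_dnn)"
)

# Run the query and pull the single-row result into a pandas DataFrame
rmse_df = client.query(sql).to_dataframe()
print(rmse_df)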
yl565/statsmodels
examples/notebooks/markov_autoregression.ipynb
bsd-3-clause
%matplotlib inline import numpy as np import pandas as pd import statsmodels.api as sm import matplotlib.pyplot as plt import requests from io import BytesIO # NBER recessions from pandas_datareader.data import DataReader from datetime import datetime usrec = DataReader('USREC', 'fred', start=datetime(1947, 1, 1), end=datetime(2013, 4, 1)) """ Explanation: Markov switching autoregression models This notebook provides an example of the use of Markov switching models in Statsmodels to replicate a number of results presented in Kim and Nelson (1999). It applies the Hamilton (1989) filter the Kim (1994) smoother. This is tested against the Markov-switching models from E-views 8, which can be found at http://www.eviews.com/EViews8/ev8ecswitch_n.html#MarkovAR or the Markov-switching models of Stata 14 which can be found at http://www.stata.com/manuals14/tsmswitch.pdf. End of explanation """ # Get the RGNP data to replicate Hamilton from statsmodels.tsa.regime_switching.tests.test_markov_autoregression import rgnp dta_hamilton = pd.Series(rgnp, index=pd.date_range('1951-04-01', '1984-10-01', freq='QS')) # Plot the data dta_hamilton.plot(title='Growth rate of Real GNP', figsize=(12,3)) # Fit the model mod_hamilton = sm.tsa.MarkovAutoregression(dta_hamilton, k_regimes=2, order=4, switching_ar=False) res_hamilton = mod_hamilton.fit() res_hamilton.summary() """ Explanation: Hamilton (1989) switching model of GNP This replicates Hamilton's (1989) seminal paper introducing Markov-switching models. The model is an autoregressive model of order 4 in which the mean of the process switches between two regimes. It can be written: $$ y_t = \mu_{S_t} + \phi_1 (y_{t-1} - \mu_{S_{t-1}}) + \phi_2 (y_{t-2} - \mu_{S_{t-2}}) + \phi_3 (y_{t-3} - \mu_{S_{t-3}}) + \phi_4 (y_{t-4} - \mu_{S_{t-4}}) + \varepsilon_t $$ Each period, the regime transitions according to the following matrix of transition probabilities: $$ P(S_t = s_t | S_{t-1} = s_{t-1}) = \begin{bmatrix} p_{00} & p_{10} \ p_{01} & p_{11} \end{bmatrix} $$ where $p_{ij}$ is the probability of transitioning from regime $i$, to regime $j$. The model class is MarkovAutoregression in the time-series part of Statsmodels. In order to create the model, we must specify the number of regimes with k_regimes=2, and the order of the autoregression with order=4. The default model also includes switching autoregressive coefficients, so here we also need to specify switching_ar=False to avoid that. After creation, the model is fit via maximum likelihood estimation. Under the hood, good starting parameters are found using a number of steps of the expectation maximization (EM) algorithm, and a quasi-Newton (BFGS) algorithm is applied to quickly find the maximum. End of explanation """ fig, axes = plt.subplots(2, figsize=(7,7)) ax = axes[0] ax.plot(res_hamilton.filtered_marginal_probabilities[0]) ax.fill_between(usrec.index, 0, 1, where=usrec['USREC'].values, color='k', alpha=0.1) ax.set_xlim(dta_hamilton.index[4], dta_hamilton.index[-1]) ax.set(title='Filtered probability of recession') ax = axes[1] ax.plot(res_hamilton.smoothed_marginal_probabilities[0]) ax.fill_between(usrec.index, 0, 1, where=usrec['USREC'].values, color='k', alpha=0.1) ax.set_xlim(dta_hamilton.index[4], dta_hamilton.index[-1]) ax.set(title='Smoothed probability of recession') fig.tight_layout() """ Explanation: We plot the filtered and smoothed probabilities of a recession. 
Filtered refers to an estimate of the probability at time $t$ based on data up to and including time $t$ (but excluding time $t+1, ..., T$). Smoothed refers to an estimate of the probability at time $t$ using all the data in the sample. For reference, the shaded periods represent the NBER recessions. End of explanation """ print(res_hamilton.expected_durations) """ Explanation: From the estimated transition matrix we can calculate the expected duration of a recession versus an expansion. End of explanation """ # Get the dataset ew_excs = requests.get('http://econ.korea.ac.kr/~cjkim/MARKOV/data/ew_excs.prn').content raw = pd.read_table(BytesIO(ew_excs), header=None, skipfooter=1, engine='python') raw.index = pd.date_range('1926-01-01', '1995-12-01', freq='MS') dta_kns = raw.ix[:'1986'] - raw.ix[:'1986'].mean() # Plot the dataset dta_kns[0].plot(title='Excess returns', figsize=(12, 3)) # Fit the model mod_kns = sm.tsa.MarkovRegression(dta_kns, k_regimes=3, trend='nc', switching_variance=True) res_kns = mod_kns.fit() res_kns.summary() """ Explanation: In this case, it is expected that a recession will last about one year (4 quarters) and an expansion about two and a half years. Kim, Nelson, and Startz (1998) Three-state Variance Switching This model demonstrates estimation with regime heteroskedasticity (switching of variances) and no mean effect. The dataset can be reached at http://econ.korea.ac.kr/~cjkim/MARKOV/data/ew_excs.prn. The model in question is: $$ \begin{align} y_t & = \varepsilon_t \ \varepsilon_t & \sim N(0, \sigma_{S_t}^2) \end{align} $$ Since there is no autoregressive component, this model can be fit using the MarkovRegression class. Since there is no mean effect, we specify trend='nc'. There are hypotheized to be three regimes for the switching variances, so we specify k_regimes=3 and switching_variance=True (by default, the variance is assumed to be the same across regimes). End of explanation """ fig, axes = plt.subplots(3, figsize=(10,7)) ax = axes[0] ax.plot(res_kns.smoothed_marginal_probabilities[0]) ax.set(title='Smoothed probability of a low-variance regime for stock returns') ax = axes[1] ax.plot(res_kns.smoothed_marginal_probabilities[1]) ax.set(title='Smoothed probability of a medium-variance regime for stock returns') ax = axes[2] ax.plot(res_kns.smoothed_marginal_probabilities[2]) ax.set(title='Smoothed probability of a high-variance regime for stock returns') fig.tight_layout() """ Explanation: Below we plot the probabilities of being in each of the regimes; only in a few periods is a high-variance regime probable. End of explanation """ # Get the dataset filardo = requests.get('http://econ.korea.ac.kr/~cjkim/MARKOV/data/filardo.prn').content dta_filardo = pd.read_table(BytesIO(filardo), sep=' +', header=None, skipfooter=1, engine='python') dta_filardo.columns = ['month', 'ip', 'leading'] dta_filardo.index = pd.date_range('1948-01-01', '1991-04-01', freq='MS') dta_filardo['dlip'] = np.log(dta_filardo['ip']).diff()*100 # Deflated pre-1960 observations by ratio of std. devs. # See hmt_tvp.opt or Filardo (1994) p. 
302 std_ratio = dta_filardo['dlip']['1960-01-01':].std() / dta_filardo['dlip'][:'1959-12-01'].std() dta_filardo['dlip'][:'1959-12-01'] = dta_filardo['dlip'][:'1959-12-01'] * std_ratio dta_filardo['dlleading'] = np.log(dta_filardo['leading']).diff()*100 dta_filardo['dmdlleading'] = dta_filardo['dlleading'] - dta_filardo['dlleading'].mean() # Plot the data dta_filardo['dlip'].plot(title='Standardized growth rate of industrial production', figsize=(13,3)) plt.figure() dta_filardo['dmdlleading'].plot(title='Leading indicator', figsize=(13,3)); """ Explanation: Filardo (1994) Time-Varying Transition Probabilities This model demonstrates estimation with time-varying transition probabilities. The dataset can be reached at http://econ.korea.ac.kr/~cjkim/MARKOV/data/filardo.prn. In the above models we have assumed that the transition probabilities are constant across time. Here we allow the probabilities to change with the state of the economy. Otherwise, the model is the same Markov autoregression of Hamilton (1989). Each period, the regime now transitions according to the following matrix of time-varying transition probabilities: $$ P(S_t = s_t | S_{t-1} = s_{t-1}) = \begin{bmatrix} p_{00,t} & p_{10,t} \ p_{01,t} & p_{11,t} \end{bmatrix} $$ where $p_{ij,t}$ is the probability of transitioning from regime $i$, to regime $j$ in period $t$, and is defined to be: $$ p_{ij,t} = \frac{\exp{ x_{t-1}' \beta_{ij} }}{1 + \exp{ x_{t-1}' \beta_{ij} }} $$ Instead of estimating the transition probabilities as part of maximum likelihood, the regression coefficients $\beta_{ij}$ are estimated. These coefficients relate the transition probabilities to a vector of pre-determined or exogenous regressors $x_{t-1}$. End of explanation """ mod_filardo = sm.tsa.MarkovAutoregression( dta_filardo.ix[2:, 'dlip'], k_regimes=2, order=4, switching_ar=False, exog_tvtp=sm.add_constant(dta_filardo.ix[1:-1, 'dmdlleading'])) np.random.seed(12345) res_filardo = mod_filardo.fit(search_reps=20) res_filardo.summary() """ Explanation: The time-varying transition probabilities are specified by the exog_tvtp parameter. Here we demonstrate another feature of model fitting - the use of a random search for MLE starting parameters. Because Markov switching models are often characterized by many local maxima of the likelihood function, performing an initial optimization step can be helpful to find the best parameters. Below, we specify that 20 random perturbations from the starting parameter vector are examined and the best one used as the actual starting parameters. Because of the random nature of the search, we seed the random number generator beforehand to allow replication of the result. End of explanation """ fig, ax = plt.subplots(figsize=(12,3)) ax.plot(res_filardo.smoothed_marginal_probabilities[0]) ax.fill_between(usrec.index, 0, 1, where=usrec['USREC'].values, color='gray', alpha=0.2) ax.set_xlim(dta_filardo.index[6], dta_filardo.index[-1]) ax.set(title='Smoothed probability of a low-production state'); """ Explanation: Below we plot the smoothed probability of the economy operating in a low-production state, and again include the NBER recessions for comparison. End of explanation """ res_filardo.expected_durations[0].plot( title='Expected duration of a low-production state', figsize=(12,3)); """ Explanation: Using the time-varying transition probabilities, we can see how the expected duration of a low-production state changes over time: End of explanation """
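As a small optional addition (not in the original example), the smoothed regime probabilities can be summarized numerically as well. The sketch below assumes res_filardo from the cells above; the 0.5 probability threshold is an arbitrary choice used only for illustration.

# Smoothed probability of the low-production regime (a pandas Series indexed by date)
prob_low = res_filardo.smoothed_marginal_probabilities[0]

# Share of months the model assigns more than 50% probability to that regime
in_low = prob_low > 0.5
print('Share of months classified as low-production: {:.1%}'.format(in_low.mean()))

# Number of times the 0.5-threshold classification flips from one month to the next
n_switches = int(in_low.astype(int).diff().abs().sum())
print('Implied regime switches:', n_switches)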
RaspberryJamBe/ipython-notebooks
notebooks/nl-be/Communicatie - Cloud bericht 2 - Bericht ontvangen + LED knipperen.ipynb
cc0-1.0
APPKEY = "******"
"""
Explanation: APPKEY is the Application Key for a (free) http://www.realtime.co/ "Realtime Messaging Free" subscription. See "104 - Remote deurbel - Een cloud API gebruiken om berichten te sturen" for more detailed information.
End of explanation
"""

import time
import RPi.GPIO as GPIO

GPIO.setmode(GPIO.BCM)
PIN = 18
GPIO.setup(PIN, GPIO.OUT)

def flash_led():
    GPIO.output(PIN, 1)
    time.sleep(0.5)
    GPIO.output(PIN, 0)

"""
Explanation: First set everything up for the LED (see 102 - LEDs - De Raspberry Pi GPIO pinnen aansturen for an illustration; PIN 18 is used here, but above all do not forget the resistor!)
End of explanation
"""

def on_message(sender, channel, message):
    print("Message received via {}: {}".format(channel, message))
    flash_led()

"""
Explanation: Define what should happen when a message comes in
End of explanation
"""

import ortc

oc = ortc.OrtcClient()
oc.cluster_url = "http://ortc-developers.realtime.co/server/2.1"

def on_connected(sender):
    print('Connected')
    oc.subscribe('deurbel', True, on_message)

oc.set_on_connected_callback(on_connected)
oc.connect(APPKEY)

"""
Explanation: And finally, subscribe to the "channel" to read incoming messages
End of explanation
"""

GPIO.cleanup()

"""
Explanation: Et voilà: send a message with the sending script or via the realtime.co console.
End of explanation
"""
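As an optional variation (not part of the original notebook), the single LED pulse can be replaced by a short blink pattern so an incoming message is harder to miss. The sketch below only uses the PIN, GPIO and time objects already set up above; to use it, call flash_led_times() instead of flash_led() inside on_message().

def flash_led_times(count=3, on_time=0.2, off_time=0.2):
    # Blink the LED 'count' times instead of a single half-second pulse
    for _ in range(count):
        GPIO.output(PIN, 1)
        time.sleep(on_time)
        GPIO.output(PIN, 0)
        time.sleep(off_time)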
Kaggle/learntools
notebooks/ml_intermediate/raw/ex4.ipynb
apache-2.0
# Set up code checking import os if not os.path.exists("../input/train.csv"): os.symlink("../input/home-data-for-ml-course/train.csv", "../input/train.csv") os.symlink("../input/home-data-for-ml-course/test.csv", "../input/test.csv") from learntools.core import binder binder.bind(globals()) from learntools.ml_intermediate.ex4 import * print("Setup Complete") """ Explanation: In this exercise, you will use pipelines to improve the efficiency of your machine learning code. Setup The questions below will give you feedback on your work. Run the following cell to set up the feedback system. End of explanation """ import pandas as pd from sklearn.model_selection import train_test_split # Read the data X_full = pd.read_csv('../input/train.csv', index_col='Id') X_test_full = pd.read_csv('../input/test.csv', index_col='Id') # Remove rows with missing target, separate target from predictors X_full.dropna(axis=0, subset=['SalePrice'], inplace=True) y = X_full.SalePrice X_full.drop(['SalePrice'], axis=1, inplace=True) # Break off validation set from training data X_train_full, X_valid_full, y_train, y_valid = train_test_split(X_full, y, train_size=0.8, test_size=0.2, random_state=0) # "Cardinality" means the number of unique values in a column # Select categorical columns with relatively low cardinality (convenient but arbitrary) categorical_cols = [cname for cname in X_train_full.columns if X_train_full[cname].nunique() < 10 and X_train_full[cname].dtype == "object"] # Select numerical columns numerical_cols = [cname for cname in X_train_full.columns if X_train_full[cname].dtype in ['int64', 'float64']] # Keep selected columns only my_cols = categorical_cols + numerical_cols X_train = X_train_full[my_cols].copy() X_valid = X_valid_full[my_cols].copy() X_test = X_test_full[my_cols].copy() X_train.head() """ Explanation: You will work with data from the Housing Prices Competition for Kaggle Learn Users. Run the next code cell without changes to load the training and validation sets in X_train, X_valid, y_train, and y_valid. The test set is loaded in X_test. End of explanation """ from sklearn.compose import ColumnTransformer from sklearn.pipeline import Pipeline from sklearn.impute import SimpleImputer from sklearn.preprocessing import OneHotEncoder from sklearn.ensemble import RandomForestRegressor from sklearn.metrics import mean_absolute_error # Preprocessing for numerical data numerical_transformer = SimpleImputer(strategy='constant') # Preprocessing for categorical data categorical_transformer = Pipeline(steps=[ ('imputer', SimpleImputer(strategy='most_frequent')), ('onehot', OneHotEncoder(handle_unknown='ignore')) ]) # Bundle preprocessing for numerical and categorical data preprocessor = ColumnTransformer( transformers=[ ('num', numerical_transformer, numerical_cols), ('cat', categorical_transformer, categorical_cols) ]) # Define model model = RandomForestRegressor(n_estimators=100, random_state=0) # Bundle preprocessing and modeling code in a pipeline clf = Pipeline(steps=[('preprocessor', preprocessor), ('model', model) ]) # Preprocessing of training data, fit model clf.fit(X_train, y_train) # Preprocessing of validation data, get predictions preds = clf.predict(X_valid) print('MAE:', mean_absolute_error(y_valid, preds)) """ Explanation: The next code cell uses code from the tutorial to preprocess the data and train a model. Run this code without changes. 
End of explanation """ # Preprocessing for numerical data numerical_transformer = ____ # Your code here # Preprocessing for categorical data categorical_transformer = ____ # Your code here # Bundle preprocessing for numerical and categorical data preprocessor = ColumnTransformer( transformers=[ ('num', numerical_transformer, numerical_cols), ('cat', categorical_transformer, categorical_cols) ]) # Define model model = ____ # Your code here # Check your answer step_1.a.check() #%%RM_IF(PROD)%% # Preprocessing for numerical data numerical_transformer = SimpleImputer(strategy='constant') # Preprocessing for categorical data categorical_transformer = Pipeline(steps=[ ('imputer', SimpleImputer(strategy='constant')), ('onehot', OneHotEncoder(handle_unknown='ignore')) ]) # Bundle preprocessing for numerical and categorical data preprocessor = ColumnTransformer( transformers=[ ('num', numerical_transformer, numerical_cols), ('cat', categorical_transformer, categorical_cols) ]) # Define model model = RandomForestRegressor(n_estimators=100, random_state=0) step_1.a.assert_check_passed() # Lines below will give you a hint or solution code #_COMMENT_IF(PROD)_ step_1.a.hint() #_COMMENT_IF(PROD)_ step_1.a.solution() """ Explanation: The code yields a value around 17862 for the mean absolute error (MAE). In the next step, you will amend the code to do better. Step 1: Improve the performance Part A Now, it's your turn! In the code cell below, define your own preprocessing steps and random forest model. Fill in values for the following variables: - numerical_transformer - categorical_transformer - model To pass this part of the exercise, you need only define valid preprocessing steps and a random forest model. End of explanation """ # Bundle preprocessing and modeling code in a pipeline my_pipeline = Pipeline(steps=[('preprocessor', preprocessor), ('model', model) ]) # Preprocessing of training data, fit model my_pipeline.fit(X_train, y_train) # Preprocessing of validation data, get predictions preds = my_pipeline.predict(X_valid) # Evaluate the model score = mean_absolute_error(y_valid, preds) print('MAE:', score) # Check your answer step_1.b.check() #%%RM_IF(PROD)%% step_1.b.assert_check_passed() # Line below will give you a hint #_COMMENT_IF(PROD)_ step_1.b.hint() """ Explanation: Part B Run the code cell below without changes. To pass this step, you need to have defined a pipeline in Part A that achieves lower MAE than the code above. You're encouraged to take your time here and try out many different approaches, to see how low you can get the MAE! (If your code does not pass, please amend the preprocessing steps and model in Part A.) End of explanation """ # Preprocessing of test data, fit model preds_test = ____ # Your code here # Check your answer step_2.check() #%%RM_IF(PROD)%% preds_test = my_pipeline.predict(X_test) step_2.assert_check_passed() # Lines below will give you a hint or solution code #_COMMENT_IF(PROD)_ step_2.hint() #_COMMENT_IF(PROD)_ step_2.solution() """ Explanation: Step 2: Generate test predictions Now, you'll use your trained model to generate predictions with the test data. End of explanation """ # Save test predictions to file output = pd.DataFrame({'Id': X_test.index, 'SalePrice': preds_test}) output.to_csv('submission.csv', index=False) """ Explanation: Run the next code cell without changes to save your results to a CSV file that can be submitted directly to the competition. End of explanation """
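As an optional sanity check beyond what the exercise asks for, the same pipeline can also be scored with cross-validation, which avoids relying on a single train/validation split. The sketch below assumes my_pipeline, X_full, my_cols and y from the cells above.

from sklearn.model_selection import cross_val_score

# Scikit-learn reports 'neg_mean_absolute_error', so flip the sign to get MAE
scores = -1 * cross_val_score(my_pipeline, X_full[my_cols], y,
                              cv=5, scoring='neg_mean_absolute_error')
print('MAE per fold:', scores)
print('Average MAE:', scores.mean())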
ES-DOC/esdoc-jupyterhub
notebooks/ipsl/cmip6/models/sandbox-3/aerosol.ipynb
gpl-3.0
# DO NOT EDIT ! from pyesdoc.ipython.model_topic import NotebookOutput # DO NOT EDIT ! DOC = NotebookOutput('cmip6', 'ipsl', 'sandbox-3', 'aerosol') """ Explanation: ES-DOC CMIP6 Model Properties - Aerosol MIP Era: CMIP6 Institute: IPSL Source ID: SANDBOX-3 Topic: Aerosol Sub-Topics: Transport, Emissions, Concentrations, Optical Radiative Properties, Model. Properties: 70 (38 required) Model descriptions: Model description details Initialized From: -- Notebook Help: Goto notebook help page Notebook Initialised: 2018-02-20 15:02:45 Document Setup IMPORTANT: to be executed each time you run the notebook End of explanation """ # Set as follows: DOC.set_author("name", "email") # TODO - please enter value(s) """ Explanation: Document Authors Set document authors End of explanation """ # Set as follows: DOC.set_contributor("name", "email") # TODO - please enter value(s) """ Explanation: Document Contributors Specify document contributors End of explanation """ # Set publication status: # 0=do not publish, 1=publish. DOC.set_publication_status(0) """ Explanation: Document Publication Specify document publication status End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.key_properties.model_overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: Document Table of Contents 1. Key Properties 2. Key Properties --&gt; Software Properties 3. Key Properties --&gt; Timestep Framework 4. Key Properties --&gt; Meteorological Forcings 5. Key Properties --&gt; Resolution 6. Key Properties --&gt; Tuning Applied 7. Transport 8. Emissions 9. Concentrations 10. Optical Radiative Properties 11. Optical Radiative Properties --&gt; Absorption 12. Optical Radiative Properties --&gt; Mixtures 13. Optical Radiative Properties --&gt; Impact Of H2o 14. Optical Radiative Properties --&gt; Radiative Scheme 15. Optical Radiative Properties --&gt; Cloud Interactions 16. Model 1. Key Properties Key properties of the aerosol model 1.1. Model Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of aerosol model. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.key_properties.model_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 1.2. Model Name Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Name of aerosol model code End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.key_properties.scheme_scope') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "troposhere" # "stratosphere" # "mesosphere" # "mesosphere" # "whole atmosphere" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 1.3. Scheme Scope Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Atmospheric domains covered by the aerosol model End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.key_properties.basic_approximations') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 1.4. Basic Approximations Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Basic approximations made in the aerosol model End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.aerosol.key_properties.prognostic_variables_form') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "3D mass/volume ratio for aerosols" # "3D number concenttration for aerosols" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 1.5. Prognostic Variables Form Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Prognostic variables in the aerosol model End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.key_properties.number_of_tracers') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 1.6. Number Of Tracers Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Number of tracers in the aerosol model End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.key_properties.family_approach') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 1.7. Family Approach Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Are aerosol calculations generalized into families of species? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.key_properties.software_properties.repository') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 2. Key Properties --&gt; Software Properties Software properties of aerosol code 2.1. Repository Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Location of code for this component. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.key_properties.software_properties.code_version') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 2.2. Code Version Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Code version identifier. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.key_properties.software_properties.code_languages') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 2.3. Code Languages Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Code language(s). End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Uses atmospheric chemistry time stepping" # "Specific timestepping (operator splitting)" # "Specific timestepping (integrated)" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 3. Key Properties --&gt; Timestep Framework Physical properties of seawater in ocean 3.1. Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Mathematical method deployed to solve the time evolution of the prognostic variables End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.split_operator_advection_timestep') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 3.2. 
Split Operator Advection Timestep Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Timestep for aerosol advection (in seconds) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.split_operator_physical_timestep') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 3.3. Split Operator Physical Timestep Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Timestep for aerosol physics (in seconds). End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.integrated_timestep') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 3.4. Integrated Timestep Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Timestep for the aerosol model (in seconds) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.integrated_scheme_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Explicit" # "Implicit" # "Semi-implicit" # "Semi-analytic" # "Impact solver" # "Back Euler" # "Newton Raphson" # "Rosenbrock" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 3.5. Integrated Scheme Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Specify the type of timestep scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.variables_3D') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 4. Key Properties --&gt; Meteorological Forcings ** 4.1. Variables 3D Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Three dimensionsal forcing variables, e.g. U, V, W, T, Q, P, conventive mass flux End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.variables_2D') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 4.2. Variables 2D Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Two dimensionsal forcing variables, e.g. land-sea mask definition End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.frequency') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 4.3. Frequency Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Frequency with which meteological forcings are applied (in seconds). End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.key_properties.resolution.name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 5. Key Properties --&gt; Resolution Resolution in the aersosol model grid 5.1. Name Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 This is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc. End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.aerosol.key_properties.resolution.canonical_horizontal_resolution') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 5.2. Canonical Horizontal Resolution Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Expression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.key_properties.resolution.number_of_horizontal_gridpoints') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 5.3. Number Of Horizontal Gridpoints Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Total number of horizontal (XY) points (or degrees of freedom) on computational grid. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.key_properties.resolution.number_of_vertical_levels') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 5.4. Number Of Vertical Levels Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Number of vertical levels resolved on computational grid. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.key_properties.resolution.is_adaptive_grid') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 5.5. Is Adaptive Grid Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Default is False. Set true if grid resolution changes during execution. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 6. Key Properties --&gt; Tuning Applied Tuning methodology for aerosol model 6.1. Description Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 General overview description of tuning: explain and motivate the main targets and metrics retained. &amp;Document the relative weight given to climate performance metrics versus process oriented metrics, &amp;and on the possible conflicts with parameterization level tuning. In particular describe any struggle &amp;with a parameter value that required pushing it to its limits to solve a particular model deficiency. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.global_mean_metrics_used') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 6.2. Global Mean Metrics Used Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N List set of metrics of the global mean state used in tuning model/component End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.regional_metrics_used') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 6.3. Regional Metrics Used Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N List of regional metrics of mean state used in tuning model/component End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.trend_metrics_used') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 6.4. Trend Metrics Used Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N List observed trend metrics used in tuning model/component End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.transport.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 7. Transport Aerosol transport 7.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of transport in atmosperic aerosol model End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.transport.scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Uses Atmospheric chemistry transport scheme" # "Specific transport scheme (eulerian)" # "Specific transport scheme (semi-lagrangian)" # "Specific transport scheme (eulerian and semi-lagrangian)" # "Specific transport scheme (lagrangian)" # TODO - please enter value(s) """ Explanation: 7.2. Scheme Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Method for aerosol transport modeling End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.transport.mass_conservation_scheme') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Uses Atmospheric chemistry transport scheme" # "Mass adjustment" # "Concentrations positivity" # "Gradients monotonicity" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 7.3. Mass Conservation Scheme Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Method used to ensure mass conservation. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.transport.convention') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Uses Atmospheric chemistry transport scheme" # "Convective fluxes connected to tracers" # "Vertical velocities connected to tracers" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 7.4. Convention Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Transport by convention End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.emissions.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 8. Emissions Atmospheric aerosol emissions 8.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of emissions in atmosperic aerosol model End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.emissions.method') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "None" # "Prescribed (climatology)" # "Prescribed CMIP6" # "Prescribed above surface" # "Interactive" # "Interactive above surface" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 8.2. Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Method used to define aerosol species (several methods allowed because the different species may not use the same method). End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.aerosol.emissions.sources') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Vegetation" # "Volcanos" # "Bare ground" # "Sea surface" # "Lightning" # "Fires" # "Aircraft" # "Anthropogenic" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 8.3. Sources Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Sources of the aerosol species are taken into account in the emissions scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.emissions.prescribed_climatology') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Constant" # "Interannual" # "Annual" # "Monthly" # "Daily" # TODO - please enter value(s) """ Explanation: 8.4. Prescribed Climatology Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Specify the climatology type for aerosol emissions End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.emissions.prescribed_climatology_emitted_species') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 8.5. Prescribed Climatology Emitted Species Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List of aerosol species emitted and prescribed via a climatology End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.emissions.prescribed_spatially_uniform_emitted_species') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 8.6. Prescribed Spatially Uniform Emitted Species Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List of aerosol species emitted and prescribed as spatially uniform End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.emissions.interactive_emitted_species') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 8.7. Interactive Emitted Species Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List of aerosol species emitted and specified via an interactive method End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.emissions.other_emitted_species') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 8.8. Other Emitted Species Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List of aerosol species emitted and specified via an &quot;other method&quot; End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.emissions.other_method_characteristics') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 8.9. Other Method Characteristics Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Characteristics of the &quot;other method&quot; used for aerosol emissions End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.concentrations.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 9. Concentrations Atmospheric aerosol concentrations 9.1. 
Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of concentrations in atmosperic aerosol model End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.concentrations.prescribed_lower_boundary') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 9.2. Prescribed Lower Boundary Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List of species prescribed at the lower boundary. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.concentrations.prescribed_upper_boundary') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 9.3. Prescribed Upper Boundary Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List of species prescribed at the upper boundary. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.concentrations.prescribed_fields_mmr') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 9.4. Prescribed Fields Mmr Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List of species prescribed as mass mixing ratios. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.concentrations.prescribed_fields_aod_plus_ccn') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 9.5. Prescribed Fields Aod Plus Ccn Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List of species prescribed as AOD plus CCNs. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.optical_radiative_properties.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 10. Optical Radiative Properties Aerosol optical and radiative properties 10.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of optical and radiative properties End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.black_carbon') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 11. Optical Radiative Properties --&gt; Absorption Absortion properties in aerosol scheme 11.1. Black Carbon Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Absorption mass coefficient of black carbon at 550nm (if non-absorbing enter 0) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.dust') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 11.2. Dust Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Absorption mass coefficient of dust at 550nm (if non-absorbing enter 0) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.organics') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 11.3. 
Organics Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Absorption mass coefficient of organics at 550nm (if non-absorbing enter 0) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.external') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 12. Optical Radiative Properties --&gt; Mixtures ** 12.1. External Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is there external mixing with respect to chemical composition? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.internal') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 12.2. Internal Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is there internal mixing with respect to chemical composition? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.mixing_rule') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 12.3. Mixing Rule Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If there is internal mixing with respect to chemical composition then indicate the mixinrg rule End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.size') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 13. Optical Radiative Properties --&gt; Impact Of H2o ** 13.1. Size Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Does H2O impact size? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.internal_mixture') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 13.2. Internal Mixture Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Does H2O impact aerosol internal mixture? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.external_mixture') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 13.3. External Mixture Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Does H2O impact aerosol external mixture? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 14. Optical Radiative Properties --&gt; Radiative Scheme Radiative scheme for aerosol 14.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of radiative scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.shortwave_bands') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 14.2. Shortwave Bands Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Number of shortwave bands End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.longwave_bands') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 14.3. Longwave Bands Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Number of longwave bands End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 15. Optical Radiative Properties --&gt; Cloud Interactions Aerosol-cloud interactions 15.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of aerosol-cloud interactions End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.twomey') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 15.2. Twomey Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is the Twomey effect included? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.twomey_minimum_ccn') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 15.3. Twomey Minimum Ccn Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If the Twomey effect is included, then what is the minimum CCN number? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.drizzle') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 15.4. Drizzle Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Does the scheme affect drizzle? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.cloud_lifetime') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 15.5. Cloud Lifetime Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Does the scheme affect cloud lifetime? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.longwave_bands') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 15.6. Longwave Bands Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Number of longwave bands End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.model.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 16. Model Aerosol model 16.1. 
Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of atmosperic aerosol model End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.model.processes') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Dry deposition" # "Sedimentation" # "Wet deposition (impaction scavenging)" # "Wet deposition (nucleation scavenging)" # "Coagulation" # "Oxidation (gas phase)" # "Oxidation (in cloud)" # "Condensation" # "Ageing" # "Advection (horizontal)" # "Advection (vertical)" # "Heterogeneous chemistry" # "Nucleation" # TODO - please enter value(s) """ Explanation: 16.2. Processes Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Processes included in the Aerosol model. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.model.coupling') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Radiation" # "Land surface" # "Heterogeneous chemistry" # "Clouds" # "Ocean" # "Cryosphere" # "Gas phase chemistry" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 16.3. Coupling Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Other model components coupled to the Aerosol model End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.model.gas_phase_precursors') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "DMS" # "SO2" # "Ammonia" # "Iodine" # "Terpene" # "Isoprene" # "VOC" # "NOx" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 16.4. Gas Phase Precursors Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N List of gas phase aerosol precursors. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.model.scheme_type') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Bulk" # "Modal" # "Bin" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 16.5. Scheme Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Type(s) of aerosol scheme used by the aerosols model (potentially multiple: some species may be covered by one type of aerosol scheme and other species covered by another type). End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.model.bulk_scheme_species') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Sulphate" # "Nitrate" # "Sea salt" # "Dust" # "Ice" # "Organic" # "Black carbon / soot" # "SOA (secondary organic aerosols)" # "POM (particulate organic matter)" # "Polar stratospheric ice" # "NAT (Nitric acid trihydrate)" # "NAD (Nitric acid dihydrate)" # "STS (supercooled ternary solution aerosol particule)" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 16.6. Bulk Scheme Species Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N List of species covered by the bulk scheme. End of explanation """
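Note: the cells in this section are unfilled templates. As a purely hypothetical illustration of how the pattern is meant to be completed (the text passed to DOC.set_value below is invented and does not describe any real model), a finished cell would look like this:

# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.overview')

# PROPERTY VALUE:
# Hypothetical example value -- replace with a description of the actual model.
DOC.set_value("Modal aerosol scheme with interactive sulphate, dust, "
              "sea salt, black carbon and organic species.")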
dipanjanS/text-analytics-with-python
New-Second-Edition/Ch08 - Semantic Analysis/Ch08b - Named Entity Recognition.ipynb
apache-2.0
text = """Three more countries have joined an “international grand committee” of parliaments, adding to calls for Facebook’s boss, Mark Zuckerberg, to give evidence on misinformation to the coalition. Brazil, Latvia and Singapore bring the total to eight different parliaments across the world, with plans to send representatives to London on 27 November with the intention of hearing from Zuckerberg. Since the Cambridge Analytica scandal broke, the Facebook chief has only appeared in front of two legislatures: the American Senate and House of Representatives, and the European parliament. Facebook has consistently rebuffed attempts from others, including the UK and Canadian parliaments, to hear from Zuckerberg. He added that an article in the New York Times on Thursday, in which the paper alleged a pattern of behaviour from Facebook to “delay, deny and deflect” negative news stories, “raises further questions about how recent data breaches were allegedly dealt with within Facebook.” """ print(text) import re text = re.sub(r'\n', '', text) text import spacy nlp = spacy.load('en') text_nlp = nlp(text) # print named entities in article ner_tagged = [(word.text, word.ent_type_) for word in text_nlp] print(ner_tagged) from spacy import displacy # visualize named entities displacy.render(text_nlp, style='ent', jupyter=True) """ Explanation: Named Entity Recognition In any text document, there are particular terms that represent specific entities that are more informative and have a unique context. These entities are known as named entities , which more specifically refer to terms that represent real-world objects like people, places, organizations, and so on, which are often denoted by proper names. Named entity recognition (NER) , also known as entity chunking/extraction , is a popular technique used in information extraction to identify and segment the named entities and classify or categorize them under various predefined classes. There are out of the box NER taggers available through popular libraries like nltk and spacy. Each library follows a different approach to solve the problem. NER with SpaCy End of explanation """ named_entities = [] temp_entity_name = '' temp_named_entity = None for term, tag in ner_tagged: if tag: temp_entity_name = ' '.join([temp_entity_name, term]).strip() temp_named_entity = (temp_entity_name, tag) else: if temp_named_entity: named_entities.append(temp_named_entity) temp_entity_name = '' temp_named_entity = None print(named_entities) from collections import Counter c = Counter([item[1] for item in named_entities]) c.most_common() """ Explanation: Spacy offers fast NER tagger based on a number of techniques. 
The exact algorithm hasn't been talked about in much detail but the documentation marks it as <font color=blue> "The exact algorithm is a pastiche of well-known methods, and is not currently described in any single publication " </font> The entities identified by spacy NER tagger are as shown in the following table (details here: spacy_documentation) End of explanation """ import os from nltk.tag import StanfordNERTagger JAVA_PATH = r'C:\Program Files\Java\jre1.8.0_192\bin\java.exe' os.environ['JAVAHOME'] = JAVA_PATH STANFORD_CLASSIFIER_PATH = 'E:/stanford/stanford-ner-2014-08-27/classifiers/english.all.3class.distsim.crf.ser.gz' STANFORD_NER_JAR_PATH = 'E:/stanford/stanford-ner-2014-08-27/stanford-ner.jar' sn = StanfordNERTagger(STANFORD_CLASSIFIER_PATH, path_to_jar=STANFORD_NER_JAR_PATH) sn text_enc = text.encode('ascii', errors='ignore').decode('utf-8') ner_tagged = sn.tag(text_enc.split()) print(ner_tagged) named_entities = [] temp_entity_name = '' temp_named_entity = None for term, tag in ner_tagged: if tag != 'O': temp_entity_name = ' '.join([temp_entity_name, term]).strip() temp_named_entity = (temp_entity_name, tag) else: if temp_named_entity: named_entities.append(temp_named_entity) temp_entity_name = '' temp_named_entity = None print(named_entities) c = Counter([item[1] for item in named_entities]) c.most_common() """ Explanation: NER with Stanford NLP Stanford’s Named Entity Recognizer is based on an implementation of linear chain Conditional Random Field (CRF) sequence models. Prerequisites: Download the official Stanford NER Tagger from here, which seems to work quite well. You can try out a later version by going to this website This model is only trained on instances of PERSON, ORGANIZATION and LOCATION types. The model is exposed through nltk wrappers. End of explanation """ from nltk.parse import CoreNLPParser ner_tagger = CoreNLPParser(url='http://localhost:9000', tagtype='ner') ner_tagger import nltk tags = list(ner_tagger.raw_tag_sents(nltk.sent_tokenize(text))) tags = [sublist[0] for sublist in tags] tags = [word_tag for sublist in tags for word_tag in sublist] print(tags) named_entities = [] temp_entity_name = '' temp_named_entity = None for term, tag in tags: if tag != 'O': temp_entity_name = ' '.join([temp_entity_name, term]).strip() temp_named_entity = (temp_entity_name, tag) else: if temp_named_entity: named_entities.append(temp_named_entity) temp_entity_name = '' temp_named_entity = None print(named_entities) c = Counter([item[1] for item in named_entities]) c.most_common() """ Explanation: NER with Stanford CoreNLP NLTK is slowly deprecating the old Stanford Parsers in favor of the more active Stanford Core NLP Project. It might even get removed after nltk version 3.4 so best to stay updated. Details: https://github.com/nltk/nltk/issues/1839 Step by Step Tutorial here: https://github.com/nltk/nltk/wiki/Stanford-CoreNLP-API-in-NLTK Sadly a lot of things have changed in the process so we need to do some extra effort to make it work! 
Get CoreNLP from here After you download, go to the folder and spin up a terminal and start the Core NLP Server locally E:\&gt; java -mx4g -cp "*" edu.stanford.nlp.pipeline.StanfordCoreNLPServer -preload tokenize,ssplit,pos,lemma,ner,parse,depparse -status_port 9000 -port 9000 -timeout 15000 If it runs successfully you should see the following messages on the terminal E:\stanford\stanford-corenlp-full-2018-02-27&gt;java -mx4g -cp "*" edu.stanford.nlp.pipeline.StanfordCoreNLPServer -preload tokenize,ssplit,pos,lemma,ner,parse,depparse -status_port 9000 -port 9000 -timeout 15000 [main] INFO CoreNLP - --- StanfordCoreNLPServer#main() called --- [main] INFO CoreNLP - setting default constituency parser [main] INFO CoreNLP - warning: cannot find edu/stanford/nlp/models/srparser/englishSR.ser.gz [main] INFO CoreNLP - using: edu/stanford/nlp/models/lexparser/englishPCFG.ser.gz instead [main] INFO CoreNLP - to use shift reduce parser download English models jar from: [main] INFO CoreNLP - http://stanfordnlp.github.io/CoreNLP/download.html [main] INFO CoreNLP - Threads: 4 [main] INFO edu.stanford.nlp.pipeline.StanfordCoreNLP - Adding annotator tokenize [main] INFO edu.stanford.nlp.pipeline.TokenizerAnnotator - No tokenizer type provided. Defaulting to PTBTokenizer. [main] INFO edu.stanford.nlp.pipeline.StanfordCoreNLP - Adding annotator ssplit [main] INFO edu.stanford.nlp.pipeline.StanfordCoreNLP - Adding annotator pos [main] INFO edu.stanford.nlp.tagger.maxent.MaxentTagger - Loading POS tagger from edu/stanford/nlp/models/pos-tagger/english-left3words/english-left3words-distsim.tagger ... done [1.4 sec]. [main] INFO edu.stanford.nlp.pipeline.StanfordCoreNLP - Adding annotator lemma [main] INFO edu.stanford.nlp.pipeline.StanfordCoreNLP - Adding annotator ner [main] INFO edu.stanford.nlp.ie.AbstractSequenceClassifier - Loading classifier from edu/stanford/nlp/models/ner/english.all.3class.distsim.crf.ser.gz ... done [1.9 sec]. [main] INFO edu.stanford.nlp.ie.AbstractSequenceClassifier - Loading classifier from edu/stanford/nlp/models/ner/english.muc.7class.distsim.crf.ser.gz ... done [2.0 sec]. [main] INFO edu.stanford.nlp.ie.AbstractSequenceClassifier - Loading classifier from edu/stanford/nlp/models/ner/english.conll.4class.distsim.crf.ser.gz ... done [0.8 sec]. [main] INFO edu.stanford.nlp.time.JollyDayHolidays - Initializing JollyDayHoliday for SUTime from classpath edu/stanford/nlp/models/sutime/jollyday/Holidays_sutime.xml as sutime.binder.1. [main] INFO edu.stanford.nlp.time.TimeExpressionExtractorImpl - Using following SUTime rules: edu/stanford/nlp/models/sutime/defs.sutime.txt,edu/stanford/nlp/models/sutime/english.sutime.txt,edu/stanford/nlp/models/sutime/english.holidays.sutime.txt [main] INFO edu.stanford.nlp.pipeline.TokensRegexNERAnnotator - TokensRegexNERAnnotator ner.fine.regexner: Read 580641 unique entries out of 581790 from edu/stanford/nlp/models/kbp/regexner_caseless.tab, 0 TokensRegex patterns. [main] INFO edu.stanford.nlp.pipeline.TokensRegexNERAnnotator - TokensRegexNERAnnotator ner.fine.regexner: Read 4857 unique entries out of 4868 from edu/stanford/nlp/models/kbp/regexner_cased.tab, 0 TokensRegex patterns. 
[main] INFO edu.stanford.nlp.pipeline.TokensRegexNERAnnotator - TokensRegexNERAnnotator ner.fine.regexner: Read 585498 unique entries from 2 files [main] INFO edu.stanford.nlp.pipeline.StanfordCoreNLP - Adding annotator parse [main] INFO edu.stanford.nlp.parser.common.ParserGrammar - Loading parser from serialized file edu/stanford/nlp/models/lexparser/englishPCFG.ser.gz ... done [4.6 sec]. [main] INFO edu.stanford.nlp.pipeline.StanfordCoreNLP - Adding annotator depparse [main] INFO edu.stanford.nlp.parser.nndep.DependencyParser - Loading depparse model: edu/stanford/nlp/models/parser/nndep/english_UD.gz ... [main] INFO edu.stanford.nlp.parser.nndep.Classifier - PreComputed 99996, Elapsed Time: 22.43 (s) [main] INFO edu.stanford.nlp.parser.nndep.DependencyParser - Initializing dependency parser ... done [24.4 sec]. [main] INFO CoreNLP - Starting server... [main] INFO CoreNLP - StanfordCoreNLPServer listening at /0:0:0:0:0:0:0:0:9000 End of explanation """
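The same entity-grouping loop was written out three times above. A small helper function (a sketch; outside_tag is a parameter introduced here because spaCy marks non-entity tokens with an empty string while the Stanford taggers use 'O') keeps the notebook DRY:

from collections import Counter

def group_named_entities(tagged_tokens, outside_tag=''):
    # Collapse consecutive (token, tag) pairs into (entity_text, tag) tuples,
    # using the same grouping logic as the cells above, plus a final flush.
    named_entities = []
    current_tokens, current_tag = [], None
    for term, tag in tagged_tokens:
        if tag and tag != outside_tag:
            current_tokens.append(term)
            current_tag = tag
        elif current_tokens:
            named_entities.append((' '.join(current_tokens), current_tag))
            current_tokens, current_tag = [], None
    if current_tokens:  # flush a trailing entity at the end of the sequence
        named_entities.append((' '.join(current_tokens), current_tag))
    return named_entities

# Usage with the outputs produced earlier:
# group_named_entities(ner_tagged)        # spaCy output: non-entities tagged ''
# group_named_entities(ner_tagged, 'O')   # Stanford NER output: non-entities tagged 'O'
# Counter(tag for _, tag in group_named_entities(tags, 'O')).most_common()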
4dsolutions/Python5
SUBPLOTS_PYT_DS_SAISOFT.ipynb
mit
%matplotlib inline import matplotlib.pyplot as plt plt.style.use('seaborn-white') import numpy as np """ Explanation: PYT-DS: Subplots in Matplotlib The VanderPlas Syllabus is one of the more useful and core to this course in many ways. Jake VanderPlas has been a key player in helping to promote open source. He's an astronomer by training. Lets remember that the Space Telescope Institute, home for the Hubble space telescope, in terms of management and data wrangling, has been a big investor in Python technologies, as well as matplotlib. It was my great privilege to deliver a Python training at SScI in behalf of Holdenweb, a Steve Holden company. Links: HTML The Notebooks End of explanation """ fig = plt.figure("main", figsize=(5,5)) # name or int id optional, as is figsize ax1 = fig.add_axes([0.1, 0.5, 0.8, 0.4], xticklabels=[], ylim=(-1.2, 1.2)) # no x axis tick marks ax2 = fig.add_axes([0.1, 0.1, 0.8, 0.4], ylim=(-1.2, 1.2)) x = np.linspace(0, 10) ax1.plot(np.sin(x)) _ = ax2.plot(np.cos(x)) # assign to dummy variable to suppress text output plt.figure? """ Explanation: The first distinction to make is between Figure, which is the outer frame of a canvas, and the rectangular XY grids or coordinate systems we place within the figure. XY grid objects are known as "axes" (plural) and most of the attributes we associate with plots are actually connected to axes. What may be confusing to the new user is that plt (pyplot) keeps track of which axes we're working on, so we have ways of communicating with axes through plt which obscures the direct connection twixt axes and their attributes. Below we avoid using plt completely except for initializing our figure, and manage to get two sets of axes (two plots inside the same figure). End of explanation """ for i in range(1, 7): plt.subplot(2, 3, i) plt.text(0.5, 0.5, str((2, 3, i)), fontsize=18, ha='center') plt.xticks([]) # get rid of tickmarks on x axis plt.yticks([]) # get rid of tickmarks on y axis """ Explanation: Here's subplot in action, creating axes automatically based on how many rows and columns we specify, followed by a sequence number i, 1 through however many (in this case six). Notice how plt is keeping track of which subplot axes are current, and we talk to said axes through plt. End of explanation """ for i in range(1, 7): plt.subplot(2, 3, i) plt.text(0.5, 0.5, str((2, 3, i)), fontsize=18, ha='center') # synonymous. gca means 'get current axes' plt.gca().get_yaxis().set_visible(False) plt.gca().axes.get_xaxis().set_visible(False) # axes optional, self referential plt.gcf().subplots_adjust(hspace=0.1, wspace=0.1) # get current figure, adjust spacing """ Explanation: Here we're talking to the axes objects more directly by calling "get current axes". Somewhat confusingly, the instances return have an "axes" attribute which points to the same instance, a wrinkle I explore below. Note the slight difference between the last two lines. End of explanation """ from PIL import Image # Image is a module! plt.subplot(1, 2, 1) plt.xticks([]) # get rid of tickmarks on x axis plt.yticks([]) # get rid of tickmarks on y axis im = Image.open("face.png") plt.imshow(im) plt.subplot(1, 2, 2) plt.xticks([]) # get rid of tickmarks on x axis plt.yticks([]) # get rid of tickmarks on y axis # rotate 180 degrees rotated = im.transpose(Image.ROTATE_180) plt.imshow(rotated) _ = plt.gcf().tight_layout() """ Explanation: LAB: You might need to install pillow to get the code cells to work. 
Pillow is a Python 3 fork of PIL, the Python Imaging Library, still imported using that name. conda install pillow from the most compatible repo for whatever Anaconda environment you're using would be one way to get it. Using pip would be another. The face.png binary is in your course folder for this evening. Question: Might we use axes to show images? Answer: Absolutely, as matplotlib axes have an imshow method.
End of explanation
"""
# plt.style.use('classic')
vegetables = ["cucumber", "tomato", "lettuce", "asparagus", "potato", "wheat", "barley"]
farmers = ["Farmer Joe", "Upland Bros.", "Smith Gardening", "Agrifun", "Organiculture", "BioGoods Ltd.", "Cornylee Corp."]
harvest = np.array([[0.8, 2.4, 2.5, 3.9, 0.0, 4.0, 0.0],
                    [2.4, 0.0, 4.0, 1.0, 2.7, 0.0, 0.0],
                    [1.1, 2.4, 0.8, 4.3, 1.9, 4.4, 0.0],
                    [0.6, 0.0, 0.3, 0.0, 3.1, 0.0, 0.0],
                    [0.7, 1.7, 0.6, 2.6, 2.2, 6.2, 0.0],
                    [1.3, 1.2, 0.0, 0.0, 0.0, 3.2, 5.1],
                    [0.1, 2.0, 0.0, 1.4, 0.0, 1.9, 6.3]])
fig, ax = plt.subplots()
im = ax.imshow(harvest)
# We want to show all ticks...
ax.set_xticks(np.arange(len(farmers)))
ax.set_yticks(np.arange(len(vegetables)))
# ... and label them with the respective list entries
ax.set_xticklabels(farmers)
ax.set_yticklabels(vegetables)
# Rotate the tick labels and set their alignment.
plt.setp(ax.get_xticklabels(), rotation=45, ha="right", rotation_mode="anchor")
# Loop over data dimensions and create text annotations.
for i in range(len(vegetables)):
    for j in range(len(farmers)):
        text = ax.text(j, i, harvest[i, j], ha="center", va="center", color="y")
ax.set_title("Harvest of local farmers (in tons/year)")
fig.tight_layout()
"""
Explanation: The script below, borrowed from the matplotlib gallery, shows another common idiom for getting a figure and axes pair. Call plt.subplots with no arguments. Then talk to ax directly, for the most part. We're also rotating the x tickmark labels by 45 degrees. Fancy!
Uncommenting the use('classic') command up top makes a huge difference in the result. I'm still trying to figure that out.
End of explanation
"""
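One more idiom worth knowing, since this notebook is about subplots: plt.subplots can also be called with a grid shape, returning the figure together with an array of axes objects. This is a minimal sketch (the sine curves are just placeholder data) of building the same 2x3 grid as above without plt keeping track of the "current" axes for us:

fig, axs = plt.subplots(2, 3, figsize=(9, 5))
x = np.linspace(0, 10)
for i, ax in enumerate(axs.flat):   # axs is a 2x3 numpy array of Axes
    ax.plot(np.sin(x + i))          # talk to each Axes object directly
    ax.set_xticks([])
    ax.set_yticks([])
    ax.set_title(str(i))
fig.tight_layout()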
google/data-pills
pills/GA/[DATA_PILL]_[GA360]_Conversion_Blockers.ipynb
apache-2.0
# Import all necessary libs from google.colab import auth import pandas as pd import numpy as np from matplotlib import pyplot as plt from IPython.display import display, HTML # Authenticate the user to query datasets in Google BigQuery auth.authenticate_user() %matplotlib inline """ Explanation: Copyright 2021 Google LLC Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at https://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. Important This content are intended for educational and informational purposes only. Conversion Blockers Analysis <br> In this analysis we will be looking into main user characteristics captured by Google Analytics which can affect website UX and how they impact e-commerce transaction rate. <br> Key notes / assumptions <br> For the following analysis, we will call specific data properties (i.e. Browser version) a FEATURE, and each value of a feature (i.e. <i>Chrome V10.1</i>), a LABEL Step 1: Setup Install all dependencies and authorize bigQuery access End of explanation """ #@title Define the data source in BigQuery: project_id = 'bigquery-public-data' #@param dataset_name = 'google_analytics_sample' #@param table_name = 'ga_sessions_*'#@param start_date = '2014-10-01'#@param {type:"date"} end_date = '2019-12-12'#@param{type:"date"} billing_project_id = 'my-project' #@param """ Explanation: Define analysis parameters End of explanation """ #assemble dynamic content dictionary dc = {} dc['project_id'] = project_id dc['dataset_name'] = dataset_name dc['table_name'] = table_name dc['start_date'] = start_date.replace('-','') dc['end_date'] = end_date.replace('-','') #render final query function def render_final_query(dc, display = False): q1 = ''' #fetch # of transaction, sessions and transaction rate for each feature value WITH t0 AS (SELECT {feature} AS feature, SUM(IFNULL(sessions.totals.transactions, 0)) AS transactions, COUNT(sessions.visitStartTime) AS count_sessions, SUM(IFNULL(sessions.totals.transactions, 0))/COUNT(sessions.visitStartTime) AS transaction_rate FROM `{project_id}.{dataset_name}.{table_name}` as sessions, UNNEST(hits) AS hits WHERE hits.hitNumber = 1 AND date BETWEEN '{start_date}' AND '{end_date}' GROUP BY 1 ), #calculate % of total sessions of each feature value and global (avg) transaction rate t1 AS ( SELECT *, SUM(count_sessions) OVER() AS total_sessions, SUM(transactions) OVER() AS total_transaction, AVG(transaction_rate) OVER() AS average_transaction_rate, count_sessions/SUM(count_sessions) OVER() AS sessions_percentage FROM t0 ORDER BY transaction_rate ) #limit results to only values that represent over 2% of all sessions #and, for remaining lines evaluate if they are bellow stdev limit SELECT *, IF(transaction_rate < average_transaction_rate * 0.2, true, false) AS bellow_limit from t1 WHERE sessions_percentage > 0.01 '''.format(**dc) if display: print('Final BigQuery SQL:') print(q1) return q1 #run bigQuery query function def run_big_query(q): return pd.io.gbq.read_gbq(q, project_id=billing_project_id, verbose=False, dialect='standard') """ Explanation: Step 2: Create analysis building blocks On the following coding blocks, we will 
create functions that will allow us to easily run the analysis multiple times, one for each feature Create query builder function based on tamplate End of explanation """ def plot_graph(df, title): #define column colors: colors = [] for index, row in df.iterrows(): bellow_limit = df['bellow_limit'][index] if(bellow_limit): colors.append('r') #set color to red else: colors.append('b') #set color to blue # Specify this list of colors as the `color` option to `plot`. df.plot(x='feature', y='transaction_rate', kind='bar', stacked=False, color = colors, title = title, yticks=[]) """ Explanation: Create function to Display Query results in bar chart End of explanation """ #uncomment each line to enable that analysis features = [ ("Operating System","CONCAT(sessions.device.operatingSystem, ' ', sessions.device.operatingSystemVersion)"), ("Browser","CONCAT( sessions.device.browser, ' ', sessions.device.browserversion)"), ("Language","sessions.device.language"), #("Device Type","sessions.device.deviceCategory"), #("Country","sessions.geoNetwork.country"), #("Region","sessions.geoNetwork.region"), #("City","sessions.geoNetwork.city"), #("Landing Page","CONCAT(hits.page.hostname, hits.page.pagePath)"), #("Screen Pixels (e5)","IF(ARRAY_LENGTH(SPLIT(sessions.device.screenResolution,'x')) = 2,ROUND(CAST(SPLIT(sessions.device.screenResolution,'x')[OFFSET(0)] AS INT64) * CAST(SPLIT(sessions.device.screenResolution,'x')[OFFSET(1)] AS INT64)/100000), Null)") ] #for each feature Tuple for item in features: #define custom values for SQL Query generation dc['feature'] = item[1] #generate sql q = render_final_query(dc, display=True) # REMOVE LINE BELLOW to execute query (this might result in bigQuery costs) #run query in BQ df = run_big_query(q) #print query results print("Results for " + item[0]) display(df) print(" ") #plot graph plot_graph(df, item[0]) """ Explanation: Step 3: Run entire pipeline for each feature and plot results End of explanation """
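Each entry in the features list above is just a (label, SQL expression) pair, so extending the analysis is a one-line change. As a hypothetical example (the field name below comes from the standard GA360 BigQuery export schema -- verify it exists in your own export before running):

# Hypothetical extra feature: traffic source of the session.
features.append(("Traffic Source", "sessions.trafficSource.source"))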
linamnt/studyGroup
lessons/python/intro/intro_data_analysis_AE.ipynb
apache-2.0
4 + 4 4**2 # 4 to the power of 2 3*5; # semi-colon suppresses output """ Explanation: Data analysis in Python Contributors: This notebook combines two notebooks (with minor modifications by Amanda Easson) from previous UofT Coders sessions: Intro Python (authors: Madeleine Bonsma-Fisher, heavily borrowing from Lina Tran and Charles Zhu): https://github.com/UofTCoders/studyGroup/blob/gh-pages/lessons/python/intro/IntroPython-MB.ipynb Pandas (authors: Joel Ostblom and Luke Johnston): https://github.com/UofTCoders/studyGroup/blob/gh-pages/lessons/python/intro-data-analysis/from-spreadsheets-to-pandas-extended.ipynb Additional numpy content was added by Amanda Easson, primarily based off of the Python Data Science Handbook by Jake VanderPlas: https://jakevdp.github.io/PythonDataScienceHandbook/ Table of Contents Preamble Motivation Conceptual understanding Programming basics Variable assignment Assigning multiple values to variables Lists Dictionaries Comparisons Indexing and slicing If statements For loops List comprehensions Functions Using functions Packages How to get help Data analysis with pandas Subsetting data numpy: numerical python Data visualization Plotting with pandas Plotting with seaborn Resources to learn more Preamble <a class="anchor" id="preamble"></a> This is a brief tutorial for anyone who is interested in how Python can facilitate their data analyses. The tutorial is aimed at people who currently use a spreadsheet program as their primary data analyses tool, and that have no previous programming experience. If you want to code along, a simple way to install Python is to follow these instructions, but I encourage you to just read through this tutorial on a conceptual level at first. Motivation <a class="anchor" id="motivation"></a> Spreadsheet software is great for viewing and entering small data sets and creating simple visualizations fast. However, it can be tricky to create publication-ready figures, automate reproducible analysis workflows, perform advanced calculations, and clean data sets robustly. Even when using a spreadsheet program to record data, it is often beneficial to pick up some basic programming skills to facilitate the analyses of that data. Conceptual understanding <a class="anchor" id="conceptual-understanding"></a> Spreadsheet programs, such as MS Excel and Libre/OpenOffice, have their functionality sectioned into menus. In programming languages, all the functionality is accessed by typing the name of functions directly instead of finding the functions in the menu hierarchy. Initially this might seem intimidating and non-intuitive for people who are used to the menu-driven approach. However, think of it as learning a new natural language. Initially, you will slowly string together sentences by looking up individual words in the dictionary. As you improve, you will only reference the dictionary occasionally since you already know most of the words. Practicing the language whenever you can, and receiving immediate feedback, is often the fastest way to learn. Sitting at home trying to learn every word in the dictionary before engaging in conversation, is destined to kill the joy of learning any language, natural or formal. In my experience, learning programming is similar to learning a foreign language, and you will often learn the most from just trying to do something and receiving feedback from the computer! 
When there is something you can't wrap you head around, or if you are actively trying to find a new way of expressing a thought, then look it up, just as you would with a natural language. Programming basics <a class="anchor" id="programming-basics"></a> Just like in spreadsheet software, the basic installation of Python includes fundamental math operations, e.g. adding numbers together: End of explanation """ a = 5 a * 2 my_variable_name = 4 a - my_variable_name """ Explanation: Variable assignment <a class="anchor" id="variable-assignment"></a> It is possible to assign values to variables: End of explanation """ b = 'Hello' c = 'universe' b + c """ Explanation: Variables can also hold more data types than just numbers, for example a sequence of characters surrounded by single or double quotation marks (called a string). In Python, it is intuitive to append string by adding them together: End of explanation """ b + ' ' + c print(b) print(type(b)) """ Explanation: A space can be added to separate the words: End of explanation """ list_of_things = [1, 55, 'Hi', ['apple', 'orange', 'banana']] list_of_things list_of_things.append('Toronto') list_of_things list_of_things.remove(55) list_of_things print(list_of_things + list_of_things) """ Explanation: Assigning multiple values to variables <a class="anchor" id="assigning-multiple-values-to-variables"></a> Lists <a class="anchor" id="lists"></a> Variables can also store more than one value, for example in a list of values: End of explanation """ tup1 = (1,5,2,1) print(tup1) tup1[0] = 3 """ Explanation: A "tuple" is an immutable list (nothing can be added or subtracted) whose elements also can't be reassigned. End of explanation """ fruit_colors = {'tangerine':'orange', 'banana':'yellow', 'apple':['green', 'red']} fruit_colors['banana'] fruit_colors['apple'] list(fruit_colors.keys()) participant_info = {'ID': 50964, 'age': 15.25, 'sex': 'F', 'IQ': 105, 'medications': None, 'questionnaires': ['Vineland', 'CBCL', 'SRS']} print(participant_info['ID']) print(participant_info['questionnaires']) """ Explanation: Dictionaries <a class="anchor" id="dictionaries"></a> In a dictionary, values are paired with names, called keys. These are not stored in any specific order, and are therefore accessed by the key name rather than a number. End of explanation """ 1 == 1 1 == 0 1 > 0 'hey' == 'Hey' 'hey' == 'hey' a >= 2 * 2 # we defined a = 5 above """ Explanation: Comparisons <a class="anchor" id="comparisons"></a> Python can compare values and assess whether the expression is true or false. End of explanation """ #indexing in Python starts at 0, not 1 (like in Matlab or Oracle) fruits = ['apples', 'oranges', 'bananas'] print(fruits[0]) print(fruits[1]) # strings are just a particular kind of list s = 'This_is_a_string.' print(s[0]) # use -1 to get the last element print(fruits[-1]) print(fruits[-2]) # to get a slice of the string use the : symbol # s[0:x] will print up to the xth element, or the element with index x-1 print(s[0:4]) print(s[:4]) s = 'This_is_a_string.' 
print(s[5:7]) print(s[7:]) print(s[7:len(s)]) """ Explanation: Indexing and Slicing <a class="anchor" id="index_slice"></a> End of explanation """ s2 = [19034, 23] # You will always need to start with an 'if' line # You do not need the elif or else statements # You can have as many elif statements as needed if type(s2) == str: print('s2 is a string') elif type(s2) == int: print('s2 is an integer') elif type(s2) == float: print('s2 is a float') else: print('s2 is not a string, integer or float') """ Explanation: If Statements <a class="anchor" id="if_statements"></a> End of explanation """ nums = [23, 56, 1, 10, 15, 0] # in this case, 'n' is a dummy variable that will be used by the for loop # you do not need to assign it ahead of time for n in nums: if n%2 == 0: print('even') else: print('odd') # for loops can iterate over strings as well vowels = 'aeiou' for vowel in vowels: print(vowel) """ Explanation: For Loops <a class="anchor" id="for_loops"></a> End of explanation """ my_colours = ['pink', 'purple', 'blue', 'green', 'orange'] my_light_colours = ['light ' + colour for colour in my_colours] print(my_light_colours) """ Explanation: List comprehensions <a class="anchor" id="list_comprehensions"></a> Format: mylist = [altered_thing for thing in list_of_things] End of explanation """ # always use descriptive naming for functions, variables, arguments etc. def sum_of_squares(num1, num2): """ Input: two numbers Output: the sum of the squares of the two numbers """ ss = num1**2 + num2**2 return(ss) # The stuff inside """ """ is called the "docstring". It can be accessed by typing help(sum_of_squares) help(sum_of_squares) print(sum_of_squares(4,2)) # the return statement in a function allows us to store the output of a function call in a variable for later use ss1 = sum_of_squares(5,5) print(ss1) """ Explanation: Functions <a class="anchor" id="functions"></a> End of explanation """ import numpy as np """ Explanation: When we start working with spreadsheet-like data, we will see that these comparisons are really useful to extract subsets of data, for example observations from a certain time period. Using functions <a class="anchor" id="using-functions"></a> To access additional functionality in a spreadsheet program, you need to click the menu and select the tool you want to use. All charts are in one menu, text layout tools in another, data analyses tools in a third, and so on. Programming languages such as Python have so many tools and functions so that they would not fit in a menu. Instead of clicking File -&gt; Open and chose the file, you would type something similar to file.open('&lt;filename&gt;') in a programming language. Don't worry if you forget the exact expression, it is often enough to just type the few first letters and then hit <kbd>Tab</kbd>, to show the available options, more on that later. Packages <a class="anchor" id="packages"></a> Since there are so many esoteric tools and functions available in Python, it is unnecessary to include all of them with the basics that are loaded by default when you start the programming language (it would be as if your new phone came with every single app preinstalled). Instead, more advanced functionality is grouped into separate packages, which can be accessed by typing import &lt;package_name&gt; in Python. 
You can think of this as telling the program which menu items you want to use (similar to how Excel hides the Developer menu by default since most people rarely use it and you need activate it in the settings if you want to access its functionality). Some packages needs to be downloaded before they can be used, just like downloading an addon to a browser or mobile phone. Just like in spreadsheet software menus, there are lots of different tools within each Python package. For example, if I want to use numerical Python functions, I can import the numerical python module, numpy. I can then access any function by writing numpy.&lt;function_name&gt;. If the package name is long, it is common to import the package "as" another name, like a nickname. For instance, numpy is often imported as np. All you have to do is type import numpy as np. Thiis makes it faster to type and saves a bit of work and also makes the code a bit easier to read. End of explanation """ # use a package by importing it, you can also give it a "nickname", in this case 'np' import numpy as np np.mean? array = np.arange(15) lst = list(range(15)) print(array) print(lst) print(type(array)) print(type(lst)) # numpy arrays allow for vectorized calculations print(array*2) print(lst*2) array = array.reshape([5,3]) print(array) # for each row, take the mean across the 3 columns (using axis=1) array.mean(axis=1) # max value in each column array.max(axis=0) list2array = np.array(lst) type(list2array) list2array array2d = np.array([range(i, i + 3) for i in [2, 4, 6]]) array2d my_zeros = np.zeros((10,1), dtype=int) my_zeros np.ones((5,2), dtype=float) np.full((3,3), np.pi) # Create an array filled with a linear sequence: start, stop, step # up to stop value - 1 # similar to "range" print(np.arange(0, 20, 2)) print(list(range(0, 20, 2))) # linspace: evenly spaced values: start, stop, number of values # up to stop value print(np.linspace(0, 20.2, 11)) # uniformly distributed random numbers between 0-1 np.random.random((3,3)) # normal distribution # mean, standard deviation, array size np.random.seed(0) my_array = np.random.normal(0, 1, (10000,1)) print(my_array.mean()) print(my_array.std()) iq = np.random.normal(100, 15, (30,1)) print(np.mean(iq)) print(np.median(iq)) print(np.max(iq)) print(np.min(iq)) print('\n') print(iq.mean()) print(iq.max()) print(iq.median()) x = np.random.normal(100,15,(3,4,5)) print("# dimensions:", x.ndim) print("Shape:", x.shape) print("Size:", x.size) # indexing x[0,0,:] # change elements x[0,0,2] = 108 x[0,0,2] x = np.arange(10) print(x) print("First five elements:", x[:5]) # first five elements print("elements after index 5:", x[5:]) # elements after index 5 print("every other element:", x[::4]) # every other element print("every other element, starting at index 3:", x[3::2]) # every other element, starting at index 3 print("all elements, reversed:",x[::-1]) # all elements, reversed # copying arrays print(x) x2 = x x2[0] = 100 print(x2) print(x) x3 = np.arange(10) x4 = x3.copy() x4[0] = 100 print(x4) print(x3) # concatenate arrays x1 = np.array([1,2,3]) x2 = np.array([4,5,6]) x3 = np.concatenate([x1,x2]) print(x3) # or stack vertically or horizontally # make sure dimensions agree x4 = np.vstack([x1,x2]) print(x4) x5 = np.hstack([x1,x2]) print(x5) # numpy arithmetic x = np.arange(5) print(x) print(x+5) print(x*2) print(x/2) print(x//2) # "floor" division, i.e. 
round down to nearest integer # alternatively: print('\n') print(np.add(x,5)) print(np.multiply(x,2)) print(np.divide(x,2)) print(np.floor_divide(x,2)) print(np.power(x,3)) print(np.mod(x,2)) # add all elements of x print(np.add.reduce(x)) print(np.sum(x)) # add all elements of x cumulatively print(np.add.accumulate(x)) # broadcasting a = np.eye(5) b = np.ones((5,1)) b = np.ones((5,5)) print(a) print(b) print(a+b) # broadcasting: practical example: mean-centering data data = np.random.normal(100, 15, (30, 6)) print(data.shape) data_mean = data.mean(axis=0) print(data_mean.shape) data_mean_centered = data - data_mean data_mean_centered.mean(axis=0) """ Explanation: How to get help <a class="anchor" id="how-to-get-help"></a> Once you start out using Python, you don't know what functions are availble within each package. Luckily, in the Jupyter Notebook, you can type numpy.<kbd>Tab</kbd> (that is numpy + period + tab-key) and a small menu will pop up that shows you all the available functions in that module. This is analogous to clicking a 'numpy-menu' and then going through the list of functions. As I mentioned earlier, there are plenty of available functions and it can be helpful to filter the menu by typing the initial letters of the function name. To get more info on the function you want to use, you can type out the full name and then press <kbd>Shift + Tab</kbd> once to bring up a help dialogue and again to expand that dialogue. If you need a more extensive help dialog, you can click <kbd>Shift + Tab</kbd> four times or just type ? after the function name. numpy: numerical python <a class="anchor" id="numpy"></a> End of explanation """ import pandas as pd # this will read in a csv file into a pandas DataFrame # this csv has data of country spending on healthcare #data = pd.read_csv('health.csv', header=0, index_col=0, encoding="ISO-8859-1") # load several datasets data = pd.read_csv('https://tinyurl.com/uoftcode-health', header=0, index_col=0, encoding="ISO-8859-1") iris = pd.read_csv('https://raw.githubusercontent.com/mwaskom/seaborn-data/master/iris.csv') # the .head() function will allow us to look at first few lines of the dataframe data.head(10) # default is 5 rows # by default, rows are indicated first, followed by the column: [row, column] data.loc['Canada', '2008'] # you can also slice a dataframe data.loc['Canada':'Chile', '1999':'2001'] %matplotlib inline import matplotlib.pyplot as plt # the .plot() function will create a simple graph for you to quickly visualize your data data.loc['Denmark'].plot() data.loc['Canada'].plot() data.loc['India'].plot() plt.legend() plt.show() iris.head() iris.shape # rows, columns iris.columns # names of columns """ Explanation: pandas ("panel data"): python data analysis library <a class="anchor" id="data-analysis-with-pandas"></a> The Python package that is most commonly used to work with spreadsheet-like data is called pandas, the name is derived from "panel data", an econometrics term for multidimensional structured data sets. Data are easily loaded into pandas from .csv or other spreadsheet formats. The format pandas uses to store this data is called a <b>data frame</b>. To have a quick peak at the data, I can type df.head(&lt;number_of_rows&gt;) (where "df" is the name of the dataframe). Notice that I do not have to use the pd.-syntax for this. 
iris is now a pandas data frame, and all pandas data frames have a number of built-in functions (called <b>methods</b>) that can be appended directly to the data frame instead of by calling pandas separately. End of explanation """ iris['sepal_length'] """ Explanation: To select a column we can index the data frame with the column name. End of explanation """ iris['sepal_length_x2'] = iris['sepal_length'] * 2 iris['sepal_length_x2'] """ Explanation: The output is here rendered slightly differently from before, because when we are looking only at one column, it is no longer a data frame, but a <b>series</b>. The differences are not important for this lecture, so this is all you need to know about that for now. We could now create a new column if we wanted: End of explanation """ iris = iris.drop('sepal_length_x2', axis=1) # axis 0 = index, axis 1 = columns """ Explanation: And delete that column again: End of explanation """ iris['sepal_length'].mean() iris['sepal_length'].median() """ Explanation: There are some built-in methods that make it convenient to calculate common operation on data frame columns. End of explanation """ iris.mean() """ Explanation: It is also possible to use these methods on all columns at the same time without having to type the same thing over and over again. End of explanation """ iris.describe() """ Explanation: Similarly, you can get a statistical summary of the data frame: End of explanation """ iris['species'].unique() """ Explanation: Subsetting data <a class="anchor" id="subsetting-data"></a> A common task is to subset the data into only those observations that match a criteria. For example, we might be interest in only studying one specific species. First let's find out how many different species there are in our data set: End of explanation """ iris['species'] == 'setosa' iris[iris['species'] == 'setosa'] """ Explanation: Let's arbitrarily choose setosa as the one to study! To select only observations from this species in the original data frame, we index the data frame with a comparison: End of explanation """ iris[iris['species'] == 'setosa'].mean(axis=0) """ Explanation: Now we can easily perform computation on this subset of the data: End of explanation """ iris.groupby('species').mean() """ Explanation: We could also compare all groups within the data against each other, by using the <b>split-apply-combine</b> workflow. This splits data into groups, applies an operation on each group, and then combines the results into a table for display. In pandas, we split into groups with the group_by command and then we apply an operation to the grouped data frame, e.g. .mean(). End of explanation """ iris.groupby('species').size() """ Explanation: We can also easily count the number of observations in each group: End of explanation """ # Prevent plots from popping up in a new window %matplotlib inline species_comparison = iris.groupby('species').mean() # Assign to a variable species_comparison.plot(kind='bar') """ Explanation: Data visualization <a class="anchor" id="data-visualization"></a> We can see that there are clear differences between species, but they might be even clearer if we display them graphically in a chart. Plotting with pandas <a class="anchor" id="plotting-with-pandas"></a> Pandas interfaces with one of Python's most powerful data visualization libraries, matplotlib, to enable simple visualizations at minimal effort. 
End of explanation """ species_comparison.T.plot(kind='bar') """ Explanation: Depending on what you are interesting in showing, it could be useful to have the species as the different colors and the columns along the x-axis. We can easily achieve this by transposing (.T) our data frame. End of explanation """ import seaborn as sns sns.swarmplot('species', 'sepal_length', data = iris) """ Explanation: Plotting with seaborn <a class="anchor" id="plotting-with-seaborn"></a> Another plotting library is seaborn, which also builds upon matplotlib, and extends it by adding new styles, additional plot types and some commonly performed statistical measures. End of explanation """ sns.set(style='ticks', context='talk', rc={'figure.figsize':(8, 5),'axes.spines.right':False, 'axes.spines.top':False}) # This applies to all subseque # styles: darkgrid, whitegrid, dark, white, and ticks # contexts: paper, notebook, talk, poster #sns.axes_style() sns.swarmplot('species', 'sepal_length', data=iris) """ Explanation: The labels on this plot look a bit small, so let's change the style we are using for plotting. End of explanation """ sns.barplot('species', 'sepal_length', data=iris) """ Explanation: We can use the same syntax to create many of the common plots in seaborn. End of explanation """ sns.violinplot('species', 'sepal_length', data = iris) """ Explanation: Bar charts are a common, but not very useful way of presenting data aggregations (e.g. the mean). A better way is to use the points as we did above, or a plot that capture the distribution of the data, such as a boxplot, or a violin plot: End of explanation """ sns.violinplot('species', 'sepal_length', data=iris, inner=None) sns.swarmplot('species', 'sepal_length', data=iris, color='black', size=4) """ Explanation: We can also combine two plots, by simply adding the two line after each other. There is also a more advanced figure interface available in matplotlib to explicitly indicate which figure and axes you want the plot to appear in, but this is outside the scope of this tutorial (more info here and here). End of explanation """ sns.lmplot('sepal_width', 'sepal_length', data=iris, size=6) """ Explanation: Instead of plotting one categorical variable vs a numerical variable, we can also plot two numerical values against each other to explore potential correlations between these two variables: End of explanation """ sns.lmplot('sepal_width', 'sepal_length', data=iris, fit_reg=False, size=6) """ Explanation: There is a regression line plotted by default to indicate the trend in the data. Let's turn that off for now and look at only the data points. End of explanation """ sns.lmplot('sepal_width', 'sepal_length', data=iris, hue='species', fit_reg=False, size=6) """ Explanation: There appears to be some structure in this data. At least two clusters of points seem to be present. Let's color according to species and see if that explains what we see. End of explanation """ sns.lmplot('sepal_width', 'sepal_length', data=iris, hue='species', fit_reg=True, size=6) """ Explanation: Now we can add back the regression line, but this time one for each group. 
End of explanation """ sns.pairplot(iris, hue="species", size=3.5) """ Explanation: Instead of creating a plot for each variable against each other, we can easily create a grid of subplots for all variables with a single command: End of explanation """ iris_long = pd.melt(iris, id_vars = 'species') iris_long """ Explanation: More complex visualizations <a class="anchor" id="more-complex-visualizations"></a> Many visualizations are easier to create if we first reshape our data frame into the tidy format, which is what seaborn prefers. This is also referred to as changing the data frame format from wide (many columns) to long (many rows), since it moves information from columns to rows: We can use pandas built-in melt-function to "melt" the wide data frame into the long format. The new columns will be given the names variable and value by default (see the help of melt if you would like to change these names). End of explanation """ sns.set(context='poster', style='white', rc={'figure.figsize':(10, 6), 'axes.spines.right':False, 'axes.spines.top':False}) sns.swarmplot(x='variable', y='value', hue = 'species', data=iris_long, dodge=True, palette='Set2', size=4) sns.set(context='poster', style='darkgrid', rc={'figure.figsize':(12, 6)}) # stripplot: scatterplot where one variable is categorical sns.boxplot(y='variable', x='value', hue='species', data=iris_long, color='c', ) sns.stripplot(y='variable', x='value', hue='species', data=iris_long, size=2.5, palette=['k']*3, jitter=True, dodge=True) plt.xlim([0, 10]) """ Explanation: We do not need to call groupby or mean on the long iris data frame when plotting with seaborn. Instead we control these options from seaborn with the plot type we chose (barplot = mean automatically) and the hue-parameter, which is analogous to groupby. End of explanation """
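As a small aside on the reshaping step: the variable/value column names that melt creates by default can be overridden, which often makes the downstream plotting code easier to read. A minimal sketch:

# Same reshape as above, but naming the new columns explicitly
iris_long = pd.melt(iris, id_vars='species',
                    var_name='measurement', value_name='cm')
iris_long.head()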
melissawm/oceanobiopython
exemplos/exemplo_6/.ipynb_checkpoints/Diagrama TS-checkpoint.ipynb
gpl-3.0
import gsw
"""
Explanation: TS Diagram
Let's put together a TS (temperature-salinity) diagram with the help of the gsw package [https://pypi.python.org/pypi/gsw/3.0.3], which is a Python alternative to the MATLAB gsw toolbox:
End of explanation
"""
import numpy as np
import matplotlib.pyplot as plt

sal = np.linspace(0, 42, 100)
temp = np.linspace(-2, 40, 100)

s, t = np.meshgrid(sal, temp)

# Below we use the result from the gsw library directly:
# Thermodynamic Equation Of Seawater - 2010 (TEOS-10)
sigma = gsw.sigma0(s, t)

# Desired number of contour lines
cnt = np.arange(-7, 35, 5)

fig, ax = plt.subplots(figsize=(5, 5))
ax.plot(sal, temp, 'ro')
# The command below draws contour lines from the data: contour(X, Y, Z)
cs = ax.contour(s, t, sigma, colors='blue', levels=cnt)
# Here we add labels to the contour lines
ax.clabel(cs, fontsize=9, inline=1, fmt='%2i')

ax.set_xlabel('Salinity [g kg$^{-1}$]')
ax.set_ylabel('Temperature [$^{\circ}$C]')
#plt.plot(s,t,'ro')
"""
Explanation: If you could not import the library above, you need to install the gsw module.
Next, we import the numpy library, which lets us use some mathematical functions in Python:
End of explanation
"""
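One caveat worth noting: gsw.sigma0 is defined in TEOS-10 terms of Absolute Salinity and Conservative Temperature, while the grid above uses practical salinity and in-situ temperature directly, which is a common approximation for a background density grid. A sketch of the conversion for real CTD data (the pressure, longitude and latitude values below are placeholders, and the function names should be checked against the installed gsw version):

p, lon, lat = 10.0, -38.5, -13.0          # dbar, degrees E, degrees N (placeholders)
SA = gsw.SA_from_SP(s, p, lon, lat)       # practical salinity -> Absolute Salinity
CT = gsw.CT_from_t(SA, t, p)              # in-situ temperature -> Conservative Temperature
sigma = gsw.sigma0(SA, CT)                # potential density anomaly [kg m^-3]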
hainm/dask
notebooks/parallelize_image_filtering_workload.ipynb
bsd-3-clause
%pylab inline from scipy.ndimage import uniform_filter import dask.array as da def mean(img): "ndimage.uniform_filter with `size=51`" return uniform_filter(img, size=51) """ Explanation: Parallelize image filters with dask This notebook will show how to parallize CPU-intensive workload using dask array. A simple uniform filter (equivalent to a mean filter) from scipy.ndimage is used for illustration purposes. End of explanation """ !if [ ! -e stitched--U00--V00--C00--Z00.png ]; then wget -q https://github.com/arve0/master/raw/master/stitched--U00--V00--C00--Z00.png; fi img = imread('stitched--U00--V00--C00--Z00.png') img = (img*255).astype(np.uint8) # image read as float32, image is 8 bit grayscale imshow(img[::16, ::16]) mp = str(img.shape[0] * img.shape[1] * 1e-6 // 1) '%s Mega pixels, shape %s, dtype %s' % (mp, img.shape, img.dtype) """ Explanation: Get the image End of explanation """ # filter directly %time mean_nd = mean(img) imshow(mean_nd[::16, ::16]); """ Explanation: Initial speed Lets try the filter directly on the image. End of explanation """ img_da = da.from_array(img, chunks=img.shape) """ Explanation: With dask First, we'll create the dask array with one chunk only (chunks=img.shape). End of explanation """ %time mean_da = img_da.map_overlap(mean, depth=0).compute() imshow(mean_da[::16, ::16]); """ Explanation: depth defines the overlap. We have one chunk only, so overlap is not necessary. compute must be called to start the computation. End of explanation """ from multiprocessing import cpu_count cpu_count() """ Explanation: As we can see, the performance is the same as applying the filter directly. Now, lets chop up the image in chunks so that we can leverage all the cores in our computer. End of explanation """ img.shape, mean_da.shape, mean_nd.shape """ Explanation: We have four cores, so lets split the array in four chunks. End of explanation """ chunk_size = [x//2 for x in img.shape] img_da = da.rechunk(img_da, chunks=chunk_size) """ Explanation: Pixels in both axes are even, so we can split the array in equally sized chunks. If we had odd shapes, chunks would not be the same size (given four cpu cores). E.g. 101x101 image => 50x50 and 51x51 chunks. End of explanation """ %time mean_da = img_da.map_overlap(mean, depth=0).compute() imshow(mean_da[::16, ::16]); """ Explanation: Now, lets see if the filtering is faster. End of explanation """ size = 50 mask = np.index_exp[chunk_size[0]-size:chunk_size[0]+size, chunk_size[1]-size:chunk_size[1]+size] figure(figsize=(12,4)) subplot(131) imshow(mean_nd[mask]) # filtered directly subplot(132) imshow(mean_da[mask]) # filtered in chunks with dask subplot(133) imshow(mean_nd[mask] - mean_da[mask]); # difference """ Explanation: It is :-) If one opens the process manager, one will see that the python process is eating more then 100% CPU. As we are looking at neighbor pixels to compute the mean intensity for the center pixel, you might wonder what happens in the seams between chunks? Lets examine that. End of explanation """ %time mean_da = img_da.map_overlap(mean, depth=25).compute() figure(figsize=(12,4)) subplot(131) imshow(mean_nd[mask]) # filtered directly subplot(132) imshow(mean_da[mask]) # filtered in chunks with dask subplot(133) imshow(mean_nd[mask] - mean_da[mask]); # difference """ Explanation: To overcome this edge effect in the seams, we need to define a higher depth so that dask does the computation with an overlap. We need an overlap of 25 pixels (half the size of the neighborhood in mean). 
End of explanation
"""
img_da = da.rechunk(img_da, 1000)
%time mean_da = img_da.map_overlap(mean, depth=25).compute()
imshow(mean_da[::16, ::16]);
"""
Explanation: The edge effect is gone, nice! The dots in the difference are due to uniform_filter's limited precision. From the manual:
The multi-dimensional filter is implemented as a sequence of one-dimensional uniform filters. The intermediate arrays are stored in the same data type as the output. Therefore, for output types with a limited precision, the results may be imprecise because intermediate results may be stored with insufficient precision.
Let's see if we can improve the performance. Since we do not get a 4x speedup, the computation may not be purely CPU-bound. A chunk size of 1000 is a good place to start.
End of explanation
"""
'%0.1fx' % (2.7/1.24)
"""
Explanation: As you see, adjusting the chunk size did not affect the performance significantly, though it's a good idea to identify your bottleneck and adjust the chunk size accordingly.
That's all! By chopping up the computation we utilized all CPU cores and got a speedup of, at best:
End of explanation
"""
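If you want to probe the chunk-size trade-off on your own machine, a rough sketch like the following (assuming img_da and mean from above are still defined; the candidate sizes are arbitrary) times a few settings:

import time

for chunks in (500, 1000, 2000):
    trial = da.rechunk(img_da, chunks)
    start = time.perf_counter()
    trial.map_overlap(mean, depth=25).compute()
    print('chunks=%d: %.2f s' % (chunks, time.perf_counter() - start))

The best value depends on the number of cores, memory bandwidth and per-chunk overhead, so measure rather than guess.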
GoogleCloudPlatform/training-data-analyst
blogs/bqml/online_prediction.ipynb
apache-2.0
!pip install google-cloud # Reset Session after installing PROJECT = 'cloud-training-demos' # change as needed """ Explanation: Online prediction with BigQuery ML ML.Predict in BigQuery ML is primarily meant for batch predictions. What if you want to build a web application to provide online predictions? Here, I show the basic Python code to do online prediction. You can wrap this code in AppEngine or other web framework/toolkit to provide scalable, fast, online predictions. End of explanation """ %bash bq mk -d flights """ Explanation: Create a model Let's start by creating a simple prediction model to predict arrival delays of aircraft. I'll use this to illustrate the process. First, if necessary, create the BigQuery dataset that will store the output of the model. End of explanation """ %bq query CREATE OR REPLACE MODEL flights.arrdelay OPTIONS (model_type='linear_reg', input_label_cols=['arr_delay']) AS SELECT arr_delay, carrier, origin, dest, dep_delay, taxi_out, distance FROM `cloud-training-demos.flights.tzcorr` WHERE arr_delay IS NOT NULL """ Explanation: Then, do a "CREATE MODEL". This will take about <b>5 minutes</b>. End of explanation """ %bq query SELECT * FROM ml.PREDICT(MODEL flights.arrdelay, ( SELECT 'AA' as carrier, 'DFW' as origin, 'LAX' as dest, dep_delay, 18 as taxi_out, 1235 as distance FROM UNNEST(GENERATE_ARRAY(-3, 10)) as dep_delay )) """ Explanation: Batch prediction with model Once you have a trained model, batch prediction can be done within BigQuery itself. For example, to find the predicted arrival delays for a flight from DFW to LAX for a range of departure delays End of explanation """ %bq query SELECT processed_input AS input, model.weight AS input_weight FROM ml.WEIGHTS(MODEL flights.arrdelay) AS model %bq query SELECT processed_input AS input, model.weight AS input_weight, category.category AS category_name, category.weight AS category_weight FROM ml.WEIGHTS(MODEL flights.arrdelay) AS model, UNNEST(category_weights) AS category """ Explanation: Online prediction in Python The batch prediction technique above can not be used for online prediction though. Typical BigQuery queries have a latency of 1-2 seconds and that is too high for a web application. For online prediction, it is better to grab the weights and do the computation yourself. End of explanation """ def query_to_dataframe(query): import pandas as pd import pkgutil privatekey = None # pkgutil.get_data(KEYDIR, 'privatekey.json') return pd.read_gbq(query, project_id=PROJECT, dialect='standard', private_key=privatekey) """ Explanation: Here's how to do that in Python. p.s. I'm assuming that you are in an environment where you are already authenticated with Google Cloud. 
If not, see this article on how to access BigQuery using private keys End of explanation """ numeric_query = """ SELECT processed_input AS input, model.weight AS input_weight FROM ml.WEIGHTS(MODEL flights.arrdelay) AS model """ numeric_weights = query_to_dataframe(numeric_query).dropna() numeric_weights scaling_query = """ SELECT input, min, max, mean, stddev FROM ml.FEATURE_INFO(MODEL flights.arrdelay) AS model """ scaling_df = query_to_dataframe(scaling_query).dropna() scaling_df categorical_query = """ SELECT processed_input AS input, model.weight AS input_weight, category.category AS category_name, category.weight AS category_weight FROM ml.WEIGHTS(MODEL flights.arrdelay) AS model, UNNEST(category_weights) AS category """ categorical_weights = query_to_dataframe(categorical_query) categorical_weights.head() """ Explanation: You need to pull 3 pieces of information: * The weights for each of your numeric columns * The scaling for each of your numeric columns * The vocabulary and weights for each of your categorical columns I pull them using three separate BigQuery queries below End of explanation """ def compute_prediction(rowdict, numeric_weights, scaling_df, categorical_weights): input_values = rowdict # numeric inputs pred = 0 for column_name in numeric_weights['input'].unique(): wt = numeric_weights[ numeric_weights['input'] == column_name ]['input_weight'].values[0] if column_name != '__INTERCEPT__': #minv = scaling_df[ scaling_df['input'] == column_name ]['min'].values[0] #maxv = scaling_df[ scaling_df['input'] == column_name ]['max'].values[0] #scaled_value = (input_values[column_name] - minv)/(maxv - minv) meanv = scaling_df[ scaling_df['input'] == column_name ]['mean'].values[0] stddev = scaling_df[ scaling_df['input'] == column_name ]['stddev'].values[0] scaled_value = (input_values[column_name] - meanv)/stddev else: scaled_value = 1.0 contrib = wt * scaled_value print('col={} wt={} scaled_value={} contrib={}'.format(column_name, wt, scaled_value, contrib)) pred = pred + contrib # categorical inputs for column_name in categorical_weights['input'].unique(): category_weights = categorical_weights[ categorical_weights['input'] == column_name ] wt = category_weights[ category_weights['category_name'] == input_values[column_name] ]['category_weight'].values[0] print('col={} wt={} value={} contrib={}'.format(column_name, wt, input_values[column_name], wt)) pred = pred + wt return pred """ Explanation: With the three pieces of information in-place, you can simply do the math for linear regression: End of explanation """ rowdict = { 'carrier' : 'AA', 'origin': 'DFW', 'dest': 'LAX', 'dep_delay': -3, 'taxi_out': 18, 'distance': 1235 } print(compute_prediction(rowdict, numeric_weights, scaling_df, categorical_weights)) """ Explanation: Here is an example of the prediction code in action: End of explanation """
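To actually serve this online, one option (a minimal sketch only; Flask, the route name and the JSON field names are assumptions, not part of the original notebook) is to pull the three weight tables once at startup so that each request is just local arithmetic:

from flask import Flask, request, jsonify

app = Flask(__name__)

# Query BigQuery once at startup; every prediction afterwards is pure Python math.
numeric_weights = query_to_dataframe(numeric_query).dropna()
scaling_df = query_to_dataframe(scaling_query).dropna()
categorical_weights = query_to_dataframe(categorical_query)

@app.route('/predict', methods=['POST'])
def predict():
    rowdict = request.get_json()  # e.g. {"carrier": "AA", "origin": "DFW", "dest": "LAX", ...}
    pred = compute_prediction(rowdict, numeric_weights, scaling_df, categorical_weights)
    return jsonify({'predicted_arr_delay': float(pred)})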
quantopian/research_public
notebooks/lectures/Plotting_Data/notebook.ipynb
apache-2.0
# Import our libraries

# This is for numerical processing
import numpy as np
# This is the library most commonly used for plotting in Python.
# Notice how we import it 'as' plt; this enables us to type plt
# rather than the full string every time.
import matplotlib.pyplot as plt
"""
Explanation: Graphical Representations of Data
By Evgenia "Jenny" Nitishinskaya, Maxwell Margenot, and Delaney Granizo-Mackenzie.
Part of the Quantopian Lecture Series:
www.quantopian.com/lectures
github.com/quantopian/research_public
Representing data graphically can be incredibly useful for learning how the data behaves and seeing potential structure or flaws. Care should be taken, as humans are incredibly good at seeing only evidence that confirms our beliefs, and visual data lends itself well to that. Plots are good to use when formulating a hypothesis, but should not be used to test a hypothesis.
We will go over some common plots here.
End of explanation
"""
start = '2014-01-01'
end = '2015-01-01'
data = get_pricing(['AAPL', 'MSFT'], fields='price', start_date=start, end_date=end)
data.head()
"""
Explanation: Getting Some Data
If we're going to plot data we need some data to plot. We'll get the pricing data of Apple (AAPL) and Microsoft (MSFT) to use in our examples.
Data Structure
Knowing the structure of your data is very important. Normally you'll have to do a ton of work molding your data into the form you need for testing. Quantopian has done a lot of cleaning on the data, but you still need to put it into the right shapes and formats for your purposes.
In this case the data will be returned as a pandas dataframe object. The rows are timestamps, and the columns are the two assets, AAPL and MSFT.
End of explanation
"""
data.columns = [e.symbol for e in data.columns]
data.head()
"""
Explanation: Indexing into the data with data['AAPL'] will yield an error because the columns are equity objects and not simple strings. Let's change that using this little piece of Python code. Don't worry about understanding it right now, unless you do, in which case congratulations.
End of explanation
"""
data['MSFT'].head()
"""
Explanation: Much nicer; now we can index. Indexing into the 2D dataframe gives us a 1D series object. The index of the series is timestamps, and the value at each index is a price. It is similar to an array, except that instead of integer indices it has times.
End of explanation
"""
# Plot a histogram using 20 bins
plt.hist(data['MSFT'], bins=20)
plt.xlabel('Price')
plt.ylabel('Number of Days Observed')
plt.title('Frequency Distribution of MSFT Prices, 2014');
"""
Explanation: Histogram
A histogram is a visualization of how frequently different values occur in the data. By displaying a frequency distribution using bars, it lets us quickly see where most of the observations are clustered. The height of each bar represents the number of observations that lie in each interval. You can think of a histogram as an empirical and discrete Probability Density Function (PDF).
End of explanation
"""
# Remove the first element because percent change from nothing to something is NaN
R = data['MSFT'].pct_change()[1:]

# Plot a histogram using 20 bins
plt.hist(R, bins=20)
plt.xlabel('Return')
plt.ylabel('Number of Days Observed')
plt.title('Frequency Distribution of MSFT Returns, 2014');
"""
Explanation: Returns Histogram
In finance we will rarely look at the distribution of prices. The reason for this is that prices are non-stationary and move around a lot. For more info on non-stationarity please see this lecture.
Instead we will use daily returns. Let's try that now. End of explanation """ # Remove the first element because percent change from nothing to something is NaN R = data['MSFT'].pct_change()[1:] # Plot a histogram using 20 bins plt.hist(R, bins=20, cumulative=True) plt.xlabel('Return') plt.ylabel('Number of Days Observed') plt.title('Cumulative Distribution of MSFT Returns, 2014'); """ Explanation: The graph above shows, for example, that the daily returns of MSFT were above 0.03 on fewer than 5 days in 2014. Note that we are completely discarding the dates corresponding to these returns. IMPORTANT: Note also that this does not imply that future returns will have the same distribution. Cumulative Histogram (Discrete Estimated CDF) An alternative way to display the data would be using a cumulative distribution function, in which the height of a bar represents the number of observations that lie in that bin or in one of the previous ones. This graph is always nondecreasing since you cannot have a negative number of observations. The choice of graph depends on the information you are interested in. End of explanation """ plt.scatter(data['MSFT'], data['AAPL']) plt.xlabel('MSFT') plt.ylabel('AAPL') plt.title('Daily Prices in 2014'); R_msft = data['MSFT'].pct_change()[1:] R_aapl = data['AAPL'].pct_change()[1:] plt.scatter(R_msft, R_aapl) plt.xlabel('MSFT') plt.ylabel('AAPL') plt.title('Daily Returns in 2014'); """ Explanation: Scatter plot A scatter plot is useful for visualizing the relationship between two data sets. We use two data sets which have some sort of correspondence, such as the date on which the measurement was taken. Each point represents two corresponding values from the two data sets. However, we don't plot the date that the measurements were taken on. End of explanation """ plt.plot(data['MSFT']) plt.plot(data['AAPL']) plt.ylabel('Price') plt.legend(['MSFT', 'AAPL']); # Remove the first element because percent change from nothing to something is NaN R = data['MSFT'].pct_change()[1:] plt.plot(R) plt.ylabel('Return') plt.title('MSFT Returns'); """ Explanation: Line graph A line graph can be used when we want to track the development of the y value as the x value changes. For instance, when we are plotting the price of a stock, showing it as a line graph instead of just plotting the data points makes it easier to follow the price over time. This necessarily involves "connecting the dots" between the data points, which can mask out changes that happened between the time we took measurements. End of explanation """
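If you want a single number to go with the scatter plot of returns above, the correlation coefficient summarizes how strongly the two series move together. A quick sketch, reusing the R_msft and R_aapl series defined earlier (it complements the plots rather than replacing them):

# Correlation between the two daily return series; a value between -1 and 1
np.corrcoef(R_msft, R_aapl)[0, 1]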
NEAT-project/neat
policy/neat_policy_example.ipynb
bsd-3-clause
property1 = NEATProperty(('low_latency', True), precedence=NEATProperty.IMMUTABLE)
property2 = NEATProperty(('remote_ip', '10.1.23.45'), precedence=NEATProperty.IMMUTABLE)
property3 = NEATProperty(('MTU', {"start":1500, "end":9000}), precedence=NEATProperty.OPTIONAL)
property4 = NEATProperty(('TCP', True))  # OPTIONAL is the default property precedence

request = PropertyArray(property1, property2, property3, property4)
print(request)
"""
Explanation: Application Request
We consider an application that would like to open a new TCP connection using NEAT to a destination host d1 with the IP 10.1.23.45. Further, if possible, the MTU of this connection should be greater than 1500 bytes. Finally, the application specifies a low_latency profile.
We define four properties to represent these application requirements and combine them into a NEATRequest:
End of explanation
"""
properties_to_json(request)
"""
Explanation: Policy Manager/NEAT logic API
Application requirements from the NEAT logic are passed to the Policy Manager using JSON. For the example above the JSON string could be:
End of explanation
"""
cib = CIB('cib/example/')
"""
Explanation: Exemplary Setup
Consider a host with three local interfaces en0, en1, ra0. Two of the interfaces, en0 and en1, are wired, while ra0 is a 3G interface. We populate an instance of the Characteristic Information Base (CIB) with some information about the host interfaces and the network.
End of explanation
"""
cib.dump()
"""
Explanation: The currently known network characteristics are stored as entries in the CIB, where each entry contains a set of properties associated with some interface:
End of explanation
"""
profiles = PIB('pib/example/')
pib = PIB('pib/example/')
"""
Explanation: PIB
We create one repository for system profiles, and one for policies:
End of explanation
"""
profile1 = NEATPolicy()
profile1.match.add(NEATProperty(('low_latency', True)))
profile1.properties.add(NEATProperty(('iw_wired', True)),
                        NEATProperty(('interface_latency', (0,40)), precedence=NEATProperty.IMMUTABLE))
profiles.register(profile1)
"""
Explanation: For the current scenario, the low latency profile is defined as follows:
End of explanation
"""
policy1 = NEATPolicy()
policy1.match.add(NEATProperty(('remote_ip', '10.1.23.45')))
policy1.properties.add(NEATProperty(('capacity', (10000, 100000)), precedence=NEATProperty.IMMUTABLE),
                       NEATProperty(('MTU', 9600)))
print(policy1)
"""
Explanation: Next, we define two sample policies and add them to the Policy Information Base (PIB). A "bulk transfer" policy is configured which is triggered by a specific destination IP that is known to be the address of a backup NFS share:
End of explanation
"""
policy2 = NEATPolicy(name='TCP options')
policy2.match.insert(NEATProperty(('MTU', 9600)), NEATProperty(('is_wired', True)))
policy2.properties.insert(NEATProperty(('TCP_window_scale', True)))

pib.register(policy1)
pib.register(policy2)
pib.dump()
"""
Explanation: Another policy is in place to enable TCP window scaling on 10G links (if possible):
End of explanation
"""
print(request.properties)
profiles._lookup(request.properties, remove_matched=True, apply=True)
print(request.properties)
"""
Explanation: Lookup Result
Profile Lookup
First, we apply the low_latency profile to the request properties. The low_latency property in the request is replaced by the corresponding profile properties:
End of explanation
"""
cib.lookup(request)
request.dump()
"""
Explanation: CIB Lookup
Next, a lookup in the CIB is performed.
Our NEAT request yields three candidates:
End of explanation
"""
pib.lookup_all(request.candidates)
request.dump()
"""
Explanation: Each candidate comprises the union of the properties of a single CIB entry and the application request. Whenever the two sets intersect, the values of the corresponding properties are compared. If two properties match, the associated candidate property score is increased (e.g., [MTU|1500]+1.0 indicates a new score of 1.0). The score is decreased if there is a mismatch in the property values.
PIB Lookup
In the next step the policies are applied. The "Bulk transfer" policy is applied first, as it possesses the smallest number of match entries.
End of explanation
"""
request.candidates[0].dump()
"""
Explanation: Candidate 1 becomes:
End of explanation
"""
request.candidates[1].dump()
"""
Explanation: Next, we examine Candidate 2:
End of explanation
"""
print(request.candidates[0].score)
print(request.candidates[1].score)
"""
Explanation: Note that the score of the MTU property was reduced, as it did not match the requested property of the "Bulk transfer" policy. The "TCP options" policy is not applied, as the candidate does not match the policy's MTU property. The third candidate was invalidated because the "Bulk transfer" policy contains an immutable property requiring a capacity of 10G, which candidate 3 cannot fulfil.
Finally, we can obtain the total score of the properties associated with each candidate:
End of explanation
"""
request.candidates[0].properties.json()
request.candidates[1].properties.json()
"""
Explanation: The score indicates that candidate one (interface en0) is most suitable for the given application request.
NEAT Logic
The two candidates can now be passed on to the NEAT logic as JSON strings:
End of explanation
"""
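A minimal sketch of what a caller might do with these candidates, using only the attributes shown above (this is one possible way to pick a winner, not part of the NEAT policy API):

# Rank the surviving candidates by total property score and hand the best one on as JSON
best = max(request.candidates, key=lambda c: c.score)
best.properties.json()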
luofan18/deep-learning
image-classification/dlnd_image_classification.ipynb
mit
""" DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ from urllib.request import urlretrieve from os.path import isfile, isdir from tqdm import tqdm import problem_unittests as tests import tarfile cifar10_dataset_folder_path = 'cifar-10-batches-py' # Use Floyd's cifar-10 dataset if present floyd_cifar10_location = '/input/cifar-10/python.tar.gz' if isfile(floyd_cifar10_location): tar_gz_path = floyd_cifar10_location else: tar_gz_path = 'cifar-10-python.tar.gz' class DLProgress(tqdm): last_block = 0 def hook(self, block_num=1, block_size=1, total_size=None): self.total = total_size self.update((block_num - self.last_block) * block_size) self.last_block = block_num if not isfile(tar_gz_path): with DLProgress(unit='B', unit_scale=True, miniters=1, desc='CIFAR-10 Dataset') as pbar: urlretrieve( 'https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz', tar_gz_path, pbar.hook) if not isdir(cifar10_dataset_folder_path): with tarfile.open(tar_gz_path) as tar: tar.extractall() tar.close() tests.test_folder_path(cifar10_dataset_folder_path) """ Explanation: Image Classification In this project, you'll classify images from the CIFAR-10 dataset. The dataset consists of airplanes, dogs, cats, and other objects. You'll preprocess the images, then train a convolutional neural network on all the samples. The images need to be normalized and the labels need to be one-hot encoded. You'll get to apply what you learned and build a convolutional, max pooling, dropout, and fully connected layers. At the end, you'll get to see your neural network's predictions on the sample images. Get the Data Run the following cell to download the CIFAR-10 dataset for python. End of explanation """ %matplotlib inline %config InlineBackend.figure_format = 'retina' import helper import numpy as np # Explore the dataset batch_id = 1 sample_id = 5 helper.display_stats(cifar10_dataset_folder_path, batch_id, sample_id) """ Explanation: Explore the Data The dataset is broken into batches to prevent your machine from running out of memory. The CIFAR-10 dataset consists of 5 batches, named data_batch_1, data_batch_2, etc.. Each batch contains the labels and images that are one of the following: * airplane * automobile * bird * cat * deer * dog * frog * horse * ship * truck Understanding a dataset is part of making predictions on the data. Play around with the code cell below by changing the batch_id and sample_id. The batch_id is the id for a batch (1-5). The sample_id is the id for a image and label pair in the batch. Ask yourself "What are all possible labels?", "What is the range of values for the image data?", "Are the labels in order or random?". Answers to questions like these will help you preprocess the data and end up with better predictions. End of explanation """ def normalize(x): """ Normalize a list of sample image data in the range of 0 to 1 : x: List of image data. The image shape is (32, 32, 3) : return: Numpy array of normalize data """ # TODO: Implement Function arrays = [] for x_ in x: array = np.array(x_) arrays.append(array) return np.stack(arrays, axis=0) / 256. """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_normalize(normalize) """ Explanation: Implement Preprocess Functions Normalize In the cell below, implement the normalize function to take in image data, x, and return it as a normalized Numpy array. The values should be in the range of 0 to 1, inclusive. The return object should be the same shape as x. 
End of explanation
"""
def one_hot_encode(x):
    """
    One hot encode a list of sample labels. Return a one-hot encoded vector for each label.
    : x: List of sample labels
    : return: Numpy array of one-hot encoded labels
    """
    # TODO: Implement Function
    class_num = 10  # CIFAR-10 always has ten classes
    num = len(x)
    out = np.zeros((num, class_num))
    for i in range(num):
        # label i maps directly to column i (labels run from 0 to 9)
        out[i, x[i]] = 1
    return out


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_one_hot_encode(one_hot_encode)
"""
Explanation: One-hot encode
Just like the previous code cell, you'll be implementing a function for preprocessing. This time, you'll implement the one_hot_encode function. The input, x, is a list of labels. Implement the function to return the list of labels as a one-hot encoded Numpy array. The possible values for labels are 0 to 9. The one-hot encoding function should return the same encoding for each value between each call to one_hot_encode. Make sure to save the map of encodings outside the function.
Hint: Don't reinvent the wheel.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# Preprocess Training, Validation, and Testing Data
helper.preprocess_and_save_data(cifar10_dataset_folder_path, normalize, one_hot_encode)
"""
Explanation: Randomize Data
As you saw from exploring the data above, the order of the samples is randomized. It doesn't hurt to randomize it again, but you don't need to for this dataset.
Preprocess all the data and save it
Running the code cell below will preprocess all the CIFAR-10 data and save it to file. The code below also uses 10% of the training data for validation.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import pickle
import problem_unittests as tests
import helper

# Load the Preprocessed Validation data
valid_features, valid_labels = pickle.load(open('preprocess_validation.p', mode='rb'))
"""
Explanation: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
End of explanation
"""
import tensorflow as tf

def neural_net_image_input(image_shape):
    """
    Return a Tensor for a batch of image input
    : image_shape: Shape of the images
    : return: Tensor for image input.
    """
    # TODO: Implement Function
    # Prepend None so the batch size stays dynamic
    shape = (None, ) + image_shape
    return tf.placeholder(tf.float32, shape=shape, name='x')


def neural_net_label_input(n_classes):
    """
    Return a Tensor for a batch of label input
    : n_classes: Number of classes
    : return: Tensor for label input.
    """
    # TODO: Implement Function
    shape = (None, ) + (n_classes, )
    return tf.placeholder(tf.float32, shape=shape, name='y')


def neural_net_keep_prob_input():
    """
    Return a Tensor for keep probability
    : return: Tensor for keep probability.
    """
    # TODO: Implement Function
    return tf.placeholder(tf.float32, name='keep_prob')


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tf.reset_default_graph()
tests.test_nn_image_inputs(neural_net_image_input)
tests.test_nn_label_inputs(neural_net_label_input)
tests.test_nn_keep_prob_inputs(neural_net_keep_prob_input)
"""
Explanation: Build the network
For the neural network, you'll build each layer into a function. Most of the code you've seen has been outside of functions. To test your code more thoroughly, we require that you put each layer in a function.
This allows us to give you better feedback and test for simple mistakes using our unittests before you submit your project. Note: If you're finding it hard to dedicate enough time for this course each week, we've provided a small shortcut to this part of the project. In the next couple of problems, you'll have the option to use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages to build each layer, except the layers you build in the "Convolutional and Max Pooling Layer" section. TF Layers is similar to Keras's and TFLearn's abstraction to layers, so it's easy to pickup. However, if you would like to get the most out of this course, try to solve all the problems without using anything from the TF Layers packages. You can still use classes from other packages that happen to have the same name as ones you find in TF Layers! For example, instead of using the TF Layers version of the conv2d class, tf.layers.conv2d, you would want to use the TF Neural Network version of conv2d, tf.nn.conv2d. Let's begin! Input The neural network needs to read the image data, one-hot encoded labels, and dropout keep probability. Implement the following functions * Implement neural_net_image_input * Return a TF Placeholder * Set the shape using image_shape with batch size set to None. * Name the TensorFlow placeholder "x" using the TensorFlow name parameter in the TF Placeholder. * Implement neural_net_label_input * Return a TF Placeholder * Set the shape using n_classes with batch size set to None. * Name the TensorFlow placeholder "y" using the TensorFlow name parameter in the TF Placeholder. * Implement neural_net_keep_prob_input * Return a TF Placeholder for dropout keep probability. * Name the TensorFlow placeholder "keep_prob" using the TensorFlow name parameter in the TF Placeholder. These names will be used at the end of the project to load your saved model. Note: None for shapes in TensorFlow allow for a dynamic size. End of explanation """ def conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides, maxpool=True): """ Apply convolution then max pooling to x_tensor :param x_tensor: TensorFlow Tensor :param conv_num_outputs: Number of outputs for the convolutional layer :param conv_ksize: kernal size 2-D Tuple for the convolutional layer :param conv_strides: Stride 2-D Tuple for convolution :param pool_ksize: kernal size 2-D Tuple for pool :param pool_strides: Stride 2-D Tuple for pool : return: A tensor that represents convolution and max pooling of x_tensor """ # TODO: Implement Function input_channel = x_tensor.get_shape().as_list()[-1] weights_size = conv_ksize + (input_channel,) + (conv_num_outputs,) conv_strides = (1,) + conv_strides + (1,) pool_ksize = (1,) + pool_ksize + (1,) pool_strides = (1,) + pool_strides + (1,) weights = tf.Variable(tf.random_normal(weights_size, stddev=0.01)) biases = tf.Variable(tf.zeros(conv_num_outputs)) out = tf.nn.conv2d(x_tensor, weights, conv_strides, padding='SAME') out = out + biases out = tf.nn.relu(out) if maxpool: out = tf.nn.max_pool(out, pool_ksize, pool_strides, padding='SAME') return out """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_con_pool(conv2d_maxpool) """ Explanation: Convolution and Max Pooling Layer Convolution layers have a lot of success with images. For this code cell, you should implement the function conv2d_maxpool to apply convolution then max pooling: * Create the weight and bias using conv_ksize, conv_num_outputs and the shape of x_tensor. 
* Apply a convolution to x_tensor using weight and conv_strides.
* We recommend you use same padding, but you're welcome to use any padding.
* Add bias
* Add a nonlinear activation to the convolution.
* Apply Max Pooling using pool_ksize and pool_strides.
* We recommend you use same padding, but you're welcome to use any padding.
Note: You can't use TensorFlow Layers or TensorFlow Layers (contrib) for this layer, but you can still use TensorFlow's Neural Network package. You may still use the shortcut option for all the other layers.
End of explanation
"""
def flatten(x_tensor):
    """
    Flatten x_tensor to (Batch Size, Flattened Image Size)
    : x_tensor: A tensor of size (Batch Size, ...), where ... are the image dimensions.
    : return: A tensor of size (Batch Size, Flattened Image Size).
    """
    # TODO: Implement Function
    num, height, width, channel = tuple(x_tensor.get_shape().as_list())
    # -1 keeps the batch dimension dynamic
    new_shape = (-1, height * width * channel)
    return tf.reshape(x_tensor, new_shape)


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_flatten(flatten)
"""
Explanation: Flatten Layer
Implement the flatten function to change the dimension of x_tensor from a 4-D tensor to a 2-D tensor. The output should be the shape (Batch Size, Flattened Image Size). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages.
End of explanation
"""
def fully_conn(x_tensor, num_outputs):
    """
    Apply a fully connected layer to x_tensor using weight and bias
    : x_tensor: A 2-D tensor where the first dimension is batch size.
    : num_outputs: The number of outputs that the new tensor should have.
    : return: A 2-D tensor where the second dimension is num_outputs.
    """
    # TODO: Implement Function
    num, dim = x_tensor.get_shape().as_list()
    weights = tf.Variable(tf.random_normal((dim, num_outputs), stddev=np.sqrt(2 / num_outputs)))
    biases = tf.Variable(tf.zeros(num_outputs))
    return tf.nn.relu(tf.matmul(x_tensor, weights) + biases)


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_fully_conn(fully_conn)
"""
Explanation: Fully-Connected Layer
Implement the fully_conn function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages.
End of explanation
"""
def output(x_tensor, num_outputs):
    """
    Apply an output layer to x_tensor using weight and bias
    : x_tensor: A 2-D tensor where the first dimension is batch size.
    : num_outputs: The number of outputs that the new tensor should have.
    : return: A 2-D tensor where the second dimension is num_outputs.
    """
    # TODO: Implement Function
    num, dim = x_tensor.get_shape().as_list()
    # Pass stddev as a keyword argument; the second positional argument of
    # tf.random_normal is the mean, not the standard deviation.
    weights = tf.Variable(tf.random_normal((dim, num_outputs), stddev=np.sqrt(2 / num_outputs)))
    biases = tf.Variable(tf.zeros(num_outputs))
    return tf.matmul(x_tensor, weights) + biases


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_output(output)
"""
Explanation: Output Layer
Implement the output function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages.
Note: Activation, softmax, or cross entropy should not be applied to this. End of explanation """ def conv_net(x, keep_prob): """ Create a convolutional neural network model : x: Placeholder tensor that holds image data. : keep_prob: Placeholder tensor that hold dropout keep probability. : return: Tensor that represents logits """ # TODO: Apply 1, 2, or 3 Convolution and Max Pool layers # Play around with different number of outputs, kernel size and stride # Function Definition from Above: # conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides) conv_ksize3 = (3, 3) conv_ksize1 = (1, 1) conv_ksize5 = (5, 5) conv_ksize7 = (7, 7) conv_strides1 = (1, 1) conv_strides2 = (2, 2) pool_ksize = (2, 2) pool_strides = (2, 2) channels = [32,128,512,512] # L = 4 out = x # 6 layers # for i in range(int(L / 4)): out = conv2d_maxpool(out, channels[0], conv_ksize7, conv_strides1, pool_ksize, pool_strides, maxpool=True) out = conv2d_maxpool(out, channels[1], conv_ksize5, conv_strides1, pool_ksize, pool_strides, maxpool=True) out = conv2d_maxpool(out, channels[2], conv_ksize3, conv_strides1, pool_ksize, pool_strides, maxpool=True) # out = conv2d_maxpool(out, channels[3], conv_ksize5, conv_strides2, pool_ksize, pool_strides, maxpool=True) # TODO: Apply a Flatten Layer # Function Definition from Above: # flatten(x_tensor) out = flatten(out) # TODO: Apply 1, 2, or 3 Fully Connected Layers # Play around with different number of outputs # Function Definition from Above: # fully_conn(x_tensor, num_outputs) # by remove this fully connected layer can improve performance out = fully_conn(out, 256) # TODO: Apply an Output Layer # Set this to the number of classes # Function Definition from Above: # output(x_tensor, num_outputs) out = tf.nn.dropout(out, keep_prob) out = output(out, 10) # TODO: return output return out """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ ############################## ## Build the Neural Network ## ############################## # Remove previous weights, bias, inputs, etc.. tf.reset_default_graph() # Inputs x = neural_net_image_input((32, 32, 3)) y = neural_net_label_input(10) keep_prob = neural_net_keep_prob_input() # Model logits = conv_net(x, keep_prob) # Name logits Tensor, so that is can be loaded from disk after training logits = tf.identity(logits, name='logits') # Loss and Optimizer cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y)) optimizer = tf.train.AdamOptimizer().minimize(cost) # Accuracy correct_pred = tf.equal(tf.argmax(logits, 1), tf.argmax(y, 1)) accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32), name='accuracy') tests.test_conv_net(conv_net) """ Explanation: Create Convolutional Model Implement the function conv_net to create a convolutional neural network model. The function takes in a batch of images, x, and outputs logits. Use the layers you created above to create this model: Apply 1, 2, or 3 Convolution and Max Pool layers Apply a Flatten Layer Apply 1, 2, or 3 Fully Connected Layers Apply an Output Layer Return the output Apply TensorFlow's Dropout to one or more layers in the model using keep_prob. 
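For the implementation above, it helps to trace the shapes: each of the three 2x2 max-pool steps halves the 32x32 spatial size (32 -> 16 -> 8 -> 4), so the tensor reaching the flatten layer holds 4 x 4 x 512 = 8192 values per image before the 256-unit fully connected layer, the dropout, and the 10-way output.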
End of explanation """ def train_neural_network(session, optimizer, keep_probability, feature_batch, label_batch): """ Optimize the session on a batch of images and labels : session: Current TensorFlow session : optimizer: TensorFlow optimizer function : keep_probability: keep probability : feature_batch: Batch of Numpy image data : label_batch: Batch of Numpy label data """ # TODO: Implement Function feed_dict = {keep_prob: keep_probability, x: feature_batch, y: label_batch} session.run(optimizer, feed_dict=feed_dict) """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_train_nn(train_neural_network) """ Explanation: Train the Neural Network Single Optimization Implement the function train_neural_network to do a single optimization. The optimization should use optimizer to optimize in session with a feed_dict of the following: * x for image input * y for labels * keep_prob for keep probability for dropout This function will be called for each batch, so tf.global_variables_initializer() has already been called. Note: Nothing needs to be returned. This function is only optimizing the neural network. End of explanation """ def print_stats(session, feature_batch, label_batch, cost, accuracy): """ Print information about loss and validation accuracy : session: Current TensorFlow session : feature_batch: Batch of Numpy image data : label_batch: Batch of Numpy label data : cost: TensorFlow cost function : accuracy: TensorFlow accuracy function """ # TODO: Implement Function # here will print loss, train_accuracy, and val_accuracy # I implemented the val_accuracy, please read them all, thanks # print train_accuracy to see overfit loss = session.run(cost, feed_dict={x: feature_batch, y: label_batch, keep_prob: 1.0}) train_accuracy = session.run(accuracy, feed_dict={x: feature_batch, y: label_batch, keep_prob: 1.0}) batch = feature_batch.shape[0] num_valid = valid_features.shape[0] val_accuracy = 0 for i in range(0, num_valid, batch): end_i = i + batch if end_i > num_valid: end_i = num_valid batch_accuracy = session.run(accuracy, feed_dict={ x: valid_features[i:end_i], y: valid_labels[i:end_i], keep_prob: 1.0}) batch_accuracy *= (end_i - i) val_accuracy += batch_accuracy val_accuracy /= num_valid print ('loss is {}, train_accuracy is {}, val_accuracy is {}'.format(loss, train_accuracy, val_accuracy)) """ Explanation: Show Stats Implement the function print_stats to print loss and validation accuracy. Use the global variables valid_features and valid_labels to calculate validation accuracy. Use a keep probability of 1.0 to calculate the loss and validation accuracy. End of explanation """ # TODO: Tune Parameters epochs = 10 batch_size = 128 keep_probability = 0.8 """ Explanation: Hyperparameters Tune the following parameters: * Set epochs to the number of iterations until the network stops learning or start overfitting * Set batch_size to the highest number that your machine has memory for. Most people set them to common sizes of memory: * 64 * 128 * 256 * ... 
* Set keep_probability to the probability of keeping a node using dropout End of explanation """ """ DON'T MODIFY ANYTHING IN THIS CELL """ print('Checking the Training on a Single Batch...') with tf.Session() as sess: # Initializing the variables sess.run(tf.global_variables_initializer()) # Training cycle for epoch in range(epochs): batch_i = 1 for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size): train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels) print('Epoch {:>2}, CIFAR-10 Batch {}: '.format(epoch + 1, batch_i), end='') print_stats(sess, batch_features, batch_labels, cost, accuracy) """ Explanation: Train on a Single CIFAR-10 Batch Instead of training the neural network on all the CIFAR-10 batches of data, let's use a single batch. This should save time while you iterate on the model to get a better accuracy. Once the final validation accuracy is 50% or greater, run the model on all the data in the next section. End of explanation """ """ DON'T MODIFY ANYTHING IN THIS CELL """ save_model_path = './image_classification' print('Training...') with tf.Session() as sess: # Initializing the variables sess.run(tf.global_variables_initializer()) # Training cycle for epoch in range(epochs): # Loop over all batches n_batches = 5 for batch_i in range(1, n_batches + 1): for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size): train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels) print('Epoch {:>2}, CIFAR-10 Batch {}: '.format(epoch + 1, batch_i), end='') print_stats(sess, batch_features, batch_labels, cost, accuracy) # Save Model saver = tf.train.Saver() save_path = saver.save(sess, save_model_path) """ Explanation: Fully Train the Model Now that you got a good accuracy with a single CIFAR-10 batch, try it with all five batches. 
End of explanation """ """ DON'T MODIFY ANYTHING IN THIS CELL """ %matplotlib inline %config InlineBackend.figure_format = 'retina' import tensorflow as tf import pickle import helper import random # Set batch size if not already set try: if batch_size: pass except NameError: batch_size = 64 save_model_path = './image_classification' n_samples = 4 top_n_predictions = 3 def test_model(): """ Test the saved model against the test dataset """ test_features, test_labels = pickle.load(open('preprocess_test.p', mode='rb')) loaded_graph = tf.Graph() with tf.Session(graph=loaded_graph) as sess: # Load model loader = tf.train.import_meta_graph(save_model_path + '.meta') loader.restore(sess, save_model_path) # Get Tensors from loaded model loaded_x = loaded_graph.get_tensor_by_name('x:0') loaded_y = loaded_graph.get_tensor_by_name('y:0') loaded_keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0') loaded_logits = loaded_graph.get_tensor_by_name('logits:0') loaded_acc = loaded_graph.get_tensor_by_name('accuracy:0') # Get accuracy in batches for memory limitations test_batch_acc_total = 0 test_batch_count = 0 for test_feature_batch, test_label_batch in helper.batch_features_labels(test_features, test_labels, batch_size): test_batch_acc_total += sess.run( loaded_acc, feed_dict={loaded_x: test_feature_batch, loaded_y: test_label_batch, loaded_keep_prob: 1.0}) test_batch_count += 1 print('Testing Accuracy: {}\n'.format(test_batch_acc_total/test_batch_count)) # Print Random Samples random_test_features, random_test_labels = tuple(zip(*random.sample(list(zip(test_features, test_labels)), n_samples))) random_test_predictions = sess.run( tf.nn.top_k(tf.nn.softmax(loaded_logits), top_n_predictions), feed_dict={loaded_x: random_test_features, loaded_y: random_test_labels, loaded_keep_prob: 1.0}) helper.display_image_predictions(random_test_features, random_test_labels, random_test_predictions) test_model() """ Explanation: Checkpoint The model has been saved to disk. Test Model Test your model against the test dataset. This will be your final accuracy. You should have an accuracy greater than 50%. If you don't, keep tweaking the model architecture and parameters. End of explanation """