Standard Errors of the Standard Deviation Above we explored how the spread in our estimates of the mean changed with sample size. We can similarly explore how our estimates of the standard deviation of the population change as we vary our sample size.
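The plotting cell below assumes arrays std25, std50, std100, and std200 holding simulated sample standard deviations; they are built earlier in the notebook and are not shown in this excerpt. A minimal sketch of how such arrays could be generated, assuming popn is the array of population glucocorticoid values used elsewhere and that the number of simulated samples is illustrative:

```python
import numpy as np

def sampling_distn_of_std(popn, n, nsims=1000):
    """Draw nsims samples of size n from popn and return each sample's standard deviation."""
    return np.array([np.std(np.random.choice(popn, size=n, replace=True))
                     for _ in range(nsims)])

# illustrative reconstruction of the arrays used in the plotting cell below
std25  = sampling_distn_of_std(popn, 25)
std50  = sampling_distn_of_std(popn, 50)
std100 = sampling_distn_of_std(popn, 100)
std200 = sampling_distn_of_std(popn, 200)
```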
# the label arguments get used when we create a legend
# (note: newer matplotlib versions replace the deprecated normed=True with density=True)
plt.hist(std25, normed=True, alpha=0.75, histtype="stepfilled", label="n=25")
plt.hist(std50, normed=True, alpha=0.75, histtype="stepfilled", label="n=50")
plt.hist(std100, normed=True, alpha=0.75, histtype="stepfilled", label="n=100")
plt.hist(std200, normed=True, alpha=0.75, histtype="stepfilled", label="n=200")
plt.xlabel("Standard Deviation of Glucocorticoid Concentration")
plt.ylabel("Density")
plt.vlines(np.std(popn), 0, 9, linestyle='dashed', color='black', label="True Standard Deviation")
#plt.legend()
pass
Introduction-to-Simulation.ipynb
Bio204-class/bio204-notebooks
cc0-1.0
You can show mathematically, for normally distributed data, that the expected standard error of the standard deviation is approximately $$ \mbox{Standard Error of Standard Deviation} \approx \frac{\sigma}{\sqrt{2(n-1)}} $$ where $\sigma$ is the population standard deviation and $n$ is the sample size. Let's compare that theoretical expectation to our simulated estimates.
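The comparison cell below also assumes ss25, ss50, ss100, and ss200, the simulated standard errors of the standard deviation, i.e. the spread of each sampling distribution above. They are not defined in this excerpt; a plausible sketch, given the std arrays from the previous simulation, is:

```python
# simulated standard error of the SD = spread of each simulated sampling distribution
ss25, ss50, ss100, ss200 = [np.std(s) for s in (std25, std50, std100, std200)]
```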
x = [25,50,100,200] y = [ss25,ss50,ss100,ss200] plt.scatter(x,y, label="Simulation estimates") plt.xlabel("Sample size") plt.ylabel("Std Error of Std Dev") theory = [np.std(popn)/(np.sqrt(2.0*(i-1))) for i in range(10,250)] plt.plot(range(10,250), theory, color='red', label="Theoretical expectation") plt.xlim(0,300) plt.legend() pass
Introduction-to-Simulation.ipynb
Bio204-class/bio204-notebooks
cc0-1.0
TensorFlow Model Analysis An Example of a Key TFX Library This example colab notebook illustrates how TensorFlow Model Analysis (TFMA) can be used to investigate and visualize the characteristics of a dataset and the performance of a model. We'll use a model that we trained previously, and now you get to play with the results! The model we trained was for the Chicago Taxi Example, which uses the Taxi Trips dataset released by the City of Chicago. Note: This site provides applications using data that has been modified for use from its original source, www.cityofchicago.org, the official website of the City of Chicago. The City of Chicago makes no claims as to the content, accuracy, timeliness, or completeness of any of the data provided at this site. The data provided at this site is subject to change at any time. It is understood that the data provided at this site is being used at one’s own risk. Read more about the dataset in Google BigQuery. Explore the full dataset in the BigQuery UI. Key Point: As a modeler and developer, think about how this data is used and the potential benefits and harm a model's predictions can cause. A model like this could reinforce societal biases and disparities. Is a feature relevant to the problem you want to solve or will it introduce bias? For more information, read about <a target='_blank' href='https://developers.google.com/machine-learning/fairness-overview/'>ML fairness</a>. Key Point: In order to understand TFMA and how it works with Apache Beam, you'll need to know a little bit about Apache Beam itself. The <a target='_blank' href='https://beam.apache.org/documentation/programming-guide/'>Beam Programming Guide</a> is a great place to start. The columns in the dataset are: <table> <tr><td>pickup_community_area</td><td>fare</td><td>trip_start_month</td></tr> <tr><td>trip_start_hour</td><td>trip_start_day</td><td>trip_start_timestamp</td></tr> <tr><td>pickup_latitude</td><td>pickup_longitude</td><td>dropoff_latitude</td></tr> <tr><td>dropoff_longitude</td><td>trip_miles</td><td>pickup_census_tract</td></tr> <tr><td>dropoff_census_tract</td><td>payment_type</td><td>company</td></tr> <tr><td>trip_seconds</td><td>dropoff_community_area</td><td>tips</td></tr> </table> Install Jupyter Extensions Note: If running TFMA in a local Jupyter notebook, then these Jupyter extensions must be installed in the environment before running Jupyter. bash jupyter nbextension enable --py widgetsnbextension jupyter nbextension install --py --symlink tensorflow_model_analysis jupyter nbextension enable --py tensorflow_model_analysis Setup First, we install the necessary packages, download data, import modules and set up paths. Install TensorFlow, TensorFlow Model Analysis (TFMA) and TensorFlow Data Validation (TFDV)
!pip install -q -U \ tensorflow==2.0.0 \ tfx==0.15.0rc0
tfx_labs/Lab_6_Model_Analysis.ipynb
tensorflow/workshops
apache-2.0
Import packages We import necessary packages, including standard TFX component classes.
import csv import io import os import requests import tempfile import zipfile from google.protobuf import text_format import tensorflow as tf import tensorflow_data_validation as tfdv import tensorflow_model_analysis as tfma from tensorflow_metadata.proto.v0 import schema_pb2 tf.__version__ tfma.version.VERSION_STRING
tfx_labs/Lab_6_Model_Analysis.ipynb
tensorflow/workshops
apache-2.0
Load The Files We'll download a zip file that has everything we need, including: the training and evaluation datasets, the data schema, and the training results as EvalSavedModels. Note: We are downloading with HTTPS from a Google Cloud server.
# Download the zip file from GCP and unzip it
BASE_DIR = tempfile.mkdtemp()
TFMA_DIR = os.path.join(BASE_DIR, 'eval_saved_models-2.0')
DATA_DIR = os.path.join(TFMA_DIR, 'data')
OUTPUT_DIR = os.path.join(TFMA_DIR, 'output')
SCHEMA = os.path.join(TFMA_DIR, 'schema.pbtxt')

response = requests.get('https://storage.googleapis.com/tfx-colab-datasets/eval_saved_models-2.0.zip', stream=True)
zipfile.ZipFile(io.BytesIO(response.content)).extractall(BASE_DIR)

print("Here's what we downloaded:")
!cd {TFMA_DIR} && find .
tfx_labs/Lab_6_Model_Analysis.ipynb
tensorflow/workshops
apache-2.0
Parse the Schema Among the things we downloaded was a schema for our data that was created by TensorFlow Data Validation. Let's parse that now so that we can use it with TFMA.
schema = schema_pb2.Schema() contents = tf.io.read_file(SCHEMA).numpy() schema = text_format.Parse(contents, schema) tfdv.display_schema(schema)
tfx_labs/Lab_6_Model_Analysis.ipynb
tensorflow/workshops
apache-2.0
Use the Schema to Create TFRecords We need to give TFMA access to our dataset, so let's create a TFRecords file. We can use our schema to create it, since it gives us the correct type for each feature.
datafile = os.path.join(DATA_DIR, 'eval', 'data.csv') reader = csv.DictReader(open(datafile)) examples = [] for line in reader: example = tf.train.Example() for feature in schema.feature: key = feature.name if len(line[key]) > 0: if feature.type == schema_pb2.FLOAT: example.features.feature[key].float_list.value[:] = [float(line[key])] elif feature.type == schema_pb2.INT: example.features.feature[key].int64_list.value[:] = [int(line[key])] elif feature.type == schema_pb2.BYTES: example.features.feature[key].bytes_list.value[:] = [line[key].encode('utf8')] else: if feature.type == schema_pb2.FLOAT: example.features.feature[key].float_list.value[:] = [] elif feature.type == schema_pb2.INT: example.features.feature[key].int64_list.value[:] = [] elif feature.type == schema_pb2.BYTES: example.features.feature[key].bytes_list.value[:] = [] examples.append(example) TFRecord_file = os.path.join(BASE_DIR, 'train_data.rio') with tf.io.TFRecordWriter(TFRecord_file) as writer: for example in examples: writer.write(example.SerializeToString()) writer.flush() writer.close() !ls {TFRecord_file}
tfx_labs/Lab_6_Model_Analysis.ipynb
tensorflow/workshops
apache-2.0
Run TFMA and Render Metrics Now we're ready to create a function that we'll use to run TFMA and render metrics. It requires an EvalSavedModel, a list of SliceSpecs, and an index into the SliceSpec list. It will create an EvalResult using tfma.run_model_analysis, and use it to create a SlicingMetricsViewer using tfma.view.render_slicing_metrics, which will render a visualization of our dataset using the slice we created.
def run_and_render(eval_model=None, slice_list=None, slice_idx=0):
    """Runs the model analysis and renders the slicing metrics

    Args:
      eval_model: An instance of tf.saved_model saved with evaluation data
      slice_list: A list of tfma.slicer.SingleSliceSpec giving the slices
      slice_idx: An integer index into slice_list specifying the slice to use

    Returns:
      A SlicingMetricsViewer object if in Jupyter notebook; None if in Colab.
    """
    eval_result = tfma.run_model_analysis(eval_shared_model=eval_model,
                                          data_location=TFRecord_file,
                                          file_format='tfrecords',
                                          slice_spec=slice_list,
                                          output_path='sample_data',
                                          extractors=None)
    return tfma.view.render_slicing_metrics(
        eval_result, slicing_spec=slice_list[slice_idx] if slice_list else None)
tfx_labs/Lab_6_Model_Analysis.ipynb
tensorflow/workshops
apache-2.0
Slicing and Dicing We previously trained a model, and now we've loaded the results. Let's take a look at our visualizations, starting with using TFMA to slice along particular features. But first we need to read in the EvalSavedModel from one of our previous training runs. To define the slice you want to visualize, you create a tfma.slicer.SingleSliceSpec. To use tfma.view.render_slicing_metrics you can either use the name of the column (by setting slicing_column) or provide a tfma.slicer.SingleSliceSpec (by setting slicing_spec). If neither is provided, the overview will be displayed. Plots are interactive: click and drag to pan, scroll to zoom, and right click to reset the view. Simply hover over the desired data point to see more details. Select from four different types of plots using the selections at the bottom. For example, we'll be setting slicing_column to look at the trip_start_hour feature in our SliceSpec.
# Load the TFMA results for the first training run
# This will take a minute
eval_model_base_dir_0 = os.path.join(TFMA_DIR, 'run_0', 'eval_model_dir')
eval_model_dir_0 = os.path.join(eval_model_base_dir_0,
                                max(os.listdir(eval_model_base_dir_0)))
eval_shared_model_0 = tfma.default_eval_shared_model(
    eval_saved_model_path=eval_model_dir_0)

# Slice our data by the trip_start_hour feature
slices = [tfma.slicer.SingleSliceSpec(columns=['trip_start_hour'])]

run_and_render(eval_model=eval_shared_model_0, slice_list=slices, slice_idx=0)
tfx_labs/Lab_6_Model_Analysis.ipynb
tensorflow/workshops
apache-2.0
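As noted above, tfma.view.render_slicing_metrics can also be pointed at a column by name instead of being given a SingleSliceSpec. A hedged sketch of that variant, not part of the original notebook, reusing the eval_shared_model_0, slices, and TFRecord_file defined above:

```python
# Run the analysis once, then render by column name rather than by slice spec.
eval_result = tfma.run_model_analysis(eval_shared_model=eval_shared_model_0,
                                      data_location=TFRecord_file,
                                      file_format='tfrecords',
                                      slice_spec=slices,
                                      output_path='sample_data_by_column',
                                      extractors=None)
tfma.view.render_slicing_metrics(eval_result, slicing_column='trip_start_hour')
```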
Slices Overview The default visualization is the Slices Overview when the number of slices is small. It shows the values of metrics for each slice. Since we've selected trip_start_hour above, it's showing us metrics like accuracy and AUC for each hour, which allows us to look for issues that are specific to some hours and not others. In the visualization above: Try sorting the feature column, which is our trip_start_hour feature, by clicking on the column header. Try sorting by precision, and notice that the precision for some of the hours with examples is 0, which may indicate a problem. The chart also allows us to select and display different metrics in our slices. Try selecting different metrics from the "Show" menu. Try selecting recall in the "Show" menu, and notice that the recall for some of the hours with examples is 0, which may indicate a problem. It is also possible to set a threshold to filter out slices with smaller numbers of examples, or "weights". You can type a minimum number of examples, or use the slider. Metrics Histogram This view also supports a Metrics Histogram as an alternative visualization, which is also the default view when the number of slices is large. The results will be divided into buckets and the number of slices / total weights / both can be visualized. Columns can be sorted by clicking on the column header. Slices with small weights can be filtered out by setting the threshold. Further filtering can be applied by dragging the grey band. To reset the range, double click the band. Filtering can also be used to remove outliers in the visualization and the metrics tables. Click the gear icon to switch to a logarithmic scale instead of a linear scale. Try selecting "Metrics Histogram" in the Visualization menu. More Slices Let's create a whole list of SliceSpecs, which will allow us to select any of the slices in the list. We'll select the trip_start_day slice (days of the week) by setting the slice_idx to 1. Try changing the slice_idx to 0 or 2 and running again to examine different slices.
slices = [tfma.slicer.SingleSliceSpec(columns=['trip_start_hour']), tfma.slicer.SingleSliceSpec(columns=['trip_start_day']), tfma.slicer.SingleSliceSpec(columns=['trip_start_month'])] run_and_render(eval_model=eval_shared_model_0, slice_list=slices, slice_idx=0)
tfx_labs/Lab_6_Model_Analysis.ipynb
tensorflow/workshops
apache-2.0
You can create feature crosses to analyze combinations of features. Let's create a SliceSpec to look at a cross of trip_start_day and trip_start_hour:
slices = [tfma.slicer.SingleSliceSpec(columns=['trip_start_day', 'trip_start_hour'])] run_and_render(eval_shared_model_0, slices, 0)
tfx_labs/Lab_6_Model_Analysis.ipynb
tensorflow/workshops
apache-2.0
Crossing the two columns creates a lot of combinations! Let's narrow down our cross to only look at trips that start at noon. Then let's select accuracy from the visualization:
slices = [tfma.slicer.SingleSliceSpec(columns=['trip_start_day'], features=[('trip_start_hour', 12)])] run_and_render(eval_shared_model_0, slices, 0)
tfx_labs/Lab_6_Model_Analysis.ipynb
tensorflow/workshops
apache-2.0
Tracking Model Performance Over Time Your training dataset will be used for training your model, and will hopefully be representative of your test dataset and the data that will be sent to your model in production. However, while the data in inference requests may remain the same as your training data, in many cases it will start to change enough so that the performance of your model will change. That means that you need to monitor and measure your model's performance on an ongoing basis, so that you can be aware of and react to changes. Let's take a look at how TFMA can help. Measure Performance For New Data We downloaded the results of three different training runs above, so let's load them now:
def get_eval_result(base_dir, run_name, data_loc, slice_spec):
    eval_model_base_dir = os.path.join(base_dir, run_name, "eval_model_dir")
    versions = os.listdir(eval_model_base_dir)
    eval_model_dir = os.path.join(eval_model_base_dir, max(versions))
    output_dir = os.path.join(base_dir, "output", run_name)

    eval_shared_model = tfma.default_eval_shared_model(eval_saved_model_path=eval_model_dir)

    return tfma.run_model_analysis(eval_shared_model=eval_shared_model,
                                   data_location=data_loc,
                                   file_format='tfrecords',
                                   slice_spec=slice_spec,
                                   output_path=output_dir,
                                   extractors=None)

slices = [tfma.slicer.SingleSliceSpec()]
result_ts0 = get_eval_result(TFMA_DIR, 'run_0', TFRecord_file, slices)
result_ts1 = get_eval_result(TFMA_DIR, 'run_1', TFRecord_file, slices)
result_ts2 = get_eval_result(TFMA_DIR, 'run_2', TFRecord_file, slices)
tfx_labs/Lab_6_Model_Analysis.ipynb
tensorflow/workshops
apache-2.0
Next, let's use TFMA to see how these runs compare using render_time_series. How does it look today? First, we'll imagine that we trained and deployed our model yesterday, and now we want to see how it's doing on the new data coming in today. We can specify particular slices to look at. Let's compare our training runs for trips that started at noon. Note: * The visualization will start by displaying accuracy. Add AUC and average loss by using the "Add metric series" menu. * Hover over the curves to see the values. * In the metric series charts, the X axis is the model ID number of the model run that you're examining. The numbers themselves are not meaningful.
output_dirs = [os.path.join(TFMA_DIR, "output", run_name) for run_name in ("run_0", "run_1", "run_2")] eval_results_from_disk = tfma.load_eval_results( output_dirs[:2], tfma.constants.MODEL_CENTRIC_MODE) tfma.view.render_time_series(eval_results_from_disk, slices[0])
tfx_labs/Lab_6_Model_Analysis.ipynb
tensorflow/workshops
apache-2.0
Now we'll imagine that another day has passed and we want to see how it's doing on the new data coming in today, compared to the previous two days. Again add AUC and average loss by using the "Add metric series" menu:
eval_results_from_disk = tfma.load_eval_results( output_dirs, tfma.constants.MODEL_CENTRIC_MODE) tfma.view.render_time_series(eval_results_from_disk, slices[0])
tfx_labs/Lab_6_Model_Analysis.ipynb
tensorflow/workshops
apache-2.0
Several Useful Functions These are functions that I reuse often to encode the feature vector (FV).
# These are several handy functions that I use in my class: # Encode a text field to dummy variables def encode_text_dummy(df,name): dummies = pd.get_dummies(df[name]) for x in dummies.columns: dummy_name = "{}-{}".format(name,x) df[dummy_name] = dummies[x] df.drop(name, axis=1, inplace=True) # Encode a text field to a single index value def encode_text_index(df,name): le = preprocessing.LabelEncoder() df[name] = le.fit_transform(df[name]) return le.classes_ # Encode a numeric field to Z-Scores def encode_numeric_zscore(df,name,mean=None,sd=None): if mean is None: mean = df[name].mean() if sd is None: sd = df[name].std() df[name] = (df[name]-mean)/sd # Encode a numeric field to fill missing values with the median. def missing_median(df, name): med = df[name].median() df[name] = df[name].fillna(med) # Convert a dataframe to x/y suitable for training. def to_xy(df,target): result = [] for x in df.columns: if x != target: result.append(x) return df.as_matrix(result),df[target]
tf_kdd99.ipynb
jbliss1234/ML
apache-2.0
Read in Raw KDD-99 Dataset
# This file is a CSV, just no CSV extension or headers # Download from: http://kdd.ics.uci.edu/databases/kddcup99/kddcup99.html df = pd.read_csv("/Users/jeff/Downloads/data/kddcup.data_10_percent", header=None) print("Read {} rows.".format(len(df))) # df = df.sample(frac=0.1, replace=False) # Uncomment this line to sample only 10% of the dataset df.dropna(inplace=True,axis=1) # For now, just drop NA's (rows with missing values) # The CSV file has no column heads, so add them df.columns = [ 'duration', 'protocol_type', 'service', 'flag', 'src_bytes', 'dst_bytes', 'land', 'wrong_fragment', 'urgent', 'hot', 'num_failed_logins', 'logged_in', 'num_compromised', 'root_shell', 'su_attempted', 'num_root', 'num_file_creations', 'num_shells', 'num_access_files', 'num_outbound_cmds', 'is_host_login', 'is_guest_login', 'count', 'srv_count', 'serror_rate', 'srv_serror_rate', 'rerror_rate', 'srv_rerror_rate', 'same_srv_rate', 'diff_srv_rate', 'srv_diff_host_rate', 'dst_host_count', 'dst_host_srv_count', 'dst_host_same_srv_rate', 'dst_host_diff_srv_rate', 'dst_host_same_src_port_rate', 'dst_host_srv_diff_host_rate', 'dst_host_serror_rate', 'dst_host_srv_serror_rate', 'dst_host_rerror_rate', 'dst_host_srv_rerror_rate', 'outcome' ] # display 5 rows df[0:5]
tf_kdd99.ipynb
jbliss1234/ML
apache-2.0
Encode the feature vector Encode every row in the database. This is not instant!
# Now encode the feature vector encode_numeric_zscore(df, 'duration') encode_text_dummy(df, 'protocol_type') encode_text_dummy(df, 'service') encode_text_dummy(df, 'flag') encode_numeric_zscore(df, 'src_bytes') encode_numeric_zscore(df, 'dst_bytes') encode_text_dummy(df, 'land') encode_numeric_zscore(df, 'wrong_fragment') encode_numeric_zscore(df, 'urgent') encode_numeric_zscore(df, 'hot') encode_numeric_zscore(df, 'num_failed_logins') encode_text_dummy(df, 'logged_in') encode_numeric_zscore(df, 'num_compromised') encode_numeric_zscore(df, 'root_shell') encode_numeric_zscore(df, 'su_attempted') encode_numeric_zscore(df, 'num_root') encode_numeric_zscore(df, 'num_file_creations') encode_numeric_zscore(df, 'num_shells') encode_numeric_zscore(df, 'num_access_files') encode_numeric_zscore(df, 'num_outbound_cmds') encode_text_dummy(df, 'is_host_login') encode_text_dummy(df, 'is_guest_login') encode_numeric_zscore(df, 'count') encode_numeric_zscore(df, 'srv_count') encode_numeric_zscore(df, 'serror_rate') encode_numeric_zscore(df, 'srv_serror_rate') encode_numeric_zscore(df, 'rerror_rate') encode_numeric_zscore(df, 'srv_rerror_rate') encode_numeric_zscore(df, 'same_srv_rate') encode_numeric_zscore(df, 'diff_srv_rate') encode_numeric_zscore(df, 'srv_diff_host_rate') encode_numeric_zscore(df, 'dst_host_count') encode_numeric_zscore(df, 'dst_host_srv_count') encode_numeric_zscore(df, 'dst_host_same_srv_rate') encode_numeric_zscore(df, 'dst_host_diff_srv_rate') encode_numeric_zscore(df, 'dst_host_same_src_port_rate') encode_numeric_zscore(df, 'dst_host_srv_diff_host_rate') encode_numeric_zscore(df, 'dst_host_serror_rate') encode_numeric_zscore(df, 'dst_host_srv_serror_rate') encode_numeric_zscore(df, 'dst_host_rerror_rate') encode_numeric_zscore(df, 'dst_host_srv_rerror_rate') outcomes = encode_text_index(df, 'outcome') num_classes = len(outcomes) # display 5 rows df.dropna(inplace=True,axis=1) df[0:5] # This is the numeric feature vector, as it goes to the neural net
tf_kdd99.ipynb
jbliss1234/ML
apache-2.0
Train the Neural Network
# Break into X (predictors) & y (prediction) x, y = to_xy(df,'outcome') # Create a test/train split. 25% test # Split into train/test x_train, x_test, y_train, y_test = train_test_split( x, y, test_size=0.25, random_state=42) # Create a deep neural network with 3 hidden layers of 10, 20, 10 classifier = skflow.TensorFlowDNNClassifier(hidden_units=[10, 20, 10], n_classes=num_classes, steps=500) # Early stopping early_stop = skflow.monitors.ValidationMonitor(x_test, y_test, early_stopping_rounds=200, n_classes=num_classes, print_steps=50) # Fit/train neural network classifier.fit(x, y, early_stop) # Measure accuracy pred = classifier.predict(x_test) score = metrics.accuracy_score(y_test, pred) print("Validation score: {}".format(score))
tf_kdd99.ipynb
jbliss1234/ML
apache-2.0
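The training cell above relies on the old skflow API (TensorFlowDNNClassifier and ValidationMonitor), which has long since been removed from TensorFlow. Purely as a hedged sketch, not part of the original notebook, a rough modern equivalent with tf.keras, reusing x_train, x_test, y_train, y_test, and num_classes from the cells above, might look like:

```python
import tensorflow as tf

# Same 10-20-10 hidden-layer architecture as the original classifier
model = tf.keras.Sequential([
    tf.keras.layers.Dense(10, activation='relu', input_shape=(x_train.shape[1],)),
    tf.keras.layers.Dense(20, activation='relu'),
    tf.keras.layers.Dense(10, activation='relu'),
    tf.keras.layers.Dense(num_classes, activation='softmax'),
])
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',  # y holds integer class indices
              metrics=['accuracy'])

# Early stopping on the held-out split, loosely mirroring the original ValidationMonitor
early_stop = tf.keras.callbacks.EarlyStopping(monitor='val_loss', patience=5,
                                              restore_best_weights=True)
model.fit(x_train.astype('float32'), y_train,
          validation_data=(x_test.astype('float32'), y_test),
          epochs=20, callbacks=[early_stop], verbose=0)

_, acc = model.evaluate(x_test.astype('float32'), y_test, verbose=0)
print("Validation accuracy: {:.4f}".format(acc))
```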
nltk If you are using the nltk library for the first time, you need to run the following command. It downloads many text corpora, as well as grammatical information about various languages; this information is needed in particular for the stemming step.
# nltk.download("all")
Cdiscount/Part1-1-AIF-PythonNltk-Explore&CleanText-Cdiscount.ipynb
wikistat/Ateliers-Big-Data
mit
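Downloading everything with nltk.download("all") is very large; if only this notebook's needs matter, a lighter alternative (a sketch, assuming only the French stopword list used later is required, since the Snowball stemmer ships with nltk and needs no download) is:

```python
import nltk

# Only the stopwords corpus is needed for the cleaning steps in this notebook
nltk.download("stopwords")
```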
The data In the Cdiscount/data folder of this repository you will find the following files: cdiscount_train.csv.zip: training file made up of 1,000,000 lines; cdiscount_test.csv.zip: test file made up of 50,000 lines. ### Read & Split Dataset We define a function that reads the training file and creates two Pandas DataFrames, one for training and one for validation. The function builds a DataFrame by reading the whole file, then splits this DataFrame in two using the dedicated sklearn function.
def split_dataset(input_path, nb_line, tauxValid):
    data_all = pd.read_csv(input_path, sep=",", nrows=nb_line)
    data_all = data_all.fillna("")
    data_train, data_valid = scv.train_test_split(data_all, test_size=tauxValid)
    return data_train, data_valid
Cdiscount/Part1-1-AIF-PythonNltk-Explore&CleanText-Cdiscount.ipynb
wikistat/Ateliers-Big-Data
mit
Although already much smaller than the original competition file, which contains more than 15M lines, the cdiscount_train.csv.zip file, with its 1M lines, is still large. We will load only part of this file into memory, via the nb_line argument, to avoid overly expensive computation times. We will set aside 5% of the loaded lines as a validation sample.
input_path = "data/cdiscount_train.csv.zip" nb_line=100000 # part totale extraite du fichier initial ici déjà réduit tauxValid = 0.05 data_train, data_valid = split_dataset(input_path, nb_line, tauxValid) # Cette ligne permet de visualiser les 5 premières lignes de la DataFrame N_train = data_train.shape[0] N_valid = data_valid.shape[0] print("Train set : %d elements, Validation set : %d elements" %(N_train, N_valid))
Cdiscount/Part1-1-AIF-PythonNltk-Explore&CleanText-Cdiscount.ipynb
wikistat/Ateliers-Big-Data
mit
The following command displays the first lines of the file. You can see that each product has 3 levels of categories, which correspond to the different levels of the tree structure you will find on the website. There are 44 level-1 categories, 428 level-2 categories and 3170 level-3 categories. In this lab, we are only interested in classifying products into their level-1 category.
data_train.head(5)
Cdiscount/Part1-1-AIF-PythonNltk-Explore&CleanText-Cdiscount.ipynb
wikistat/Ateliers-Big-Data
mit
The following command displays one example product for each level-1 category.
data_train.groupby("Categorie1").first()[["Description","Libelle","Marque"]]
Cdiscount/Part1-1-AIF-PythonNltk-Explore&CleanText-Cdiscount.ipynb
wikistat/Ateliers-Big-Data
mit
Class distribution
# Count occurrences of each Categorie1
data_count = data_train["Categorie1"].value_counts()

# Rename index to add percentage
new_index = [k + ": %.2f%%" % (v * 100 / N_train) for k, v in data_count.iteritems()]
data_count.index = new_index

fig = plt.figure(figsize=(10, 10))
ax = fig.add_subplot(1, 1, 1)
data_count.plot.barh(logx=False)
plt.show()
Cdiscount/Part1-1-AIF-PythonNltk-Explore&CleanText-Cdiscount.ipynb
wikistat/Ateliers-Big-Data
mit
Q What can we say about the distribution of these classes? Saving the data We save the train and validation sets to CSV files so that the same files can be reused later in other notebooks.
data_valid.to_csv("data/cdiscount_valid.csv", index=False) data_train.to_csv("data/cdiscount_train_subset.csv", index=False)
Cdiscount/Part1-1-AIF-PythonNltk-Explore&CleanText-Cdiscount.ipynb
wikistat/Ateliers-Big-Data
mit
Data cleaning To limit the dimension of the space of variables or features (i.e. the words present in the documents), while keeping the essential information, the data must be cleaned by applying several steps: Every word is converted to lower case. Numeric terms, punctuation and other symbols are removed. 155 common, and therefore uninformative, French words are removed (STOPWORDS), e.g. le, la, du, alors, etc. Each word is "stemmed" via the STEMMER.stem function of the nltk library. Stemming reduces a word to its stem or root; for example, the words cheval, chevaux, chevalier, chevalerie, chevaucher are all replaced by "cheva". Example Let us first observe the effect of these different steps on a single example. Original line
i = 0 description = data_train.Description.values[i] print("Original Description : " + description)
Cdiscount/Part1-1-AIF-PythonNltk-Explore&CleanText-Cdiscount.ipynb
wikistat/Ateliers-Big-Data
mit
Removing possible HTML tags from the description Since product descriptions are sometimes extracted from other merchant websites, HTML tags may be included in the description. The 'BeautifulSoup' library is used to remove these tags.
from bs4 import BeautifulSoup

# HTML cleaning
txt = BeautifulSoup(description, "html.parser", from_encoding='utf-8').get_text()
print(txt)
Cdiscount/Part1-1-AIF-PythonNltk-Explore&CleanText-Cdiscount.ipynb
wikistat/Ateliers-Big-Data
mit
Converting the text to lower case Some words may be written in upper case in the text descriptions, which duplicates features and loses information.
txt = txt.lower() print(txt)
Cdiscount/Part1-1-AIF-PythonNltk-Explore&CleanText-Cdiscount.ipynb
wikistat/Ateliers-Big-Data
mit
Replacing special characters Some special characters are removed, for example: \u2026: … \u00a0: NO-BREAK SPACE. This list is not exhaustive and can be extended depending on the dataset studied, the intended goal or the results of the exploratory analysis.
txt = txt.replace(u'\u2026','.') txt = txt.replace(u'\u00a0',' ') print(txt)
Cdiscount/Part1-1-AIF-PythonNltk-Explore&CleanText-Cdiscount.ipynb
wikistat/Ateliers-Big-Data
mit
Removing accents
txt = unicodedata.normalize('NFD', txt).encode('ascii', 'ignore').decode("utf-8") print(txt)
Cdiscount/Part1-1-AIF-PythonNltk-Explore&CleanText-Cdiscount.ipynb
wikistat/Ateliers-Big-Data
mit
Remove characters that are not lower-case letters Once these first steps are done, we remove all characters that are not lower-case letters, i.e. punctuation marks, numeric characters, etc.
txt = re.sub('[^a-z_]', ' ', txt) print(txt)
Cdiscount/Part1-1-AIF-PythonNltk-Explore&CleanText-Cdiscount.ipynb
wikistat/Ateliers-Big-Data
mit
Replace the description with a list of words (tokens), removing words shorter than 3 letters as well as stopwords We now remove all the words considered "uninformative", for example "le", "la", "de"... Lists of such words are provided by libraries such as nltk or Lucene.
## Lists of words to remove from the product descriptions
## From NLTK
nltk_stopwords = nltk.corpus.stopwords.words('french')
## From an external file
lucene_stopwords = open("data/lucene_stopwords.txt", "r").read().split(",")  # local file
## Union of the two stopword lists
stopwords = list(set(nltk_stopwords).union(set(lucene_stopwords)))
stopwords[:10]
Cdiscount/Part1-1-AIF-PythonNltk-Explore&CleanText-Cdiscount.ipynb
wikistat/Ateliers-Big-Data
mit
We also apply accent removal to this list.
stopwords = [unicodedata.normalize('NFD', sw).encode('ascii', 'ignore').decode("utf-8") for sw in stopwords] stopwords[:10]
Cdiscount/Part1-1-AIF-PythonNltk-Explore&CleanText-Cdiscount.ipynb
wikistat/Ateliers-Big-Data
mit
Finally, we create the tokens, the list of words of the product description, by removing from the description the elements that appear in the stopword list.
tokens = [w for w in txt.split() if (len(w) > 2) and (w not in stopwords)]
remove_words = [w for w in txt.split() if (len(w) <= 2) or (w in stopwords)]
print(tokens)
print(remove_words)
Cdiscount/Part1-1-AIF-PythonNltk-Explore&CleanText-Cdiscount.ipynb
wikistat/Ateliers-Big-Data
mit
Stemming each token For each word in our token list, we reduce the word to its stem in the sense of the Snowball algorithm available in the nltk library. This list of cleaned and stemmed words will constitute the features of the product description.
## Snowball stemmer used for stemming (racinisation)
stemmer = nltk.stem.SnowballStemmer('french')
tokens_stem = [stemmer.stem(token) for token in tokens]
print(tokens_stem)
Cdiscount/Part1-1-AIF-PythonNltk-Explore&CleanText-Cdiscount.ipynb
wikistat/Ateliers-Big-Data
mit
Text cleaning function We define a function clean_txt that takes a product description text as input and returns the cleaned text by successively applying the steps presented above. We also define a function clean_marque that performs significantly fewer cleaning steps.
# Fonction clean générale def clean_txt(txt): ### remove html stuff txt = BeautifulSoup(txt,"html.parser",from_encoding='utf-8').get_text() ### lower case txt = txt.lower() ### special escaping character '...' txt = txt.replace(u'\u2026','.') txt = txt.replace(u'\u00a0',' ') ### remove accent btw txt = unicodedata.normalize('NFD', txt).encode('ascii', 'ignore').decode("utf-8") ###txt = unidecode(txt) ### remove non alphanumeric char txt = re.sub('[^a-z_]', ' ', txt) ### remove french stop words tokens = [w for w in txt.split() if (len(w)>2) and (w not in stopwords)] ### french stemming tokens_stem = [stemmer.stem(token) for token in tokens] ### tokens = stemmer.stemWords(tokens) return ' '.join(tokens), " ".join(tokens_stem) def clean_marque(txt): txt = re.sub('[^a-zA-Z0-9]', '_', txt).lower() return txt
Cdiscount/Part1-1-AIF-PythonNltk-Explore&CleanText-Cdiscount.ipynb
wikistat/Ateliers-Big-Data
mit
Apply the cleaning to every row of the DataFrame and create two new DataFrames (with and without the stemming step).
# fonction de nettoyage du fichier(stemming et liste de mots à supprimer) def clean_df(input_data, column_names= ['Description', 'Libelle', 'Marque']): nb_line = input_data.shape[0] print("Start Clean %d lines" %nb_line) # Cleaning start for each columns time_start = time.time() clean_list=[] clean_stem_list=[] for column_name in column_names: column = input_data[column_name].values if column_name == "Marque": array_clean = np.array(list(map(clean_marque,column))) clean_list.append(array_clean) clean_stem_list.append(array_clean) else: A = np.array(list(map(clean_txt,column))) array_clean = A[:,0] array_clean_stem = A[:,1] clean_list.append(array_clean) clean_stem_list.append(array_clean_stem) time_end = time.time() print("Cleaning time: %d secondes"%(time_end-time_start)) #Convert list to DataFrame array_clean = np.array(clean_list).T data_clean = pd.DataFrame(array_clean, columns = column_names) array_clean_stem = np.array(clean_stem_list).T data_clean_stem = pd.DataFrame(array_clean_stem, columns = column_names) return data_clean, data_clean_stem
Cdiscount/Part1-1-AIF-PythonNltk-Explore&CleanText-Cdiscount.ipynb
wikistat/Ateliers-Big-Data
mit
Cleaning the DataFrames
# Takes approximately 2 minutes for 100,000 rows
warnings.filterwarnings("ignore")
data_valid_clean, data_valid_clean_stem = clean_df(data_valid)
warnings.filterwarnings("ignore")
data_train_clean, data_train_clean_stem = clean_df(data_train)
Cdiscount/Part1-1-AIF-PythonNltk-Explore&CleanText-Cdiscount.ipynb
wikistat/Ateliers-Big-Data
mit
Display the first 5 rows of the training DataFrame after cleaning.
data_train_clean.head(5) data_train_clean_stem.head(5)
Cdiscount/Part1-1-AIF-PythonNltk-Explore&CleanText-Cdiscount.ipynb
wikistat/Ateliers-Big-Data
mit
Size of the word dictionary for the dataset before and after cleaning and stemming.
concatenate_text = " ".join(data_train["Description"].values) list_of_word = concatenate_text.split(" ") N = len(set(list_of_word)) print(N) concatenate_text = " ".join(data_train_clean["Description"].values) list_of_word = concatenate_text.split(" ") N = len(set(list_of_word)) print(N) concatenate_text = " ".join(data_train_clean_stem["Description"].values) list_of_word_stem = concatenate_text.split(" ") N = len(set(list_of_word_stem)) print(N)
Cdiscount/Part1-1-AIF-PythonNltk-Explore&CleanText-Cdiscount.ipynb
wikistat/Ateliers-Big-Data
mit
Wordcloud Wordcloud representations display the whole set of words of a corpus of documents. In this representation, the more frequently a word appears in the corpus, the larger it is drawn.
from wordcloud import WordCloud A=WordCloud(background_color="black") A.generate_from_text?
Cdiscount/Part1-1-AIF-PythonNltk-Explore&CleanText-Cdiscount.ipynb
wikistat/Ateliers-Big-Data
mit
Wordcloud of the full set of raw descriptions.
all_descr = " ".join(data_valid.Description.values) wordcloud_word = WordCloud(background_color="black", collocations=False).generate_from_text(all_descr) plt.figure(figsize=(10,10)) plt.imshow(wordcloud_word,cmap=plt.cm.Paired) plt.axis("off") plt.show()
Cdiscount/Part1-1-AIF-PythonNltk-Explore&CleanText-Cdiscount.ipynb
wikistat/Ateliers-Big-Data
mit
Wordcloud after stemming and cleaning
all_descr_clean_stem = " ".join(data_valid_clean_stem.Description.values) wordcloud_word = WordCloud(background_color="black", collocations=False).generate_from_text(all_descr_clean_stem) plt.figure(figsize=(10,10)) plt.imshow(wordcloud_word,cmap=plt.cm.Paired) plt.axis("off") plt.show()
Cdiscount/Part1-1-AIF-PythonNltk-Explore&CleanText-Cdiscount.ipynb
wikistat/Ateliers-Big-Data
mit
You can see that the words "voir" and "present" are the most represented. This is because most descriptions end with "Voir la présentation". These two words are therefore not informative, since they appear in many different categories. This is a good example of a stopword specific to a particular problem. Exercise Add the words voir and présentation to the stopword list above and re-run the cleaning. Exercise Generate the wordclouds per category for 3 categories of your choice. Saving the cleaned datasets to CSV files.
data_valid_clean.to_csv("data/cdiscount_valid_clean.csv", index=False) data_train_clean.to_csv("data/cdiscount_train_clean.csv", index=False) data_valid_clean_stem.to_csv("data/cdiscount_valid_clean_stem.csv", index=False) data_train_clean_stem.to_csv("data/cdiscount_train_clean_stem.csv", index=False)
Cdiscount/Part1-1-AIF-PythonNltk-Explore&CleanText-Cdiscount.ipynb
wikistat/Ateliers-Big-Data
mit
Compilers: Numba and Cython Requirements To get Cython working, WinPython 3.7+ users should install "Microsoft Visual C++ Build Tools 2017" (visualcppbuildtools_full.exe, a 4 GB installation) at https://beta.visualstudio.com/download-visual-studio-vs/ To get Numba working, non-Windows 10 users may have to install the "Microsoft Visual C++ Redistributable for Visual Studio 2017" (vc_redist) at https://beta.visualstudio.com/download-visual-studio-vs/ Thanks to recent progress, Visual Studio 2017/2018/2019 are cross-compatible now. Compiler toolchains Numba (a JIT compiler)
# checking Numba JIT toolchain import numpy as np image = np.zeros((1024, 1536), dtype = np.uint8) #from pylab import imshow, show import matplotlib.pyplot as plt from timeit import default_timer as timer from numba import jit @jit def create_fractal(min_x, max_x, min_y, max_y, image, iters , mandelx): height = image.shape[0] width = image.shape[1] pixel_size_x = (max_x - min_x) / width pixel_size_y = (max_y - min_y) / height for x in range(width): real = min_x + x * pixel_size_x for y in range(height): imag = min_y + y * pixel_size_y color = mandelx(real, imag, iters) image[y, x] = color @jit def mandel(x, y, max_iters): c = complex(x, y) z = 0.0j for i in range(max_iters): z = z*z + c if (z.real*z.real + z.imag*z.imag) >= 4: return i return max_iters # Numba speed start = timer() create_fractal(-2.0, 1.0, -1.0, 1.0, image, 20 , mandel) dt = timer() - start fig = plt.figure() print ("Mandelbrot created by numba in %f s" % dt) plt.imshow(image) plt.show()
docs/Winpython_checker.ipynb
stonebig/winpython_afterdoc
mit
Cython (a compiler for writing C extensions for the Python language) WinPython 3.5 and 3.6 users may not have mingwpy available, and so need "VisualStudio C++ Community Edition 2015" https://www.visualstudio.com/downloads/download-visual-studio-vs#d-visual-c
# Cython + Mingwpy compiler toolchain test %load_ext Cython %%cython -a # with %%cython -a , full C-speed lines are shown in white, slowest python-speed lines are shown in dark yellow lines # ==> put your cython rewrite effort on dark yellow lines def create_fractal_cython(min_x, max_x, min_y, max_y, image, iters , mandelx): height = image.shape[0] width = image.shape[1] pixel_size_x = (max_x - min_x) / width pixel_size_y = (max_y - min_y) / height for x in range(width): real = min_x + x * pixel_size_x for y in range(height): imag = min_y + y * pixel_size_y color = mandelx(real, imag, iters) image[y, x] = color def mandel_cython(x, y, max_iters): cdef int i cdef double cx, cy , zx, zy cx , cy = x, y zx , zy =0 ,0 for i in range(max_iters): zx , zy = zx*zx - zy*zy + cx , zx*zy*2 + cy if (zx*zx + zy*zy) >= 4: return i return max_iters #Cython speed start = timer() create_fractal_cython(-2.0, 1.0, -1.0, 1.0, image, 20 , mandel_cython) dt = timer() - start fig = plt.figure() print ("Mandelbrot created by cython in %f s" % dt) plt.imshow(image)
docs/Winpython_checker.ipynb
stonebig/winpython_afterdoc
mit
Graphics: Matplotlib, Pandas, Seaborn, Holoviews, Bokeh, bqplot, ipyleaflet, plotnine
# Matplotlib 3.4.1 # for more examples, see: http://matplotlib.org/gallery.html from mpl_toolkits.mplot3d import axes3d import matplotlib.pyplot as plt from matplotlib import cm ax = plt.figure().add_subplot(projection='3d') X, Y, Z = axes3d.get_test_data(0.05) # Plot the 3D surface ax.plot_surface(X, Y, Z, rstride=8, cstride=8, alpha=0.3) # Plot projections of the contours for each dimension. By choosing offsets # that match the appropriate axes limits, the projected contours will sit on # the 'walls' of the graph cset = ax.contourf(X, Y, Z, zdir='z', offset=-100, cmap=cm.coolwarm) cset = ax.contourf(X, Y, Z, zdir='x', offset=-40, cmap=cm.coolwarm) cset = ax.contourf(X, Y, Z, zdir='y', offset=40, cmap=cm.coolwarm) ax.set_xlim(-40, 40) ax.set_ylim(-40, 40) ax.set_zlim(-100, 100) ax.set_xlabel('X') ax.set_ylabel('Y') ax.set_zlabel('Z') plt.show() # Seaborn # for more examples, see http://stanford.edu/~mwaskom/software/seaborn/examples/index.html import seaborn as sns sns.set() df = sns.load_dataset("iris") sns.pairplot(df, hue="species", height=1.5) # altair-example import altair as alt alt.Chart(df).mark_bar().encode( x=alt.X('sepal_length', bin=alt.Bin(maxbins=50)), y='count(*):Q', color='species:N', #column='species', ).interactive() # temporary warning removal import warnings import matplotlib as mpl warnings.filterwarnings("ignore", category=mpl.cbook.MatplotlibDeprecationWarning) # Holoviews # for more example, see http://holoviews.org/Tutorials/index.html import numpy as np import holoviews as hv hv.extension('matplotlib') dots = np.linspace(-0.45, 0.45, 11) fractal = hv.Image(image) layouts = {y: (fractal * hv.Points(fractal.sample([(i,y) for i in dots])) + fractal.sample(y=y) ) for y in np.linspace(0, 0.45,11)} hv.HoloMap(layouts, kdims=['Y']).collate().cols(2) # Bokeh 0.12.5 import numpy as np from six.moves import zip from bokeh.plotting import figure, show, output_notebook N = 4000 x = np.random.random(size=N) * 100 y = np.random.random(size=N) * 100 radii = np.random.random(size=N) * 1.5 colors = ["#%02x%02x%02x" % (int(r), int(g), 150) for r, g in zip(50+2*x, 30+2*y)] output_notebook() TOOLS="hover,crosshair,pan,wheel_zoom,box_zoom,reset,tap,save,box_select,poly_select,lasso_select" p = figure(tools=TOOLS) p.scatter(x,y, radius=radii, fill_color=colors, fill_alpha=0.6, line_color=None) show(p) # Datashader (holoviews+Bokeh) import datashader as ds import numpy as np import holoviews as hv from holoviews import opts from holoviews.operation.datashader import datashade, shade, dynspread, spread, rasterize from holoviews.operation import decimate hv.extension('bokeh') decimate.max_samples=1000 dynspread.max_px=20 dynspread.threshold=0.5 def random_walk(n, f=5000): """Random walk in a 2D space, smoothed with a filter of length f""" xs = np.convolve(np.random.normal(0, 0.1, size=n), np.ones(f)/f).cumsum() ys = np.convolve(np.random.normal(0, 0.1, size=n), np.ones(f)/f).cumsum() xs += 0.1*np.sin(0.1*np.array(range(n-1+f))) # add wobble on x axis xs += np.random.normal(0, 0.005, size=n-1+f) # add measurement noise ys += np.random.normal(0, 0.005, size=n-1+f) return np.column_stack([xs, ys]) def random_cov(): """Random covariance for use in generating 2D Gaussian distributions""" A = np.random.randn(2,2) return np.dot(A, A.T) np.random.seed(1) points = hv.Points(np.random.multivariate_normal((0,0), [[0.1, 0.1], [0.1, 1.0]], (50000,)),label="Points") paths = hv.Path([0.15*random_walk(10000) for i in range(10)], kdims=["u","v"], label="Paths") decimate(points) + rasterize(points) + 
rasterize(paths) ropts = dict(colorbar=True, tools=["hover"], width=350) rasterize( points).opts(cmap="kbc_r", cnorm="linear").relabel('rasterize()').opts(**ropts).hist() + \ dynspread(datashade( points, cmap="kbc_r", cnorm="linear").relabel("datashade()")) #bqplot from IPython.display import display from bqplot import (Figure, Map, Mercator, Orthographic, ColorScale, ColorAxis, AlbersUSA, topo_load, Tooltip) def_tt = Tooltip(fields=['id', 'name']) map_mark = Map(scales={'projection': Mercator()}, tooltip=def_tt) map_mark.interactions = {'click': 'select', 'hover': 'tooltip'} fig = Figure(marks=[map_mark], title='Interactions Example') display(fig) # ipyleaflet (javascript library usage) from ipyleaflet import ( Map, Marker, TileLayer, ImageOverlay, Polyline, Polygon, Rectangle, Circle, CircleMarker, GeoJSON, DrawControl ) from traitlets import link center = [34.6252978589571, -77.34580993652344] m = Map(center=[34.6252978589571, -77.34580993652344], zoom=10) dc = DrawControl() def handle_draw(self, action, geo_json): print(action) print(geo_json) m m dc.on_draw(handle_draw) m.add_control(dc) %matplotlib widget # Testing matplotlib interactions with a simple plot import matplotlib.pyplot as plt import numpy as np # warning ; you need to launch a second time %matplotlib widget, if after a %matplotlib inline %matplotlib widget fig = plt.figure() #plt.figure(1) plt.plot(np.sin(np.linspace(0, 20, 100))) plt.show() # plotnine: giving a taste of ggplot of R langage (formerly we were using ggpy) from plotnine import ggplot, aes, geom_blank, geom_point, stat_smooth, facet_wrap, theme_bw from plotnine.data import mtcars ggplot(mtcars, aes(x='hp', y='wt', color='mpg')) + geom_point() +\ facet_wrap("~cyl") + theme_bw()
docs/Winpython_checker.ipynb
stonebig/winpython_afterdoc
mit
Ipython Notebook: Interactivity & other
import IPython;IPython.__version__ # Audio Example : https://github.com/ipython/ipywidgets/blob/master/examples/Beat%20Frequencies.ipynb %matplotlib inline import matplotlib.pyplot as plt import numpy as np from ipywidgets import interactive from IPython.display import Audio, display def beat_freq(f1=220.0, f2=224.0): max_time = 3 rate = 8000 times = np.linspace(0,max_time,rate*max_time) signal = np.sin(2*np.pi*f1*times) + np.sin(2*np.pi*f2*times) print(f1, f2, abs(f1-f2)) display(Audio(data=signal, rate=rate)) try: plt.plot(signal); #plt.plot(v.result); except: pass return signal v = interactive(beat_freq, f1=(200.0,300.0), f2=(200.0,300.0)) display(v) # Networks graph Example : https://github.com/ipython/ipywidgets/blob/master/examples/Exploring%20Graphs.ipynb %matplotlib inline from ipywidgets import interact import matplotlib.pyplot as plt import networkx as nx # wrap a few graph generation functions so they have the same signature def random_lobster(n, m, k, p): return nx.random_lobster(n, p, p / m) def powerlaw_cluster(n, m, k, p): return nx.powerlaw_cluster_graph(n, m, p) def erdos_renyi(n, m, k, p): return nx.erdos_renyi_graph(n, p) def newman_watts_strogatz(n, m, k, p): return nx.newman_watts_strogatz_graph(n, k, p) @interact(n=(2,30), m=(1,10), k=(1,10), p=(0.0, 1.0, 0.001), generator={'lobster': random_lobster, 'power law': powerlaw_cluster, 'Newman-Watts-Strogatz': newman_watts_strogatz, u'Erdős-Rényi': erdos_renyi, }) def plot_random_graph(n, m, k, p, generator): g = generator(n, m, k, p) nx.draw(g) plt.title(generator.__name__) plt.show()
docs/Winpython_checker.ipynb
stonebig/winpython_afterdoc
mit
Mathematical: statsmodels, lmfit
# checking statsmodels import numpy as np import matplotlib.pyplot as plt plt.style.use('ggplot') import statsmodels.api as sm data = sm.datasets.anes96.load_pandas() party_ID = np.arange(7) labels = ["Strong Democrat", "Weak Democrat", "Independent-Democrat", "Independent-Independent", "Independent-Republican", "Weak Republican", "Strong Republican"] plt.rcParams['figure.subplot.bottom'] = 0.23 # keep labels visible plt.rcParams['figure.figsize'] = (6.0, 4.0) # make plot larger in notebook age = [data.exog['age'][data.endog == id] for id in party_ID] fig = plt.figure() ax = fig.add_subplot(111) plot_opts={'cutoff_val':5, 'cutoff_type':'abs', 'label_fontsize':'small', 'label_rotation':30} sm.graphics.beanplot(age, ax=ax, labels=labels, plot_opts=plot_opts) ax.set_xlabel("Party identification of respondent") ax.set_ylabel("Age") plt.show() # lmfit test (from http://nbviewer.ipython.org/github/lmfit/lmfit-py/blob/master/examples/lmfit-model.ipynb) import numpy as np import matplotlib.pyplot as plt def decay(t, N, tau): return N*np.exp(-t/tau) t = np.linspace(0, 5, num=1000) data = decay(t, 7, 3) + np.random.randn(*t.shape) from lmfit import Model model = Model(decay, independent_vars=['t']) result = model.fit(data, t=t, N=10, tau=1) fig = plt.figure() # necessary to separate from previous ploot with %matplotlib widget plt.plot(t, data) # data plt.plot(t, decay(t=t, **result.values), color='orange', linewidth=5) # best-fit model
docs/Winpython_checker.ipynb
stonebig/winpython_afterdoc
mit
DataFrames: Pandas, Dask
# Pandas
import pandas as pd
import numpy as np

idx = pd.date_range('2000', '2005', freq='d', closed='left')
datas = pd.DataFrame({'Color': ['green' if x > 1 else 'red' for x in np.random.randn(len(idx))],
                      'Measure': np.random.randn(len(idx)),
                      'Year': idx.year},
                     index=idx.date)
datas.head()
docs/Winpython_checker.ipynb
stonebig/winpython_afterdoc
mit
Split / Apply / Combine Split your data into multiple independent groups. Apply some function to each group. Combine your groups back into a single data object.
datas.query('Measure > 0').groupby(['Color','Year']).size().unstack()
docs/Winpython_checker.ipynb
stonebig/winpython_afterdoc
mit
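For comparison, the same split/apply/combine pattern with an explicit aggregation on the datas frame built above: a small illustrative sketch, not from the original notebook.

```python
# Split by Color and Year, apply a mean over each group, combine into a table
datas.groupby(['Color', 'Year'])['Measure'].mean().unstack()
```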
Web Scraping: Beautifulsoup
# checking Web Scraping: beautifulsoup and requests import requests from bs4 import BeautifulSoup URL = 'http://en.wikipedia.org/wiki/Franklin,_Tennessee' req = requests.get(URL, headers={'User-Agent' : "Mining the Social Web"}) soup = BeautifulSoup(req.text, "lxml") geoTag = soup.find(True, 'geo') if geoTag and len(geoTag) > 1: lat = geoTag.find(True, 'latitude').string lon = geoTag.find(True, 'longitude').string print ('Location is at', lat, lon) elif geoTag and len(geoTag) == 1: (lat, lon) = geoTag.string.split(';') (lat, lon) = (lat.strip(), lon.strip()) print ('Location is at', lat, lon) else: print ('No location found')
docs/Winpython_checker.ipynb
stonebig/winpython_afterdoc
mit
Operations Research: Pulp
# Pulp example : minimizing the weight to carry 99 pennies # (from Philip I Thomas) # see https://www.youtube.com/watch?v=UmMn-N5w-lI#t=995 # Import PuLP modeler functions from pulp import * # The prob variable is created to contain the problem data prob = LpProblem("99_pennies_Problem",LpMinimize) # Variables represent how many of each coin we want to carry pennies = LpVariable("Number_of_pennies",0,None,LpInteger) nickels = LpVariable("Number_of_nickels",0,None,LpInteger) dimes = LpVariable("Number_of_dimes",0,None,LpInteger) quarters = LpVariable("Number_of_quarters",0,None,LpInteger) # The objective function is added to 'prob' first # we want to minimize (LpMinimize) this prob += 2.5 * pennies + 5 * nickels + 2.268 * dimes + 5.670 * quarters, "Total_coins_Weight" # We want exactly 99 cents prob += 1 * pennies + 5 * nickels + 10 * dimes + 25 * quarters == 99, "" # The problem data is written to an .lp file prob.writeLP("99cents.lp") prob.solve() # print ("status",LpStatus[prob.status] ) print ("Minimal Weight to carry exactly 99 pennies is %s grams" % value(prob.objective)) # Each of the variables is printed with it's resolved optimum value for v in prob.variables(): print (v.name, "=", v.varValue)
docs/Winpython_checker.ipynb
stonebig/winpython_afterdoc
mit
Deep Learning: see tutorial-first-neural-network-python-keras Symbolic Calculation: sympy
# checking sympy
import sympy

a, b = sympy.symbols('a b')
e = (a + b)**5
e.expand()
docs/Winpython_checker.ipynb
stonebig/winpython_afterdoc
mit
SQL tools: sqlite, Ipython-sql, sqlite_bro, baresql, db.py
# checking Ipython-sql, sqlparse, SQLalchemy %load_ext sql %%sql sqlite:///.baresql.db DROP TABLE IF EXISTS writer; CREATE TABLE writer (first_name, last_name, year_of_death); INSERT INTO writer VALUES ('William', 'Shakespeare', 1616); INSERT INTO writer VALUES ('Bertold', 'Brecht', 1956); SELECT * , sqlite_version() as sqlite_version from Writer order by Year_of_death # checking baresql from __future__ import print_function, unicode_literals, division # line needed only if Python2.7 from baresql import baresql bsql = baresql.baresql(connection="sqlite:///.baresql.db") bsqldf = lambda q: bsql.df(q, dict(globals(),**locals())) users = ['Alexander', 'Billy', 'Charles', 'Danielle', 'Esmeralda', 'Franz', 'Greg'] # We use the python 'users' list like a SQL table sql = "select 'Welcome ' || c0 || ' !' as say_hello, length(c0) as name_length from users$$ where c0 like '%a%' " bsqldf(sql) # Transfering Datas to sqlite, doing transformation in sql, going back to Pandas and Matplotlib bsqldf(''' select Color, Year, count(*) as size from datas$$ where Measure > 0 group by Color, Year''' ).set_index(['Year', 'Color']).unstack().plot(kind='bar') # checking db.py from db import DB db=DB(dbtype="sqlite", filename=".baresql.db") db.query("select sqlite_version() as sqlite_version ;") db.tables # checking sqlite_bro: this should lanch a separate non-browser window with sqlite_bro's welcome !cmd start cmd /C sqlite_bro # pyodbc or pypyodbc or ceODBC try: import pyodbc except ImportError: import pypyodbc as pyodbc # on PyPy, there is no pyodbc currently # look for pyodbc providers sources = pyodbc.dataSources() dsns = list(sources.keys()) sl = [' %s [%s]' % (dsn, sources[dsn]) for dsn in dsns] print("pyodbc Providers: (beware 32/64 bit driver and python version must match)\n", '\n'.join(sl)) # pythonnet import clr clr.AddReference("System.Data") clr.AddReference('System.Data.Common') import System.Data.OleDb as ADONET import System.Data.Odbc as ODBCNET import System.Data.Common as DATACOM table = DATACOM.DbProviderFactories.GetFactoryClasses() print("\n .NET Providers: (beware 32/64 bit driver and python version must match)") for row in table.Rows: print(" %s" % row[table.Columns[0]]) print(" ",[row[column] for column in table.Columns if column != table.Columns[0]])
docs/Winpython_checker.ipynb
stonebig/winpython_afterdoc
mit
Qt libraries Demo See Dedicated Qt Libraries Demo Wrap-up
# optional scipy full test (takes up to 10 minutes)
#!cmd /C start cmd /k python.exe -c "import scipy;scipy.test()"
%pip list
!jupyter labextension list
!pip check
!pipdeptree
!pipdeptree -p pip
docs/Winpython_checker.ipynb
stonebig/winpython_afterdoc
mit
IP Addresses of Compute Nodes
ips = saz.arm.view_info()
ipynb/Use Case - NIST Pedestrian and Face Detection on Simple Azure (under development).ipynb
lee212/simpleazure
gpl-3.0
Load Ansible API with IPs
from simpleazure.ansible_api import AnsibleAPI ansible_client = AnsibleAPI(ips)
ipynb/Use Case - NIST Pedestrian and Face Detection on Simple Azure (under development).ipynb
lee212/simpleazure
gpl-3.0
Download Ansible Playbooks from Github The Ansible scripts for Pedestrian and Face Detection are here: https://github.com/futuresystems/pedestrian-and-face-detection. We clone the repository using the Github command line tools.
from simpleazure.github_cli import GithubCLI git_client = GithubCLI() git_client.set_repo('https://github.com/futuresystems/pedestrian-and-face-detection') git_client.clone()
ipynb/Use Case - NIST Pedestrian and Face Detection on Simple Azure (under development).ipynb
lee212/simpleazure
gpl-3.0
Install Software Stacks to Targeted VMs
ansible_client.playbook(git_client.path + "/site.yml") ansible_client.run()
ipynb/Use Case - NIST Pedestrian and Face Detection on Simple Azure (under development).ipynb
lee212/simpleazure
gpl-3.0
Check shed words pattern-matching requirements Ref: - Dodds, P. S., Harris, K. D., Kloumann, I. M., Bliss, C. A., & Danforth, C. M. (2011). Temporal patterns of happiness and information in a global social network: Hedonometrics and Twitter. PLoS ONE, 6(12), e26752. Notes: - See 2.1 Algorithm for Hedonometer P3 - See Methods P23 Build shed words freq dicts for topic docs
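For reference, the quantity that these shed-word frequency dictionaries ultimately feed is the hedonometer's frequency-weighted average happiness of a text $T$, as described in Dodds et al. (2011); restating it here clarifies why only per-word frequencies $f_i$ need to be collected: $$ h_{\text{avg}}(T) \approx \frac{\sum_{i=1}^{N} h_{\text{avg}}(w_i)\, f_i}{\sum_{i=1}^{N} f_i} = \sum_{i=1}^{N} h_{\text{avg}}(w_i)\, p_i, \qquad p_i = \frac{f_i}{\sum_{j=1}^{N} f_j} $$ where $h_{\text{avg}}(w_i)$ is the average happiness score of word $w_i$ and $f_i$ its frequency in $T$.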
""" Check all shed words """ if 1 == 1: ind_shed_word_dict = pd.read_pickle(config.IND_SHED_WORD_DICT_PKL) print(ind_shed_word_dict.values())
develop/20171019-daheng-build_shed_words_freq_dicts.ipynb
adamwang0705/cross_media_affect_analysis
mit
Build single shed words freq dict for topic_news docs Result single dict format (for all topic_news docs) {topic_ind_0: { news_native_id_0_0: {shed_word_0_ind: shed_word_0_freq, shed_word_1_ind: shed_word_1_freq, ...}, news_native_id_0_1: {shed_word_0_ind: shed_word_0_freq, shed_word_1_ind: shed_word_1_freq, ...}, ...}, topic_ind_1: { news_native_id_1_0: {shed_word_0_ind: shed_word_0_freq, shed_word_1_ind: shed_word_1_freq, ...}, news_native_id_1_1: {shed_word_0_ind: shed_word_0_freq, shed_word_1_ind: shed_word_1_freq, ...}, ...}, ...} Build single shed words freq dict for all topic_news docs
%%time """ Build single shed words freq dict for all topic_news docs Register TOPICS_NEWS_SHED_WORDS_FREQ_DICT_PKL = os.path.join(DATA_DIR, 'topics_news_shed_words_freq.dict.pkl') in config """ if 0 == 1: topics_news_shed_words_freq_dict = {} for topic_ind, topic in enumerate(config.MANUALLY_SELECTED_TOPICS_LST): localtime = time.asctime(time.localtime(time.time())) print('({}/{}) processing topic: {} ... {}'.format(topic_ind+1, len(config.MANUALLY_SELECTED_TOPICS_LST), topic['name'], localtime)) topic_shed_words_freq_dict = {} ''' Load shed_word and shed_word_ind mapping pkls ''' ind_shed_word_dict = pd.read_pickle(config.IND_SHED_WORD_DICT_PKL) shed_word_ind_dict = pd.read_pickle(config.SHED_WORD_IND_DICT_PKL) shed_words_set = set(ind_shed_word_dict.values()) ''' Load topic_news doc ''' csv.register_dialect('topics_docs_line', delimiter='\t', doublequote=True, quoting=csv.QUOTE_ALL) topic_news_csv_file = os.path.join(config.TOPICS_DOCS_DIR, '{}-{}.news.csv'.format(topic_ind, topic['name'])) with open(topic_news_csv_file, 'r') as f: reader = csv.DictReader(f, dialect='topics_docs_line') ''' Count shed words freq for each tweet ''' # lazy load for row in reader: news_native_id = int(row['news_native_id']) news_doc = row['news_doc'] news_doc_shed_words_freq_dict = utilities.count_news_doc_shed_words_freq(news_doc, ind_shed_word_dict, shed_word_ind_dict, shed_words_set) topic_shed_words_freq_dict[news_native_id] = news_doc_shed_words_freq_dict topics_news_shed_words_freq_dict[topic_ind] = topic_shed_words_freq_dict ''' Make pkl for result single dict ''' with open(config.TOPICS_NEWS_SHED_WORDS_FREQ_DICT_PKL, 'wb') as f: pickle.dump(topics_news_shed_words_freq_dict, f)
develop/20171019-daheng-build_shed_words_freq_dicts.ipynb
adamwang0705/cross_media_affect_analysis
mit
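The cell above delegates the actual counting to utilities.count_news_doc_shed_words_freq, which lives in the project's utilities module and is not shown in this notebook. A rough sketch of what such a counter might look like — the regex tokenization is a simplifying assumption, and the real helper may apply the paper's pattern-matching rules more carefully:

```python
import re
from collections import Counter

def count_doc_shed_words_freq_sketch(doc, shed_word_ind_dict, shed_words_set):
    """Sketch of a shed-word counter: lowercase, split on non-word characters,
    keep only tokens in the shed-word vocabulary, and key counts by word index."""
    tokens = re.split(r'\W+', doc.lower())
    counts = Counter(tok for tok in tokens if tok in shed_words_set)
    return {shed_word_ind_dict[word]: freq for word, freq in counts.items()}
```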
Check basic statistics
""" Print out sample news shed_words_freq_dicts inside single topic """ if 0 == 1: target_topic_ind = 0 with open(config.TOPICS_NEWS_SHED_WORDS_FREQ_DICT_PKL, 'rb') as f: topics_news_shed_words_freq_dict = pickle.load(f) count = 0 for news_native_id, news_doc_shed_words_freq_dict in topics_news_shed_words_freq_dict[target_topic_ind].items(): print('news_native_id: {}'.format(news_native_id)) print('\t{}'.format(news_doc_shed_words_freq_dict)) news_doc_shed_words_len = sum(news_doc_shed_words_freq_dict.values()) print('\tLEN: {}'.format(news_doc_shed_words_len)) count += 1 if count >= 5: break %%time """ Check total shed words length of this topic_news doc """ if 0 == 1: topic_news_shed_words_len = sum([sum(news_doc_shed_words_freq_dict.values()) for news_doc_shed_words_freq_dict in topics_news_shed_words_freq_dict[target_topic_ind].values()]) print('Total shed words length of this topic_news doc: {}'.format(topic_news_shed_words_len))
develop/20171019-daheng-build_shed_words_freq_dicts.ipynb
adamwang0705/cross_media_affect_analysis
mit
Build shed words freq dicts for each topic_tweets doc separately Result dict format (for each given topic_tweets doc) {tweet_id_0_0: {shed_word_0_ind: shed_word_0_freq, shed_word_1_ind: shed_word_1_freq, ...}, tweet_id_0_1: {shed_word_0_ind: shed_word_0_freq, shed_word_1_ind: shed_word_1_freq, ...}, ...} Build shed words freq dict for each topic separately
%%time """ Build shed words freq dict for each topic separately Register TOPICS_TWEETS_SHED_WORDS_FREQ_DICT_PKLS_DIR = os.path.join(DATA_DIR, 'topics_tweets_shed_words_freq_dict_pkls') in config Note: - Number of tweets is large. Process each topic_tweets doc individually to avoid crash - Execute second time for updated topic_tweets docs """ if 0 == 1: for topic_ind, topic in enumerate(config.MANUALLY_SELECTED_TOPICS_LST): localtime = time.asctime(time.localtime(time.time())) print('({}/{}) processing topic: {} ... {}'.format(topic_ind+1, len(config.MANUALLY_SELECTED_TOPICS_LST), topic['name'], localtime)) topic_shed_words_freq_dict = {} ''' Load shed_word and shed_word_ind mapping pkls ''' ind_shed_word_dict = pd.read_pickle(config.IND_SHED_WORD_DICT_PKL) shed_word_ind_dict = pd.read_pickle(config.SHED_WORD_IND_DICT_PKL) shed_words_set = set(ind_shed_word_dict.values()) ''' Load topic_tweets doc ''' csv.register_dialect('topics_docs_line', delimiter='\t', doublequote=True, quoting=csv.QUOTE_ALL) topic_tweets_csv_file = os.path.join(config.TOPICS_DOCS_DIR, '{}-{}.updated.tweets.csv'.format(topic_ind, topic['name'])) with open(topic_tweets_csv_file, 'r') as f: reader = csv.DictReader(f, dialect='topics_docs_line') ''' Count shed words freq for each tweet ''' # lazy load for row in reader: tweet_id = int(row['tweet_id']) tweet_text = row['tweet_text'] tweet_shed_words_freq_dict = utilities.count_tweet_shed_words_freq(tweet_text, ind_shed_word_dict, shed_word_ind_dict, shed_words_set) topic_shed_words_freq_dict[tweet_id] = tweet_shed_words_freq_dict ''' Make pkl for result dict file ''' topic_tweets_shed_words_freq_dict_pkl_file = os.path.join(config.TOPICS_TWEETS_SHED_WORDS_FREQ_DICT_PKLS_DIR, '{}.updated.dict.pkl'.format(topic_ind)) with open(topic_tweets_shed_words_freq_dict_pkl_file, 'wb') as f: pickle.dump(topic_shed_words_freq_dict, f)
develop/20171019-daheng-build_shed_words_freq_dicts.ipynb
adamwang0705/cross_media_affect_analysis
mit
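Because each topic's tweet-level result is pickled to its own file, downstream analyses can stream the topics one at a time instead of holding every dict in memory — which is the point of the per-topic split noted in the cell above. A small sketch using the same file-naming convention:

```python
import os
import pickle

def iter_topic_tweet_dicts(pkls_dir, n_topics):
    """Yield (topic_ind, per-topic dict), loading one pickle at a time."""
    for topic_ind in range(n_topics):
        pkl_file = os.path.join(pkls_dir, '{}.updated.dict.pkl'.format(topic_ind))
        with open(pkl_file, 'rb') as f:
            yield topic_ind, pickle.load(f)

# e.g. total shed-word length per topic, without keeping all topics in memory:
# for topic_ind, d in iter_topic_tweet_dicts(config.TOPICS_TWEETS_SHED_WORDS_FREQ_DICT_PKLS_DIR,
#                                            len(config.MANUALLY_SELECTED_TOPICS_LST)):
#     print(topic_ind, sum(sum(freqs.values()) for freqs in d.values()))
```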
Check basic statistics
%%time """ Print out sample tweet shed_words_freq_dicts inside single topic """ if 0 == 1: target_topic_ind = 0 topic_tweets_shed_words_freq_dict_pkl_file = os.path.join(config.TOPICS_TWEETS_SHED_WORDS_FREQ_DICT_PKLS_DIR, '{}.updated.dict.pkl'.format(target_topic_ind)) with open(topic_tweets_shed_words_freq_dict_pkl_file, 'rb') as f: topic_tweets_shed_words_freq_dict_tmp = pickle.load(f) count = 0 for tweet_id, tweet_shed_words_freq_dict in topic_tweets_shed_words_freq_dict_tmp.items(): print('tweet_id: {}'.format(tweet_id)) print('\t{}'.format(tweet_shed_words_freq_dict)) tweet_shed_words_len = sum(tweet_shed_words_freq_dict.values()) print('\tLEN: {}'.format(tweet_shed_words_len)) count += 1 if count >= 20: break %%time """ Check total shed words length of a topic_tweets doc """ if 0 == 1: topic_tweets_shed_words_len = sum([sum(tweet_shed_words_freq_dict.values()) for tweet_shed_words_freq_dict in topic_tweets_shed_words_freq_dict_tmp.values()]) print('Total shed words length of this topic_tweets_doc: {}'.format(topic_tweets_shed_words_len))
develop/20171019-daheng-build_shed_words_freq_dicts.ipynb
adamwang0705/cross_media_affect_analysis
mit
Linear classifier on sensor data with plot patterns and filters Decoding, a.k.a. MVPA or supervised machine learning applied to MEG and EEG data in sensor space. Fit a linear classifier with the LinearModel object providing topographical patterns which are more neurophysiologically interpretable [1]_ than the classifier filters (weight vectors). The patterns explain how the MEG and EEG data were generated from the discriminant neural sources which are extracted by the filters. Note that patterns and filters are more similar for MEG than for EEG data because the noise is less spatially correlated in MEG than in EEG. References .. [1] Haufe, S., Meinecke, F., Görgen, K., Dähne, S., Haynes, J.-D., Blankertz, B., & Bießmann, F. (2014). On the interpretation of weight vectors of linear models in multivariate neuroimaging. NeuroImage, 87, 96–110. doi:10.1016/j.neuroimage.2013.10.067
# Authors: Alexandre Gramfort <alexandre.gramfort@telecom-paristech.fr>
#          Romain Trachel <trachelr@gmail.com>
#          Jean-Remi King <jeanremi.king@gmail.com>
#
# License: BSD (3-clause)

import mne
from mne import io, EvokedArray
from mne.datasets import sample
from mne.decoding import Vectorizer, get_coef

from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# import a linear classifier from mne.decoding
from mne.decoding import LinearModel

print(__doc__)
data_path = sample.data_path()
0.14/_downloads/plot_linear_model_patterns.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
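For intuition about the patterns-vs-filters distinction described above: Haufe et al. (2014) show that, for a linear model, the activation pattern is (up to scaling) the data covariance applied to the filter. A toy NumPy sketch of that relationship, independent of MNE's actual implementation:

```python
import numpy as np

rng = np.random.RandomState(0)
X_toy = rng.randn(200, 10)   # 200 trials x 10 channels of toy data
w = rng.randn(10)            # a filter (weight vector) from some linear model

# Haufe et al. (2014): for one latent component, pattern a is proportional to cov(X) @ w
pattern = np.cov(X_toy, rowvar=False).dot(w)

# The filter says how to extract the source from the channels;
# the pattern says how that source projects back onto the channels.
print(np.round(pattern, 2))
```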
Set parameters
raw_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw.fif'
event_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw-eve.fif'
tmin, tmax = -0.1, 0.4
event_id = dict(aud_l=1, vis_l=3)

# Setup for reading the raw data
raw = io.read_raw_fif(raw_fname, preload=True)
raw.filter(.5, 25)
events = mne.read_events(event_fname)

# Read epochs
epochs = mne.Epochs(raw, events, event_id, tmin, tmax, proj=True, decim=4,
                    baseline=None, preload=True)
labels = epochs.events[:, -1]

# get MEG and EEG data
meg_epochs = epochs.copy().pick_types(meg=True, eeg=False)
meg_data = meg_epochs.get_data().reshape(len(labels), -1)
0.14/_downloads/plot_linear_model_patterns.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Decoding in sensor space using a LogisticRegression classifier
clf = LogisticRegression()
scaler = StandardScaler()

# create a linear model with LogisticRegression
model = LinearModel(clf)

# fit the classifier on MEG data
X = scaler.fit_transform(meg_data)
model.fit(X, labels)

# Extract and plot spatial filters and spatial patterns
for name, coef in (('patterns', model.patterns_), ('filters', model.filters_)):
    # We fitted the linear model onto Z-scored data. To make the filters
    # interpretable, we must reverse this normalization step
    coef = scaler.inverse_transform([coef])[0]

    # The data was vectorized to fit a single model across all time points and
    # all channels. We thus reshape it:
    coef = coef.reshape(len(meg_epochs.ch_names), -1)

    # Plot
    evoked = EvokedArray(coef, meg_epochs.info, tmin=epochs.tmin)
    evoked.plot_topomap(title='MEG %s' % name)
0.14/_downloads/plot_linear_model_patterns.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
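The cell above fits on all epochs at once because the goal is to inspect patterns and filters, not to score the classifier. If you also want a performance estimate, a cross-validated accuracy on the same MEG features is one option — a sketch, not part of the original example (on older scikit-learn versions the import lives in sklearn.cross_validation instead of sklearn.model_selection):

```python
from sklearn.model_selection import cross_val_score

# Keep scaling inside the pipeline so each fold is standardized on its own training split
cv_clf = make_pipeline(StandardScaler(), LogisticRegression())
scores = cross_val_score(cv_clf, meg_data, labels, cv=5)
print("Mean 5-fold CV accuracy: %0.3f" % scores.mean())
```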
Let's do the same on EEG data using a scikit-learn pipeline
X = epochs.pick_types(meg=False, eeg=True)
y = epochs.events[:, 2]

# Define a unique pipeline to sequentially:
clf = make_pipeline(
    Vectorizer(),                       # 1) vectorize across time and channels
    StandardScaler(),                   # 2) normalize features across trials
    LinearModel(LogisticRegression()))  # 3) fits a logistic regression
clf.fit(X, y)

# Extract and plot patterns and filters
for name in ('patterns_', 'filters_'):
    # The `inverse_transform` parameter will call this method on any estimator
    # contained in the pipeline, in reverse order.
    coef = get_coef(clf, name, inverse_transform=True)
    evoked = EvokedArray(coef, epochs.info, tmin=epochs.tmin)
    evoked.plot_topomap(title='EEG %s' % name[:-1])
0.14/_downloads/plot_linear_model_patterns.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
This cell can be used for all data sets except colon; colon is special because it has 3 types of events instead of just 2. Just change the first line to run a different data set.
#data = ds._pbc #data = ds._lung #data = ds._nwtco data = ds._flchain df = pd.read_csv(data['filename'][:-4] + "_org.csv", sep=None, engine='python') k = 4 # flchain has three guys at zero, remove them if 'flchain' in data['filename']: df = df[(df[data['timecol']] > 0)] # Need shape later n, d = df.shape # Random reordering df = df.reindex(np.random.permutation(df.index)) df.sort(data['eventcol'], inplace=True) assignments = np.array((n // k + 1) * list(range(0, k))) assignments = assignments[:n] print(assignments.shape) print(df.shape) # Create a new column that specifies set df['set'] = 1 # 0 is testing df.loc[assignments == 0, 'set'] = 'testing' # rest is training df.loc[assignments != 0, 'set'] = 'training' print("Training size:", np.sum(df['set'] == 'training')) print("Testing size:", np.sum(df['set'] == 'testing')) df = df.reindex(np.sort(df.index))
DataSetStratification.ipynb
spacecowboy/article-annriskgroups-source
gpl-3.0
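The block above hand-rolls the stratified 75/25 split by sorting on the event column and striping fold assignments across rows. For comparison only — not how the labels above were produced — scikit-learn's train_test_split can express the same idea in one call:

```python
from sklearn.model_selection import train_test_split

# Stratify on the event indicator so the censoring proportion matches across sets
train_df, test_df = train_test_split(df, test_size=1.0 / k, random_state=0,
                                     stratify=df[data['eventcol']])
print("Training size:", len(train_df), "Testing size:", len(test_df))
```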
Print the labeled data to a new file.
fname = data['filename']
print(fname)

df.to_csv(fname, na_rep='NA', index=False)
DataSetStratification.ipynb
spacecowboy/article-annriskgroups-source
gpl-3.0
Colon is kind of special: it has 3 event types, two of which must be combined before stratification is possible.
data = ds._colon df = pd.read_csv(data['filename'], sep=None, engine='python') n, d = df.shape k = 4 # Construct lists of events, censored events = [] censored = [] for i in df['id'].unique(): x = ((df['id'] == i) & (df['etype'] == 1)) if df[x]['status'].sum() < 1: censored.append(i) else: events.append(i) trainingids = [] testingids = [] for d in [events, censored]: ids = np.random.permutation(d) n = len(ids) k = 4 assignments = np.array((n // k + 1) * list(range(0, k))) assignments = assignments[:n] testingids.extend(ids[assignments == 0]) trainingids.extend(ids[assignments != 0]) df['set'] = 1 for i in trainingids: which = (df['id'] == i) df.loc[which, 'set'] = 'training' for i in testingids: which = (df['id'] == i) df.loc[which, 'set'] = 'testing' print("Training size:", np.sum(df['set'] == 'training')) print("Testing size:", np.sum(df['set'] == 'testing')) df
DataSetStratification.ipynb
spacecowboy/article-annriskgroups-source
gpl-3.0
Print data to file.
fname = data['filename'][:-8] + '.csv'
print(fname)

df.to_csv(fname, na_rep='NA', index=False)
DataSetStratification.ipynb
spacecowboy/article-annriskgroups-source
gpl-3.0
Model results Rule learning and rule application in the matching task Rule Learning > Rule Application
l1cope="3" l2cope="1" l3cope="2" sliced_img,wb_img,cluster_corr,tstat_img,html_cl,html_t = paths() render(html_cl,[wb_img,cluster_corr])
thresholded_results/RulesFPC/model_1_FPC/model_1_FPC.ipynb
dpaniukov/RulesFPC
mit
Rule Application > Rule Learning
l1cope="3" l2cope="1" l3cope="1" sliced_img,wb_img,cluster_corr,tstat_img,html_cl,html_t = paths() render(html_cl,[wb_img,cluster_corr])
thresholded_results/RulesFPC/model_1_FPC/model_1_FPC.ipynb
dpaniukov/RulesFPC
mit
Rule Learning > Baseline
l1cope="2" l2cope="1" l3cope="1" sliced_img,wb_img,cluster_corr,tstat_img,html_cl,html_t = paths() render(html_cl,[wb_img,cluster_corr])
thresholded_results/RulesFPC/model_1_FPC/model_1_FPC.ipynb
dpaniukov/RulesFPC
mit
Baseline > Rule Learning
l1cope="2" l2cope="1" l3cope="2" sliced_img,wb_img,cluster_corr,tstat_img,html_cl,html_t = paths() render(html_cl,[wb_img,cluster_corr])
thresholded_results/RulesFPC/model_1_FPC/model_1_FPC.ipynb
dpaniukov/RulesFPC
mit
Rule Application > Baseline
l1cope="1" l2cope="1" l3cope="1" sliced_img,wb_img,cluster_corr,tstat_img,html_cl,html_t = paths() render(html_cl,[wb_img,cluster_corr])
thresholded_results/RulesFPC/model_1_FPC/model_1_FPC.ipynb
dpaniukov/RulesFPC
mit
Baseline > Rule Application
l1cope="1" l2cope="1" l3cope="2" sliced_img,wb_img,cluster_corr,tstat_img,html_cl,html_t = paths() Image(sliced_img) render(html_cl,[wb_img,cluster_corr])
thresholded_results/RulesFPC/model_1_FPC/model_1_FPC.ipynb
dpaniukov/RulesFPC
mit
Rule learning and rule application in the classification task Rule Learning > Rule Application
l1cope="3" l2cope="2" l3cope="2" sliced_img,wb_img,cluster_corr,tstat_img,html_cl,html_t = paths() render(html_cl,[wb_img,cluster_corr])
thresholded_results/RulesFPC/model_1_FPC/model_1_FPC.ipynb
dpaniukov/RulesFPC
mit
Rule Learning > Baseline
l1cope="2" l2cope="2" l3cope="1" sliced_img,wb_img,cluster_corr,tstat_img,html_cl,html_t = paths() render(html_cl,[wb_img,cluster_corr])
thresholded_results/RulesFPC/model_1_FPC/model_1_FPC.ipynb
dpaniukov/RulesFPC
mit
Baseline > Rule Learning
l1cope="2" l2cope="2" l3cope="2" sliced_img,wb_img,cluster_corr,tstat_img,html_cl,html_t = paths() render(html_cl,[wb_img,cluster_corr])
thresholded_results/RulesFPC/model_1_FPC/model_1_FPC.ipynb
dpaniukov/RulesFPC
mit
Rule Application > Baseline
l1cope="1" l2cope="2" l3cope="1" sliced_img,wb_img,cluster_corr,tstat_img,html_cl,html_t = paths() render(html_cl,[wb_img,cluster_corr])
thresholded_results/RulesFPC/model_1_FPC/model_1_FPC.ipynb
dpaniukov/RulesFPC
mit
Baseline > Rule Application
l1cope="1" l2cope="2" l3cope="2" sliced_img,wb_img,cluster_corr,tstat_img,html_cl,html_t = paths() render(html_cl,[wb_img,cluster_corr])
thresholded_results/RulesFPC/model_1_FPC/model_1_FPC.ipynb
dpaniukov/RulesFPC
mit
Rule learning in the matching and classification tasks Matching > Classification
l1cope="2" l2cope="3" l3cope="1" sliced_img,wb_img,cluster_corr,tstat_img,html_cl,html_t = paths() render(html_cl,[wb_img,cluster_corr])
thresholded_results/RulesFPC/model_1_FPC/model_1_FPC.ipynb
dpaniukov/RulesFPC
mit
Classification > Matching
l1cope="2" l2cope="3" l3cope="2" sliced_img,wb_img,cluster_corr,tstat_img,html_cl,html_t = paths() render(html_cl,[wb_img,cluster_corr])
thresholded_results/RulesFPC/model_1_FPC/model_1_FPC.ipynb
dpaniukov/RulesFPC
mit
Rule application in the matching and classification tasks Matching > Classification
l1cope="1" l2cope="3" l3cope="1" sliced_img,wb_img,cluster_corr,tstat_img,html_cl,html_t = paths() render(html_cl,[wb_img,cluster_corr])
thresholded_results/RulesFPC/model_1_FPC/model_1_FPC.ipynb
dpaniukov/RulesFPC
mit
Classification > Matching
l1cope="1" l2cope="3" l3cope="2" sliced_img,wb_img,cluster_corr,tstat_img,html_cl,html_t = paths() render(html_cl,[wb_img,cluster_corr])
thresholded_results/RulesFPC/model_1_FPC/model_1_FPC.ipynb
dpaniukov/RulesFPC
mit
Define path to data: (It's a good idea to put it in a subdirectory of your notebooks folder, and then exclude that directory from git control by adding it to .gitignore.)
path = "data/dogscats/" #path = "data/dogscats/sample/"
deeplearning1/nbs/lesson1.ipynb
sainathadapa/fastai-courses
apache-2.0
We have created a file most imaginatively called 'utils.py' to store any little convenience functions we'll want to use. We will discuss these as we use them.
import utils; reload(utils)
from utils import plots
deeplearning1/nbs/lesson1.ipynb
sainathadapa/fastai-courses
apache-2.0
Use a pretrained VGG model with our Vgg16 class Our first step is simply to use a model that has been fully created for us, which can recognise a wide variety (1,000 categories) of images. We will use 'VGG', which won the 2014 Imagenet competition, and is a very simple model to create and understand. The VGG Imagenet team created both a larger, slower, slightly more accurate model (VGG 19) and a smaller, faster model (VGG 16). We will be using VGG 16 since the much slower performance of VGG19 is generally not worth the very minor improvement in accuracy. We have created a python class, Vgg16, which makes using the VGG 16 model very straightforward. The punchline: state of the art custom model in 7 lines of code Here's everything you need to do to get >97% accuracy on the Dogs vs Cats dataset - we won't analyze how it works behind the scenes yet, since at this stage we're just going to focus on the minimum necessary to actually do useful work.
# As large as you can, but no larger than 64 is recommended.
# If you have an older or cheaper GPU, you'll run out of memory, so will have to decrease this.
batch_size = 64

# Import our class, and instantiate
import vgg16; reload(vgg16)
from vgg16 import Vgg16

vgg = Vgg16()
# Grab a few images at a time for training and validation.
# NB: They must be in subdirectories named based on their category
batches = vgg.get_batches(path+'train', batch_size=batch_size)
val_batches = vgg.get_batches(path+'valid', batch_size=batch_size*2)
vgg.finetune(batches)
vgg.fit(batches, val_batches, nb_epoch=1)
deeplearning1/nbs/lesson1.ipynb
sainathadapa/fastai-courses
apache-2.0
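After fine-tuning you will usually want to persist the result so the epoch does not have to be re-run. Assuming the Vgg16 wrapper exposes its underlying Keras model as vgg.model (as the course code does), the standard Keras weight-saving methods apply — the results/ subdirectory and file name below are arbitrary choices:

```python
# Persist the fine-tuned weights (assumes a 'results/' subdirectory under `path` exists)
vgg.model.save_weights(path + 'results/ft1.h5')

# ...and reload them in a later session with:
# vgg.model.load_weights(path + 'results/ft1.h5')
```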
(BTW, when Keras refers to 'classes', it doesn't mean Python classes - but rather it refers to the categories of the labels, such as 'pug' or 'tabby'.) batches is just a regular Python iterator. Each iteration returns both the images themselves, as well as the labels.
imgs,labels = next(batches)
deeplearning1/nbs/lesson1.ipynb
sainathadapa/fastai-courses
apache-2.0
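To see what the iterator is actually producing, the plots helper imported from utils earlier can display a few images with their one-hot labels as titles (this assumes plots accepts a titles argument, as the course's utils.py does):

```python
# Show the first few images in the batch together with their one-hot labels
plots(imgs[:4], titles=labels[:4])
```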
That shows all of the steps involved in using the Vgg16 class to create an image recognition model using whatever labels you are interested in. For instance, this process could classify paintings by style, or leaves by type of disease, or satellite photos by type of crop, and so forth. Next up, we'll dig one level deeper to see what's going on in the Vgg16 class. Create a VGG model from scratch in Keras For the rest of this tutorial, we will not be using the Vgg16 class at all. Instead, we will recreate from scratch the functionality we just used. This is not necessary if all you want to do is use the existing model - but if you want to create your own models, you'll need to understand these details. It will also help you in the future when you debug any problems with your models, since you'll understand what's going on behind the scenes. Model setup We need to import all the modules we'll be using from numpy, scipy, and keras:
# json and numpy are used by later cells in this notebook; imported here in case the
# usual setup cell was omitted
import json
import numpy as np

from numpy.random import random, permutation
from scipy import misc, ndimage
from scipy.ndimage.interpolation import zoom

import keras
from keras import backend as K
from keras.utils.data_utils import get_file
from keras.models import Sequential, Model
from keras.layers.core import Flatten, Dense, Dropout, Lambda
from keras.layers import Input
from keras.layers.convolutional import Convolution2D, MaxPooling2D, ZeroPadding2D
from keras.optimizers import SGD, RMSprop
from keras.preprocessing import image
deeplearning1/nbs/lesson1.ipynb
sainathadapa/fastai-courses
apache-2.0
Let's import the mappings from VGG ids to imagenet category ids and descriptions, for display purposes later.
FILES_PATH = 'http://files.fast.ai/models/'; CLASS_FILE='imagenet_class_index.json'
# Keras' get_file() is a handy function that downloads files, and caches them for re-use later
fpath = get_file(CLASS_FILE, FILES_PATH+CLASS_FILE, cache_subdir='models')
with open(fpath) as f:
    class_dict = json.load(f)
# Convert dictionary with string indexes into an array
classes = [class_dict[str(i)][1] for i in range(len(class_dict))]
deeplearning1/nbs/lesson1.ipynb
sainathadapa/fastai-courses
apache-2.0
Here are a few examples of the categories we just imported:
classes[:5]
deeplearning1/nbs/lesson1.ipynb
sainathadapa/fastai-courses
apache-2.0
Model creation Creating the model involves creating the model architecture, and then loading the model weights into that architecture. We will start by defining the basic pieces of the VGG architecture. VGG has just one type of convolutional block, and one type of fully connected ('dense') block. Here's the convolutional block definition:
def ConvBlock(layers, model, filters):
    for i in range(layers):
        model.add(ZeroPadding2D((1,1)))
        model.add(Convolution2D(filters, 3, 3, activation='relu'))
    model.add(MaxPooling2D((2,2), strides=(2,2)))
deeplearning1/nbs/lesson1.ipynb
sainathadapa/fastai-courses
apache-2.0
...and here's the fully-connected definition.
def FCBlock(model):
    model.add(Dense(4096, activation='relu'))
    model.add(Dropout(0.5))
deeplearning1/nbs/lesson1.ipynb
sainathadapa/fastai-courses
apache-2.0
When the VGG model was trained in 2014, the creators subtracted the average of each of the three (R,G,B) channels first, so that the data for each channel had a mean of zero. Furthermore, their software expected the channels to be in B,G,R order, whereas Python by default uses R,G,B. We need to preprocess our data to make these two changes, so that it is compatible with the VGG model:
# Mean of each channel as provided by VGG researchers
vgg_mean = np.array([123.68, 116.779, 103.939]).reshape((3,1,1))

def vgg_preprocess(x):
    x = x - vgg_mean     # subtract mean
    return x[:, ::-1]    # reverse channel axis rgb->bgr
deeplearning1/nbs/lesson1.ipynb
sainathadapa/fastai-courses
apache-2.0
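A quick throwaway check — not part of the original notebook — makes it easy to convince yourself what vgg_preprocess does: the per-channel means are removed and the channel axis is flipped:

```python
# Dummy "batch": 1 image, 3 channels, 2x2 pixels, every value 128
dummy = np.ones((1, 3, 2, 2)) * 128.

out = vgg_preprocess(dummy)
print(out.shape)         # (1, 3, 2, 2) -- shape is unchanged
print(out[0, :, 0, 0])   # channels reversed, per-channel mean subtracted
```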
Now we're ready to define the VGG model architecture - look at how simple it is, now that we have the basic blocks defined!
def VGG_16():
    model = Sequential()
    model.add(Lambda(vgg_preprocess, input_shape=(3,224,224)))

    ConvBlock(2, model, 64)
    ConvBlock(2, model, 128)
    ConvBlock(3, model, 256)
    ConvBlock(3, model, 512)
    ConvBlock(3, model, 512)

    model.add(Flatten())
    FCBlock(model)
    FCBlock(model)
    model.add(Dense(1000, activation='softmax'))
    return model
deeplearning1/nbs/lesson1.ipynb
sainathadapa/fastai-courses
apache-2.0
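With the blocks in place you can instantiate the architecture and inspect the layer stack; loading the pretrained ImageNet weights would be the next step, following the same get_file() pattern used for the class-index file above:

```python
model = VGG_16()
model.summary()   # inspect the layer stack and parameter counts

# Next, the pretrained ImageNet weights would be fetched with Keras' get_file()
# and applied via model.load_weights(), just like the class-index download above.
```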