Columns: markdown (string, 0–37k chars) · code (string, 1–33.3k chars) · path (string, 8–215 chars) · repo_name (string, 6–77 chars) · license (15 classes)
Apply a rule that clones a node 'a'
pattern = NXGraph()
pattern.add_node("a")

rule = Rule.from_transform(pattern)
_, rhs_clone = rule.inject_clone_node("a")

rhs_instance, rollback_commit = g.rewrite(
    rule, {"a": rhs_instance["a"]}, message="Clone a")
plot_graph(g.graph)
examples/Tutorial_graph_audit.ipynb
Kappa-Dev/ReGraph
mit
Create a new branch 'test'
g.branch("test")
examples/Tutorial_graph_audit.ipynb
Kappa-Dev/ReGraph
mit
In this branch apply the rule that adds a new node 'd' and connects it with an edge to one of the cloned 'a' nodes
pattern = NXGraph()
pattern.add_node("a")

rule = Rule.from_transform(pattern)
rule.inject_add_node("d")
rule.inject_add_edge("a", "d")

g.rewrite(rule, {"a": rhs_instance[rhs_clone]}, message="Add d -> clone of a")
plot_graph(g.graph)
examples/Tutorial_graph_audit.ipynb
Kappa-Dev/ReGraph
mit
Switch back to 'master'
g.switch_branch("master")
examples/Tutorial_graph_audit.ipynb
Kappa-Dev/ReGraph
mit
Remove a node 'a'
pattern = NXGraph()
pattern.add_node("a")

rule = Rule.from_transform(pattern)
rule.inject_remove_node("a")

rhs_instance, _ = g.rewrite(rule, {"a": rhs_instance["a"]}, message="Remove a")
plot_graph(g.graph)
examples/Tutorial_graph_audit.ipynb
Kappa-Dev/ReGraph
mit
Merge the branch 'dev' into 'master'
g.merge_with("dev") plot_graph(g.graph)
examples/Tutorial_graph_audit.ipynb
Kappa-Dev/ReGraph
mit
Merge 'test' into 'master'
g.merge_with("test") plot_graph(g.graph)
examples/Tutorial_graph_audit.ipynb
Kappa-Dev/ReGraph
mit
We can inspect the version control object in more detail by looking at its attribute _revision_graph, whose nodes represent commits and whose edges represent the graph deltas between commits (essentially, the rewriting rules that constitute the commits). The nodes of the revision graph store the branch each commit belongs to and the user-specified commit message.
for n, attrs in g._revision_graph.nodes(data=True):
    print("Node ID: ", n)
    print("Attributes: ")
    print("\t", attrs)

# Pretty-print the history
g.print_history()
examples/Tutorial_graph_audit.ipynb
Kappa-Dev/ReGraph
mit
Now we can roll back to a previous commit (the commit where we first cloned the node 'a')
g.rollback(rollback_commit)

print("Branches: ", g.branches())
print("Current branch '{}'".format(g.current_branch()))
print("Updated revision graph:")
g.print_history()

print("Current graph object")
plot_graph(g.graph)
print_graph(g.graph)

g.switch_branch("branch")
g.rollback(branch_commit)
g.print_history()
print(g._heads)
plot_graph(g.graph)

g.switch_branch("master")
plot_graph(g.graph)

g.merge_with("branch")
plot_graph(g.graph)
examples/Tutorial_graph_audit.ipynb
Kappa-Dev/ReGraph
mit
I have generated IUPred results for the whole proteome, since we didn't have the scores.
def read_iupred_results(fileName):
    '''
    function read files containing the scores from iupred into the hash results
    INPUT: fileName
           results (hash)
    '''
    results = {}
    f = open(fileName, "r")
    while True:
        try:
            k, v = next(f), next(f)
            k = k.strip()[1:]
            results[k] = v.strip().split(',')[:-1]
        except StopIteration:
            break
    f.close()
    return results


import os, re

files = [i for i in os.listdir('../scripts/iupred/') if re.search(".results$", i)]

disorders = {}
for i in files:
    d = read_iupred_results('../scripts/iupred/' + i)
    disorders.update(d)
Disorder.ipynb
aerijman/Transcriptional-Activation-Domains
mit
These are old predictions that include sequences and scores from our deep learning model
# forgot to add the confidence of secondary_structure_predictions...
path = '../scripts/fastas/'

tmp, tnp = [], []
for f in [i for i in os.listdir(path) if i[-11:] == ".output.csv"]:
    predsName = f + ".predictions.npz"
    df = pd.read_csv(path + f, index_col=0)
    tmp.append(df[['sequence', 'secondStruct', 'disorder']])
    nf = np.load(path + predsName)
    tnp.append(nf[nf.files[0]])

predictions = np.hstack(tnp)
df = pd.concat(tmp)

# finally join all fields into a single data structure to facilitate further analysis
df['predictions'] = predictions
Disorder.ipynb
aerijman/Transcriptional-Activation-Domains
mit
Here I joined both datasets
df2 = pd.DataFrame([disorders]).T
idx = df2.index.intersection(df.index)
df2 = df2.loc[idx]

df = pd.concat([df.loc[idx], df2], axis=1)
df.columns = ['sequence', 'secondStruct', 'disorder', 'predictions', 'iupred']
del(df2)
Disorder.ipynb
aerijman/Transcriptional-Activation-Domains
mit
We have to define the set of transcription factors or nuclear proteins.
## SGD ## # collect data from SGD SGD = pd.read_csv('https://downloads.yeastgenome.org/curation/chromosomal_feature/SGD_features.tab', index_col=3, sep='\t', header=None) SGD = SGD[SGD[1]=='ORF'][4] ## TF ## # Steve's list of TFs # long list including potential NON-TF tf_full = pd.read_csv('../data/TFs.csv') tf_full = tf_full['Systematic name'].values # short list excluding potential False TF tf_short = pd.read_csv('../data/TFs_small.csv') tf_short = tf_short['Systematic name'].values ## Nuclear ## # Are tf enriched in the Nucleus? localization = pd.read_csv('../data/localization/proteomesummarylatestversion_localisation.csv', index_col=0) X = localization.iloc[:,1] nuclear = [i for i in set(X) if re.search("nucl",i)] X = pd.DataFrame([1 if i in nuclear else 0 for i in X], index=localization.index, columns=['loc']) nuclear = X[X['loc']==1].index total_idx = df.index.intersection(X.index) nuclear_idx = nuclear.intersection(total_idx) tf_full_idx = set(tf_full).intersection(total_idx) tf_short_idx = set(tf_short).intersection(total_idx) print('{} in tf_full\n{} in tf_short\n{} in total\n{} in nuclear\n'.format( len(tf_full_idx), len(tf_short_idx), len(total_idx), len(nuclear_idx)))
Disorder.ipynb
aerijman/Transcriptional-Activation-Domains
mit
Predict TADs from the proteome
# load NN model and weights from keras.models import model_from_json # open json model and weights with open("../models/deep_model.json", "r") as json_file: json_model = json_file.read() deep_model = model_from_json(json_model) deep_model.load_weights("../models/deep_model.h5") # set cutoff to predict TADs in the proteome cutoff=0.8 results = np.zeros(shape=(df.shape[0],4)) for n,prot in enumerate(df.predictions): results[n] = predict_motif_statistics(prot, cutoff) results = pd.DataFrame(results, index=df.index, columns = ['length', 'start_position', 'gral_mean', 'mean_longest_region'])
Disorder.ipynb
aerijman/Transcriptional-Activation-Domains
mit
The parsed disorder scores contain null values that have to be excluded.
fixed_disorder = [] for n,i in enumerate(df.iupred.values): i = [t for t in i if t!=""] fixed_disorder.append(np.array(i).astype(float)) df.iupred = fixed_disorder lenCutoff = 5 # Threshold for defining a potential TAD (more than 5 contiguous residues with score.0.8) flanking = 100 # How many points to consider bins_tad = 20 # Pure legacy now. It's to show more clearly the TAD in the figure # Build distribution of lengths to use building the null hypothesis TADs_idx = results[results.length>lenCutoff].index.dropna().intersection(df.index) lengths = np.array([len(i) for i in df.loc[TADs_idx].sequence.values]) lengths = np.hstack([lengths]*10) # allow for a bigger sampling to build null hypothesis np.random.shuffle(lengths) # build disorder and helicity vectors dis_vector = np.hstack(df.loc[TADs_idx].iupred.values) dis_vector = np.hstack([np.ones(flanking), dis_vector, np.ones(flanking)]) # fix "N" and "C" terminal errors result_dis_pre = np.zeros(shape=(len(lengths), flanking)) result_dis_tad = np.zeros(shape=(len(lengths), bins_tad)) result_dis_post = np.zeros(shape=(len(lengths), flanking)) ########################################## ### Build null hypothesis distributions ## ########################################## # random start sites np.random.seed(42) # set random seed for reproducibility rand_starts = np.random.uniform(low=101,high=len(dis_vector)-1000, size=len(lengths)).astype(int) # Null Hypothesis disOrder and helIcity for n,(i,j) in enumerate(zip(rand_starts, lengths)): result_dis_pre[n] = dis_vector[i-flanking:i] result_dis_tad[n] = np.median(dis_vector[i-flanking:i+j-flanking]) result_dis_post[n] = dis_vector[i+j-flanking:i+j] dis = np.hstack([result_dis_pre, result_dis_tad, result_dis_post]).T medians_dis_random = np.array([np.percentile(i,50) for i in dis]) _25_dis_random = np.array([np.percentile(i,25) for i in dis]) _75_dis_random = np.array([np.percentile(i,75) for i in dis]) lenCutoff = 5 # Threshold for defining a potential TAD (more than 5 contiguous residues with score.0.8) flanking = 100 # How many points to consider bins_tad = 20 # Pure legacy now. 
It's to show more clearly the TAD in the figure TADs = results[results.length>lenCutoff] # use only the nuclear TADs TADs = TADs.loc[TADs.index.intersection(tf_short_idx)] result_dis_pre = np.zeros(shape=(len(TADs), flanking)) result_dis_tad = np.zeros(shape=(len(TADs), bins_tad)) result_dis_post = np.zeros(shape=(len(TADs), flanking)) # Null Hypothesis disOrder and helIcity for n,(i,j,k) in enumerate(zip(TADs.start_position.values.astype(int), TADs.length.values.astype(int), TADs.index.dropna())): dis = np.array(df.iupred.loc[k]).astype(float) dis = np.hstack([np.ones(flanking), dis, np.ones(flanking)]) # fix "N" and "C" terminal errors hel = df.secondStruct.loc[k] i +=100 # part of fixing the "N" terminal result_dis_pre[n] = dis[i-flanking:i] result_dis_tad[n] = np.median(dis[i:i+j]) result_dis_post[n] = dis[i+j:i+j+flanking] dis = np.hstack([result_dis_pre, result_dis_tad, result_dis_post]).T medians_dis_tad = np.array([np.percentile(i,50) for i in dis]) _25_dis_tad = np.array([np.percentile(i,25) for i in dis]) _75_dis_tad = np.array([np.percentile(i,75) for i in dis]) def plotit(ax, medians, _25, _75, title): ax.fill_between( np.arange(len(_25)),_75,_25, alpha=0.3, color='gray') ax.plot(medians, label="50%", lw=3, c='k') ax.set_xticks([100,120]) ax.set_xticklabels(["", ""]) ax.text(40, -0.1, "pre-tad") ax.text(100, -0.1, "TAD") ax.text(150, -0.1, "post-TAD") ax.set_title(title) #ax.set_ylim(-0.01,1) plt.figure(figsize=(11,5)) ax = plt.subplot(1,2,1) plotit(ax,medians_dis_tad, _25_dis_tad, _75_dis_tad, 'tads') plt.ylim(0,1) ax = plt.subplot(1,2,2) plotit(ax,medians_dis_random, _25_dis_random, _75_dis_random, 'random') plt.ylim(0,1)
Disorder.ipynb
aerijman/Transcriptional-Activation-Domains
mit
By default, librosa will resample the signal to 22050 Hz. You can change this behavior by calling librosa.load(audio_path, sr=44100) to resample at 44.1 kHz, or librosa.load(audio_path, sr=None) to disable resampling. Mel spectrogram This first step will show how to compute a Mel spectrogram from an audio waveform.
# Let's make and display a mel-scaled power (energy-squared) spectrogram S = librosa.feature.melspectrogram(y, sr=sr, n_mels=128) # Convert to log scale (dB). We'll use the peak power as reference. log_S = librosa.logamplitude(S, ref_power=np.max) # Make a new figure plt.figure(figsize=(12,4)) # Display the spectrogram on a mel scale # sample rate and hop length parameters are used to render the time axis librosa.display.specshow(log_S, sr=sr, x_axis='time', y_axis='mel') # Put a descriptive title on the plot plt.title('mel power spectrogram') # draw a color bar plt.colorbar(format='%+02.0f dB') # Make the figure layout compact plt.tight_layout()
Images/music/samples/libROSA/LibROSA_Demo.ipynb
scienceguyrob/Docker
gpl-3.0
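To make the resampling options described above concrete, here is a minimal sketch; `audio_path` stands in for whatever file the notebook loaded earlier and is an assumption, not part of the original demo.

```python
import librosa

# Default behaviour: resample to 22050 Hz
y_default, sr_default = librosa.load(audio_path)

# Resample to 44.1 kHz instead
y_44k, sr_44k = librosa.load(audio_path, sr=44100)

# Keep the file's native sampling rate (no resampling)
y_native, sr_native = librosa.load(audio_path, sr=None)
```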
Harmonic-percussive source separation Before doing any signal analysis, let's pull apart the harmonic and percussive components of the audio. This is pretty easy to do with the effects module.
y_harmonic, y_percussive = librosa.effects.hpss(y) # What do the spectrograms look like? # Let's make and display a mel-scaled power (energy-squared) spectrogram S_harmonic = librosa.feature.melspectrogram(y_harmonic, sr=sr) S_percussive = librosa.feature.melspectrogram(y_percussive, sr=sr) # Convert to log scale (dB). We'll use the peak power as reference. log_Sh = librosa.logamplitude(S_harmonic, ref_power=np.max) log_Sp = librosa.logamplitude(S_percussive, ref_power=np.max) # Make a new figure plt.figure(figsize=(12,6)) plt.subplot(2,1,1) # Display the spectrogram on a mel scale librosa.display.specshow(log_Sh, sr=sr, y_axis='mel') # Put a descriptive title on the plot plt.title('mel power spectrogram (Harmonic)') # draw a color bar plt.colorbar(format='%+02.0f dB') plt.subplot(2,1,2) librosa.display.specshow(log_Sp, sr=sr, x_axis='time', y_axis='mel') # Put a descriptive title on the plot plt.title('mel power spectrogram (Percussive)') # draw a color bar plt.colorbar(format='%+02.0f dB') # Make the figure layout compact plt.tight_layout()
Images/music/samples/libROSA/LibROSA_Demo.ipynb
scienceguyrob/Docker
gpl-3.0
Chromagram Next, we'll extract Chroma features to represent pitch class information.
# We'll use a CQT-based chromagram here. An STFT-based implementation also exists in chroma_stft()
# We'll use the harmonic component to avoid pollution from transients
C = librosa.feature.chroma_cqt(y=y_harmonic, sr=sr)

# Make a new figure
plt.figure(figsize=(12,4))

# Display the chromagram: the energy in each chromatic pitch class as a function of time
# To make sure that the colors span the full range of chroma values, set vmin and vmax
librosa.display.specshow(C, sr=sr, x_axis='time', y_axis='chroma', vmin=0, vmax=1)

plt.title('Chromagram')
plt.colorbar()
plt.tight_layout()
Images/music/samples/libROSA/LibROSA_Demo.ipynb
scienceguyrob/Docker
gpl-3.0
MFCC Mel-frequency cepstral coefficients are commonly used to represent texture or timbre of sound.
# Next, we'll extract the top 13 Mel-frequency cepstral coefficients (MFCCs) mfcc = librosa.feature.mfcc(S=log_S, n_mfcc=13) # Let's pad on the first and second deltas while we're at it delta_mfcc = librosa.feature.delta(mfcc) delta2_mfcc = librosa.feature.delta(mfcc, order=2) # How do they look? We'll show each in its own subplot plt.figure(figsize=(12, 6)) plt.subplot(3,1,1) librosa.display.specshow(mfcc) plt.ylabel('MFCC') plt.colorbar() plt.subplot(3,1,2) librosa.display.specshow(delta_mfcc) plt.ylabel('MFCC-$\Delta$') plt.colorbar() plt.subplot(3,1,3) librosa.display.specshow(delta2_mfcc, sr=sr, x_axis='time') plt.ylabel('MFCC-$\Delta^2$') plt.colorbar() plt.tight_layout() # For future use, we'll stack these together into one matrix M = np.vstack([mfcc, delta_mfcc, delta2_mfcc])
Images/music/samples/libROSA/LibROSA_Demo.ipynb
scienceguyrob/Docker
gpl-3.0
Beat tracking The beat tracker returns an estimate of the tempo (in beats per minute) and frame indices of beat events. The input can be either an audio time series (as we do below), or an onset strength envelope as calculated by librosa.onset.onset_strength().
# Now, let's run the beat tracker. # We'll use the percussive component for this part plt.figure(figsize=(12, 6)) tempo, beats = librosa.beat.beat_track(y=y_percussive, sr=sr) # Let's re-draw the spectrogram, but this time, overlay the detected beats plt.figure(figsize=(12,4)) librosa.display.specshow(log_S, sr=sr, x_axis='time', y_axis='mel') # Let's draw transparent lines over the beat frames plt.vlines(librosa.frames_to_time(beats), 1, 0.5 * sr, colors='w', linestyles='-', linewidth=2, alpha=0.5) plt.axis('tight') plt.colorbar(format='%+02.0f dB') plt.tight_layout()
Images/music/samples/libROSA/LibROSA_Demo.ipynb
scienceguyrob/Docker
gpl-3.0
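As noted above, the beat tracker can also be driven by a precomputed onset strength envelope instead of raw audio. A minimal sketch, reusing y_percussive and sr from the earlier cells:

```python
# Compute an onset-strength envelope and feed it to the beat tracker
onset_env = librosa.onset.onset_strength(y=y_percussive, sr=sr)
tempo_env, beats_env = librosa.beat.beat_track(onset_envelope=onset_env, sr=sr)
print('Tempo from onset envelope: %.2f BPM' % tempo_env)
```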
By default, the beat tracker will trim away any leading or trailing beats that don't appear strong enough. To disable this behavior, call beat_track() with trim=False.
print('Estimated tempo: %.2f BPM' % tempo)
print('First 5 beat frames: ', beats[:5])

# Frame numbers are great and all, but when do those beats occur?
print('First 5 beat times: ', librosa.frames_to_time(beats[:5], sr=sr))

# We could also get frame numbers from times by librosa.time_to_frames()
Images/music/samples/libROSA/LibROSA_Demo.ipynb
scienceguyrob/Docker
gpl-3.0
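A short sketch of the trim=False variant mentioned above, using the same percussive signal as before:

```python
# Keep weak leading/trailing beats instead of trimming them away
tempo_notrim, beats_notrim = librosa.beat.beat_track(y=y_percussive, sr=sr, trim=False)
print('Beats with trimming: %d, without trimming: %d' % (len(beats), len(beats_notrim)))
```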
Beat-synchronous feature aggregation Once we've located the beat events, we can use them to summarize the feature content of each beat. This can be useful for reducing data dimensionality, and removing transient noise from the features.
# feature.sync will summarize each beat event by the mean feature vector within that beat M_sync = librosa.util.sync(M, beats) plt.figure(figsize=(12,6)) # Let's plot the original and beat-synchronous features against each other plt.subplot(2,1,1) librosa.display.specshow(M) plt.title('MFCC-$\Delta$-$\Delta^2$') # We can also use pyplot *ticks directly # Let's mark off the raw MFCC and the delta features plt.yticks(np.arange(0, M.shape[0], 13), ['MFCC', '$\Delta$', '$\Delta^2$']) plt.colorbar() plt.subplot(2,1,2) # librosa can generate axis ticks from arbitrary timestamps and beat events also librosa.display.specshow(M_sync, x_axis='time', x_coords=librosa.frames_to_time(librosa.util.fix_frames(beats))) plt.yticks(np.arange(0, M_sync.shape[0], 13), ['MFCC', '$\Delta$', '$\Delta^2$']) plt.title('Beat-synchronous MFCC-$\Delta$-$\Delta^2$') plt.colorbar() plt.tight_layout() # Beat synchronization is flexible. # Instead of computing the mean delta-MFCC within each beat, let's do beat-synchronous chroma # We can replace the mean with any statistical aggregation function, such as min, max, or median. C_sync = librosa.util.sync(C, beats, aggregate=np.median) plt.figure(figsize=(12,6)) plt.subplot(2, 1, 1) librosa.display.specshow(C, sr=sr, y_axis='chroma', vmin=0.0, vmax=1.0, x_axis='time') plt.title('Chroma') plt.colorbar() plt.subplot(2, 1, 2) librosa.display.specshow(C_sync, y_axis='chroma', vmin=0.0, vmax=1.0, x_axis='time', x_coords=librosa.frames_to_time(librosa.util.fix_frames(beats))) plt.title('Beat-synchronous Chroma (median aggregation)') plt.colorbar() plt.tight_layout()
Images/music/samples/libROSA/LibROSA_Demo.ipynb
scienceguyrob/Docker
gpl-3.0
<h1><center>Initialize Directories</center></h1> The following paths need to be changed for your filesystem. [HOME_PATH] is where the raw data, reduced data, and grizli outputs will be stored. [PATH_TO_CATS] is where the catalogs are stored and must include the following: ### reference mosaic image (e.g., goodss-F105W-astrodrizzle-v4.3_drz_sci.fits) ### segmentation map (e.g., Goods_S_plus_seg.fits) ### source catalog (e.g., goodss-F105W-astrodrizzle-v4.3_drz_sub_plus.cat) ### radec_catalog (e.g., goodsS_radec.cat) ### 3DHST Eazy Catalogs (e.g., goodss_3dhst.v4.1.cats/*) the [PATH_TO_CATS] files are available on the team archive: https://archive.stsci.edu/pub/clear_team/INCOMING/for_hackday/
field = 'GS1'
ref_filter = 'F105W'

HOME_PATH = '/Users/rsimons/Desktop/clear/for_hackday/%s'%field
PATH_TO_CATS = '/Users/rsimons/Desktop/clear/Catalogs'

# Create [HOME_PATH] and [HOME_PATH]/query_results directories if they do not already exist
if not os.path.isdir(HOME_PATH): os.system('mkdir %s'%HOME_PATH)
if not os.path.isdir(HOME_PATH + '/query_results'): os.system('mkdir %s/query_results'%HOME_PATH)

# Move to the [HOME_PATH] directory
os.chdir(HOME_PATH)
notebooks/grizli/grizli_retrieve_and_prep.ipynb
ivastar/clear
mit
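As an aside, the directory creation above shells out to mkdir; an equivalent pure-Python sketch (same HOME_PATH, purely a stylistic alternative) could be:

```python
import os

# makedirs creates HOME_PATH and the nested query_results directory in one call;
# exist_ok avoids an error if they already exist.
os.makedirs(os.path.join(HOME_PATH, 'query_results'), exist_ok=True)
os.chdir(HOME_PATH)
```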
<h1><center>Query MAST</center></h1> Run an initial query for all raw G102 data in the MAST archive from the proposal ID 14227 with a target name that includes the phrase 'GS1' (i.e., GS1 pointing of CLEAR).
# proposal_id = [14227] is CLEAR
parent = query.run_query(box = None, proposal_id = [14227],
                         instruments=['WFC3/IR', 'ACS/WFC'],
                         filters = ['G102'],
                         target_name = 'GS1')
notebooks/grizli/grizli_retrieve_and_prep.ipynb
ivastar/clear
mit
Next, find all G102 and G141 observations that overlap with the pointings found in the initial query.
# Find all G102 and G141 observations overlapping the parent query in the archive tabs = overlaps.find_overlaps(parent, buffer_arcmin=0.01, filters=['G102', 'G141'], instruments=['WFC3/IR','WFC3/UVIS','ACS/WFC'], close=False) footprint_fits_file = glob('*footprint.fits')[0] jtargname = footprint_fits_file.strip('_footprint.fits') # A list of the target names fp_fits = fits.open(footprint_fits_file) overlapping_target_names = set(fp_fits[1].data['target']) # Move the footprint figure files to $HOME_PATH/query_results/ so that they are not overwritten os.system('cp %s/%s_footprint.fits %s/query_results/%s_footprint_%s.fits'%(HOME_PATH, jtargname, HOME_PATH, jtargname, 'all_G102_G141')) os.system('cp %s/%s_footprint.npy %s/query_results/%s_footprint_%s.npy'%(HOME_PATH, jtargname, HOME_PATH, jtargname, 'all_G102_G141')) os.system('cp %s/%s_footprint.pdf %s/query_results/%s_footprint_%s.pdf'%(HOME_PATH, jtargname, HOME_PATH, jtargname, 'all_G102_G141')) os.system('cp %s/%s_info.dat %s/query_results/%s_info_%s.dat'%(HOME_PATH, jtargname, HOME_PATH, jtargname, 'all_G102_G141')) # Table summary of query tabs[0]
notebooks/grizli/grizli_retrieve_and_prep.ipynb
ivastar/clear
mit
<h1><center>Retrieve raw data from MAST</center></h1> We now have a list of G102 and G141 observations in the MAST archive that overlap with the GS1 pointing of CLEAR. For each, retrieve all associated RAW grism G102/G141 and direct imaging F098M/F105W/F125W/F140W data from MAST. **For GS1, the retrieval step takes about 30 minutes to run and requires 1.9 GB of space.**
# Loop targ_name by targ_name for t, targ_name in enumerate(overlapping_target_names): if use_mquery: extra = {'target_name':targ_name} else: extra = query.DEFAULT_EXTRA.copy() extra += ["TARGET.TARGET_NAME LIKE '%s'"%targ_name] # search the MAST archive again, this time looking for # all grism and imaging observations with the given target name tabs = overlaps.find_overlaps(parent, buffer_arcmin=0.01, filters=['G102', 'G141', 'F098M', 'F105W', 'F125W', 'F140W'], instruments=['WFC3/IR','WFC3/UVIS','ACS/WFC'], extra=extra, close=False) if False: # retrieve raw data from MAST s3_status = os.system('aws s3 ls s3://stpubdata --request-payer requester') auto_script.fetch_files(field_root=jtargname, HOME_PATH=HOME_PATH, remove_bad=True, reprocess_parallel=True, s3_sync=(s3_status == 0)) # Move the figure files to $HOME_PATH/query_results/ so that they are not overwritten os.system('mv %s/%s_footprint.fits %s/query_results/%s_footprint_%s.fits'%(HOME_PATH, jtargname, HOME_PATH, jtargname, targ_name)) os.system('mv %s/%s_footprint.npy %s/query_results/%s_footprint_%s.npy'%(HOME_PATH, jtargname, HOME_PATH, jtargname, targ_name)) os.system('mv %s/%s_footprint.pdf %s/query_results/%s_footprint_%s.pdf'%(HOME_PATH, jtargname, HOME_PATH, jtargname, targ_name)) os.system('mv %s/%s_info.dat %s/query_results/%s_info_%s.dat'%(HOME_PATH, jtargname, HOME_PATH, jtargname, targ_name)) os.chdir(HOME_PATH)
notebooks/grizli/grizli_retrieve_and_prep.ipynb
ivastar/clear
mit
The following directories are created from auto_script.fetch_files: [HOME_PATH]/j0333m2742 [HOME_PATH]/j0333m2742/RAW [HOME_PATH]/j0333m2742/Prep [HOME_PATH]/j0333m2742/Extractions [HOME_PATH]/j0333m2742/Persistence RAW/ is where the downloaded raw and pre-processed data are stored. Prep/ is the general working directory for processing and analyses.
PATH_TO_RAW = glob(HOME_PATH + '/*/RAW')[0]
PATH_TO_PREP = glob(HOME_PATH + '/*/PREP')[0]

# Move to the Prep directory
os.chdir(PATH_TO_PREP)
notebooks/grizli/grizli_retrieve_and_prep.ipynb
ivastar/clear
mit
Extract exposure information from downloaded flt files
# Find all pre-processed flt files in the RAW directory
files = glob('%s/*flt.fits'%PATH_TO_RAW)

# Generate a table from the headers of the flt fits files
info = grizli.utils.get_flt_info(files)
notebooks/grizli/grizli_retrieve_and_prep.ipynb
ivastar/clear
mit
The info table includes relevant exposure details: e.g., filter, instrument, target name, PA, RA, Dec. Print the first three rows of the table.
info[0:3]
notebooks/grizli/grizli_retrieve_and_prep.ipynb
ivastar/clear
mit
Next, we use grizli to parse the headers of the downloaded flt files in RAW/ and sort them into "visits". Each visit represents a specific pointing + orient + filter and contains the list of its associated exposure files.
# Parse the table and group exposures into associated "visits"
visits, filters = grizli.utils.parse_flt_files(info=info, uniquename=True)

# an F140W imaging visit
print ('\n\n visits[0]\n\t product: ', visits[0]['product'], '\n\t files: ', visits[0]['files'])

# a g141 grism visit
print ('\n\n visits[1]\n\t product: ', visits[1]['product'], '\n\t files: ', visits[1]['files'])
notebooks/grizli/grizli_retrieve_and_prep.ipynb
ivastar/clear
mit
<h1><center>Pre-process raw data</center></h1> We are now ready to pre-process the raw data we downloaded from MAST. process_direct_grism_visit performs all of the necessary pre-processing: copying the flt files from RAW/ to Prep/, astrometric registration/correction, grism sky background subtraction and flat-fielding, and extraction of visit-level catalogs and segmentation images from the direct imaging. The final products are aligned, background-subtracted FLTs and drizzled mosaics of the direct and grism images.
if 'N' in field.upper(): radec_catalog = PATH_TO_CATS + '/goodsN_radec.cat' if 'S' in field.upper(): radec_catalog = PATH_TO_CATS + '/goodsS_radec.cat' product_names = np.array([visit['product'] for visit in visits]) filter_names = np.array([visit['product'].split('-')[-1] for visit in visits]) basenames = np.array([visit['product'].split('.')[0]+'.0' for visit in visits]) # First process the G102/F105W visits, then G141/F140W for ref_grism, ref_filter in [('G102', 'F105W'), ('G141', 'F140W')]: print ('Processing %s + %s visits'%(ref_grism, ref_filter)) for v, visit in enumerate(visits): product = product_names[v] basename = basenames[v] filt1 = filter_names[v] field_in_contest = basename.split('-')[0] if (ref_filter.lower() == filt1.lower()): #Found a direct image, now search for grism counterpart grism_index= np.where((basenames == basename) & (filter_names == ref_grism.lower()))[0][0] if True: # run the pre-process script status = process_direct_grism_visit(direct = visit, grism = visits[grism_index], radec = radec_catalog, align_mag_limits = [14, 23])
notebooks/grizli/grizli_retrieve_and_prep.ipynb
ivastar/clear
mit
<h1><center>Examining outputs from the pre-processing steps</center></h1> Astrometric Registration
os.chdir(PATH_TO_PREP)

!cat gs1-cxt-09-227.0-f105w_wcs.log

Image(filename = PATH_TO_PREP + '/gs1-cxt-09-227.0-f105w_wcs.png', width = 600, height = 600)
notebooks/grizli/grizli_retrieve_and_prep.ipynb
ivastar/clear
mit
Grism sky subtraction
os.chdir(PATH_TO_PREP)

Image(filename = PATH_TO_PREP + '/gs1-cxt-09-227.0-g102_column.png', width = 600, height = 600)
notebooks/grizli/grizli_retrieve_and_prep.ipynb
ivastar/clear
mit
Data
lumo_fr1_typical = lumo[idx2_same] * 10**-22 lumo_fr2_typical = lumo[idx3_same] * 10**-22 mag_fr1_typical = mag_abs[idx2_same] mag_fr2_typical = mag_abs[idx3_same] lumo_fr1_like = lumo[idx_fr1] * 10**-22 lumo_fr2_like = lumo[idx_fr2] * 10**-22 mag_fr1_like = mag_abs[idx_fr1] mag_fr2_like = mag_abs[idx_fr2] mag_fr1 = np.hstack([mag_abs[idx_fr1], mag_abs[idx2_same]]) mag_fr2 = np.hstack([mag_abs[idx_fr2], mag_abs[idx3_same]]) lumo_fr1 = np.hstack([lumo[idx_fr1], lumo[idx2_same]]) * 10 ** -22 lumo_fr2 = np.hstack([lumo[idx_fr2], lumo[idx3_same]]) * 10 ** -22
code-sdss/SDSS_Analysis-KS-Chi2-tests.ipynb
myinxd/agn-ae
mit
Correlation analysis Pearson: http://blog.csdn.net/hjh00/article/details/48230399 p-value: https://stackoverflow.com/questions/22306341/python-sklearn-how-to-calculate-p-values Kolmogorov-Smirnov test: https://stackoverflow.com/questions/10884668/two-sample-kolmogorov-smirnov-test-in-python-scipy Scipy.stats.kstest: https://docs.scipy.org/doc/scipy-0.7.x/reference/generated/scipy.stats.kstest.html
import scipy.stats.stats as stats from sklearn.feature_selection import chi2 # ks test # https://docs.scipy.org/doc/scipy-0.7.x/reference/generated/scipy.stats.ks_2samp.html#scipy.stats.ks_2sam lumo_ks_D_t,lumo_ks_p_t = stats.ks_2samp(lumo_fr1_typical,lumo_fr2_typical) print("KS statistic of lumo: typical %.5f" % lumo_ks_D_t) print("P-value of lumo: typical %.5e" % lumo_ks_p_t) mag_ks_D_t,mag_ks_p_t = stats.ks_2samp(mag_fr1_typical,mag_fr2_typical) print("KS statistic of Mr: typical %.5f" % mag_ks_D_t) print("P-value of Mr: typical %.5e" % mag_ks_p_t) # FR like lumo_ks_D_l,lumo_ks_p_l = stats.ks_2samp(lumo_fr1_like,lumo_fr2_like) print("KS statistic of lumo: like %.5f" % lumo_ks_D_l) print("P-value of lumo: like %.5e" % lumo_ks_p_l) mag_ks_D_l,mag_ks_p_l = stats.ks_2samp(mag_fr1_like,mag_fr2_like) print("KS statistic of Mr: like %.5f" % mag_ks_D_l) print("P-value of Mr: like %.5e" % mag_ks_p_l) # FR lumo_ks_D,lumo_ks_p = stats.ks_2samp(lumo_fr1,lumo_fr2) print("KS statistic of lumo: %.5f" % lumo_ks_D) print("P-value of lumo: %.5e" % lumo_ks_p) mag_ks_D,mag_ks_p = stats.ks_2samp(mag_fr1,mag_fr2) print("KS statistic of Mr: %.5f" % mag_ks_D) print("P-value of Mr: %.5e" % mag_ks_p)
code-sdss/SDSS_Analysis-KS-Chi2-tests.ipynb
myinxd/agn-ae
mit
The p-value is very small and the KS statistic is relatively large, so FRI/FRII show some degree of separability; that is, the null hypothesis that the FRI/FRII radio luminosity and optical magnitude follow the same distribution is rejected. However, the D value for the magnitude is comparatively small, indicating that the optical data are less separable than the luminosity. Chi-squared
x_lumo = np.hstack((lumo_fr1,lumo_fr2)) x_lumo.shape x_lumo = np.log10(np.hstack((lumo_fr1,lumo_fr2))) x_mag = np.hstack((mag_fr1,mag_fr2)) x_lumo_norm = (x_lumo - x_lumo.min()) / (x_lumo.max() - x_lumo.min()) x_mag_norm = (x_mag - x_mag.min()) / (x_mag.max() - x_mag.min()) x = np.vstack([x_lumo_norm,x_mag_norm]) x = x.transpose() y = np.zeros(len(mag_abs)) y[idx2_same] = 1 y[idx_fr1] = 1 y[idx3_same] = 2 y[idx_fr2] = 2 y = y[np.where(y > 0)] scores, pvalues = chi2(x, y) pvalues from scipy.stats import chisquare chisquare(x_lumo_norm, y) np.random.seed(12222222) x = np.random.normal(0,1,size=(20000,)) y = np.random.normal(0,1,size=(20000,)) stats.ks_2samp(x,y)
code-sdss/SDSS_Analysis-KS-Chi2-tests.ipynb
myinxd/agn-ae
mit
$\epsilon$-greedy This strategy involves picking a (small) $\epsilon$ and then at any stage after every arm has been played at least once, explore with a probability of $\epsilon$, and exploit otherwise. For a suitable choice of $\epsilon_t$, $R_T = O(k \log T)$, which means that $\frac{R_T}{T} = O\left( \frac{\log T}{T} \right)$, which goes to 0 as T goes to $\infty$.
class EpsGreedy:
    def __init__(self, n_arms, eps=0):
        self.eps = eps
        self.n_arms = n_arms
        self.payoffs = np.zeros(n_arms)
        self.n_plays = np.zeros(n_arms)

    def play(self):
        # Note that the theory tells us to pick epsilon as O(1/t), not constant (which we use here).
        idx = np.argmin(self.n_plays)
        if self.n_plays[idx] == 0:
            return idx

        if np.random.rand() <= self.eps:
            return np.random.randint(self.n_arms)
        else:
            return np.argmax(self.payoffs / self.n_plays)

    def feedback(self, arm, reward):
        self.payoffs[arm] += reward
        self.n_plays[arm] += 1
bonus/tutorial-bandits.ipynb
AndreiBarsan/dm-notes
unlicense
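The comment in the cell above notes that the theory suggests a decaying epsilon of order 1/t rather than a constant. A hedged sketch of that variant, built on the EpsGreedy class defined above (the constant c is an assumption, not from the original notebook):

```python
class EpsGreedyDecay(EpsGreedy):
    """Epsilon-greedy with epsilon_t = min(1, c / t), i.e. O(1/t) exploration."""

    def __init__(self, n_arms, c=5.0):
        super().__init__(n_arms, eps=1.0)
        self.c = c
        self.t = 0

    def play(self):
        self.t += 1
        self.eps = min(1.0, self.c / self.t)  # decay the exploration probability over time
        return super().play()
```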
UCB1 This algorithm keeps track of the upper confidence bound for every arm, and always picks the arm with the best upper confidence bound. At any point in time $t$, we know each arm's draw count $n_i^{(t)}$, as well as its average payoff $\hat{\mu}_i^{(t)}$. Based on this, we can compute every arm's upper confidence bound (or UCB): \begin{equation} \operatorname{UCB}(i) = \hat{\mu}_i + \sqrt{\frac{2\ln t}{n_i}} \end{equation} Also no-regret, just like $\epsilon$-greedy. The math is just a bit fluffier (see slide dm-11:20).
class UCB: def __init__(self, n_arms, tau): self.n_arms = n_arms self.means = np.zeros(n_arms) # Note that the UCB1 algorithm has tau=1. self.n_plays = np.zeros(n_arms) self.tau = tau self.t = 0 def play(self, plot=True): # If plot is true, it will plot the means + bounds every 100 iterations. self.t += 1 idx = np.argmin(self.n_plays) if self.n_plays[idx] == 0: return idx ub = self.tau * np.sqrt(2 * np.log(self.t) / self.n_plays) ucb = self.means + ub if plot and self.t % 100 == 0: plt.errorbar(list(range(self.n_arms)), self.means, yerr=ub) plt.show() print('chose arm', np.argmax(ucb)) return np.argmax(ucb) def feedback(self, arm, reward): self.n_plays[arm] += 1 self.means[arm] += 1 / (self.n_plays[arm]) * (reward - self.means[arm]) @interact(n_arms=(10, 100, 1), n_rounds=(100, 1000, 10), eps=(0, 1, .01) , tau=(0, 1, .01)) def run(n_arms, n_rounds, eps, tau): np.random.seed(123) # Initialize the arm payoffs. mu = np.random.randn(n_arms) # Some other strategies for sampling. # mu = np.random.standard_cauchy(n_arms) # mu = np.random.gamma(shape=.1, size=(n_arms, 1)) mu = np.abs(mu) mu /= np.max(mu) plt.bar(list(range(n_arms)), mu) plt.xlabel('arms') plt.ylabel('rewards') plt.show() bandits = { 'eps-{0}'.format(eps) : EpsGreedy(n_arms, eps=eps), 'ucb-{0}'.format(tau) : UCB(n_arms, tau=tau) } play(bandits, n_rounds, mu) # Hint: You can also plot the upper bound from UCB1 and see how tight it is.
bonus/tutorial-bandits.ipynb
AndreiBarsan/dm-notes
unlicense
Matrices The SymPy Matrix object helps us with small problems in linear algebra.
rot = Matrix([[r*cos(theta), -r*sin(theta)],
              [r*sin(theta),  r*cos(theta)]])
rot
tutorial_exercises/04-Matrices.ipynb
leosartaj/scipy-2016-tutorial
bsd-3-clause
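The cell above assumes that the symbols r and theta (and later x and y) were defined earlier in the notebook. A minimal self-contained setup might look like this:

```python
from sympy import Matrix, cos, sin, symbols, init_printing

init_printing()  # nicer rendered output in the notebook
r, theta, x, y = symbols('r theta x y')

rot = Matrix([[r*cos(theta), -r*sin(theta)],
              [r*sin(theta),  r*cos(theta)]])
```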
Standard methods
rot.det()
rot.inv()
rot.singular_values()
tutorial_exercises/04-Matrices.ipynb
leosartaj/scipy-2016-tutorial
bsd-3-clause
Exercise Find the inverse of the following Matrix: $$ \left[\begin{matrix}1 & x\\ y & 1\end{matrix}\right] $$
# Create a matrix and use the `inv` method to find the inverse
tutorial_exercises/04-Matrices.ipynb
leosartaj/scipy-2016-tutorial
bsd-3-clause
Operators The standard SymPy operators work on matrices.
rot * 2
rot * rot

v = Matrix([[x], [y]])
v

rot * v
tutorial_exercises/04-Matrices.ipynb
leosartaj/scipy-2016-tutorial
bsd-3-clause
Exercise In the last exercise you found the inverse of the following matrix
M = Matrix([[1, x], [y, 1]]) M M.inv()
tutorial_exercises/04-Matrices.ipynb
leosartaj/scipy-2016-tutorial
bsd-3-clause
Now verify that this is the true inverse by multiplying the matrix times its inverse. Do you get the identity matrix back?
# Multiply `M` by its inverse. Do you get back the identity matrix?
tutorial_exercises/04-Matrices.ipynb
leosartaj/scipy-2016-tutorial
bsd-3-clause
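One possible way to do this check (assuming M and its inverse from the previous cells): the raw product contains unsimplified fractions, so each entry needs to be simplified before it displays as the identity.

```python
from sympy import simplify, eye

product = M * M.inv()
product = product.applyfunc(simplify)  # simplify each entry of the product
print(product == eye(2))               # True: we recover the 2x2 identity matrix
```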
Exercise What are the eigenvectors and eigenvalues of M?
# Find the methods to compute eigenvectors and eigenvalues. Use these methods on `M`
tutorial_exercises/04-Matrices.ipynb
leosartaj/scipy-2016-tutorial
bsd-3-clause
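A possible solution sketch for this exercise, using the eigenvalue/eigenvector methods on the matrix M defined earlier:

```python
# Eigenvalues as a {value: multiplicity} dict
print(M.eigenvals())

# Eigenvectors as (eigenvalue, multiplicity, [basis vectors]) tuples
print(M.eigenvects())
```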
NumPy-like Item access
rot[0, 0]
rot[:, 0]
rot[1, :]
tutorial_exercises/04-Matrices.ipynb
leosartaj/scipy-2016-tutorial
bsd-3-clause
Mutation We can change elements in the matrix.
rot[0, 0] += 1
rot

simplify(rot.det())
rot.singular_values()
tutorial_exercises/04-Matrices.ipynb
leosartaj/scipy-2016-tutorial
bsd-3-clause
Exercise Play around with your matrix M, manipulating elements in a NumPy-like way. Then try the various methods that we've talked about (or others). See what sort of answers you get.
# Play with matrices
tutorial_exercises/04-Matrices.ipynb
leosartaj/scipy-2016-tutorial
bsd-3-clause
End to End Workflow with ML Pipeline Generator <table align="left"> <td> <a href="https://colab.sandbox.google.com/github/GoogleCloudPlatform/ml-pipeline-generator-python/blob/master/examples/getting_started_notebook.ipynb"> <img src="https://cloud.google.com/ml-engine/images/colab-logo-32px.png" alt="Colab logo"> Run in Colab </a> </td> <td> <a href="https://github.com/GoogleCloudPlatform/ml-pipeline-generator-python/blob/master/examples/getting_started_notebook.ipynb"> <img src="https://cloud.google.com/ml-engine/images/github-logo-32px.png" alt="GitHub logo"> View on GitHub </a> </td> </table> Overview ML Pipeline Generator simplifies model building, training and deployment by generating the required training and deployment modules for your model. Using this tool, users with locally running scripts and notebooks can get started with AI Platform and Kubeflow Pipelines in a few steps, and will have the boilerplate code needed to customize their deployments and pipelines further. [Insert Pic] This demo shows you how to train and deploy Machine Learning models on a sample dataset. The demo is divided into two parts: Preparing an SVM classifier for training on Cloud AI Platform Orchestrating the training of a TensorFlow model on Kubeflow Pipelines Dataset This tutorial uses the United States Census Income Dataset provided by the UC Irvine Machine Learning Repository containing information about people from a 1994 Census database, including age, education, marital status, occupation, and whether they make more than $50,000 a year. The dataset consists of over 30k rows, where each row corresponds to a different person. For a given row, there are 14 features that the model conditions on to predict the income of the person. A few of the features are named above, and the exhaustive list can be found in the dataset link above. Set up your local development environment If you are using Colab or AI Platform Notebooks, your environment already meets all the requirements to run this notebook. If you are using an AI Platform Notebook, make sure the machine configuration type is 1 vCPU, 3.75 GB RAM or above. You can skip this step. Otherwise, make sure your environment meets this notebook's requirements. You need the following: The Google Cloud SDK Git Python 3 virtualenv Jupyter notebook running in a virtual environment with Python 3 The Google Cloud guide to Setting up a Python development environment and the Jupyter installation guide provide detailed instructions for meeting these requirements. The following steps provide a condensed set of instructions: Install and initialize the Cloud SDK. Install Python 3. Install virtualenv and create a virtual environment that uses Python 3. Activate that environment and run pip install jupyter in a shell to install Jupyter. Run jupyter notebook in a shell to launch Jupyter. Open this notebook in the Jupyter Notebook Dashboard. Set up your GCP project If you do not have a GCP project then the following steps are required, regardless of your notebook environment. Select or create a GCP project. When you first create an account, you get a $300 free credit towards your compute/storage costs. Make sure that billing is enabled for your project. Create a GCP bucket so that we can store files. PIP install packages and dependencies Install additional dependencies not installed in the Notebook environment Note: Jupyter runs lines prefixed with ! as shell commands, and it interpolates Python variables prefixed with $ into these commands.
# Use the latest major GA version of the framework.
! pip install --upgrade ml-pipeline-gen PyYAML
examples/getting_started_notebook.ipynb
GoogleCloudPlatform/ml-pipeline-generator-python
apache-2.0
Note: Try installing using sudo if the above command throws any permission errors. Restart the kernel to allow the package to be imported in Jupyter Notebooks. Authenticate your GCP account If you are using AI Platform Notebooks, your environment is already authenticated; skip this step. Only if you are on a local Jupyter Notebook or Colab environment, follow these steps: Create a new Service Account. Add the following roles: Compute Engine > Compute Admin, ML Engine > ML Engine Admin and Storage > Storage Object Admin. Download a JSON file that contains your key; it will be stored in your local environment.
# If you are on Colab, run this cell and upload your service account's # json key. import os import sys if 'google.colab' in sys.modules: from google.colab import files keyfile_upload = files.upload() keyfile = list(keyfile_upload.keys())[0] keyfile_path = os.path.abspath(keyfile) %env GOOGLE_APPLICATION_CREDENTIALS $keyfile_path ! gcloud auth activate-service-account --key-file $keyfile_path # If you are running this notebook locally, replace the string below # with the path to your service account key and run this cell # to authenticate your GCP account. %env GOOGLE_APPLICATION_CREDENTIALS /path/to/service/account ! gcloud auth activate-service-account --key-file '/path/to/service/account'
examples/getting_started_notebook.ipynb
GoogleCloudPlatform/ml-pipeline-generator-python
apache-2.0
Before You Begin The tool requires the following Google Cloud APIs to be enabled: * Google Cloud Storage * Cloud AI Platform * Google Kubernetes Engine Add your Project ID below. You can change the region if you would like, but it is not a requirement.
PROJECT_ID = "[PROJECT-ID]" #@param {type:"string"}
COMPUTE_REGION = "us-central1" # Currently only supported region.
examples/getting_started_notebook.ipynb
GoogleCloudPlatform/ml-pipeline-generator-python
apache-2.0
Also add your bucket name:
BUCKET_NAME = "[BUCKET-ID]" #@param {type:"string"}

!gcloud config set project {PROJECT_ID}
examples/getting_started_notebook.ipynb
GoogleCloudPlatform/ml-pipeline-generator-python
apache-2.0
The tool requires the following Google Cloud APIs to be enabled:
!gcloud services enable ml.googleapis.com \
    compute.googleapis.com \
    storage-component.googleapis.com
examples/getting_started_notebook.ipynb
GoogleCloudPlatform/ml-pipeline-generator-python
apache-2.0
Create a model locally In this section we will create a model locally, as many users already have. This section illustrates the on-prem method of creating models; in the next section we will show how to train them on GCP so that you can leverage the benefits of the cloud, like easy distributed training, parallel hyperparameter tuning and fast, up-to-date accelerators. The next block of code highlights how we will preprocess the census data. It is out of scope for this Colab to dive into how the code works. All that is important is that the function load_data returns 4 values: the training features, the training labels, the evaluation features and the evaluation labels, in that order (this function also uploads data to GCS). Run the hidden cell below.
#@title # python3 # Copyright 2019 Google Inc. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """Train a simple TF classifier for MNIST dataset. This example comes from the cloudml-samples keras demo. github.com/GoogleCloudPlatform/cloudml-samples/blob/master/census/tf-keras """ from __future__ import absolute_import from __future__ import division from __future__ import print_function import os from six.moves import urllib import tempfile import numpy as np import pandas as pd import tensorflow.compat.v1 as tf DATA_DIR = os.path.join(tempfile.gettempdir(), "census_data") DATA_URL = ("https://storage.googleapis.com/cloud-samples-data/ai-platform" + "/census/data/") TRAINING_FILE = "adult.data.csv" EVAL_FILE = "adult.test.csv" TRAINING_URL = os.path.join(DATA_URL, TRAINING_FILE) EVAL_URL = os.path.join(DATA_URL, EVAL_FILE) _CSV_COLUMNS = [ "age", "workclass", "fnlwgt", "education", "education_num", "marital_status", "occupation", "relationship", "race", "gender", "capital_gain", "capital_loss", "hours_per_week", "native_country", "income_bracket", ] _LABEL_COLUMN = "income_bracket" UNUSED_COLUMNS = ["fnlwgt", "education", "gender"] _CATEGORICAL_TYPES = { "workclass": pd.api.types.CategoricalDtype(categories=[ "Federal-gov", "Local-gov", "Never-worked", "Private", "Self-emp-inc", "Self-emp-not-inc", "State-gov", "Without-pay" ]), "marital_status": pd.api.types.CategoricalDtype(categories=[ "Divorced", "Married-AF-spouse", "Married-civ-spouse", "Married-spouse-absent", "Never-married", "Separated", "Widowed" ]), "occupation": pd.api.types.CategoricalDtype([ "Adm-clerical", "Armed-Forces", "Craft-repair", "Exec-managerial", "Farming-fishing", "Handlers-cleaners", "Machine-op-inspct", "Other-service", "Priv-house-serv", "Prof-specialty", "Protective-serv", "Sales", "Tech-support", "Transport-moving" ]), "relationship": pd.api.types.CategoricalDtype(categories=[ "Husband", "Not-in-family", "Other-relative", "Own-child", "Unmarried", "Wife" ]), "race": pd.api.types.CategoricalDtype(categories=[ "Amer-Indian-Eskimo", "Asian-Pac-Islander", "Black", "Other", "White" ]), "native_country": pd.api.types.CategoricalDtype(categories=[ "Cambodia", "Canada", "China", "Columbia", "Cuba", "Dominican-Republic", "Ecuador", "El-Salvador", "England", "France", "Germany", "Greece", "Guatemala", "Haiti", "Holand-Netherlands", "Honduras", "Hong", "Hungary", "India", "Iran", "Ireland", "Italy", "Jamaica", "Japan", "Laos", "Mexico", "Nicaragua", "Outlying-US(Guam-USVI-etc)", "Peru", "Philippines", "Poland", "Portugal", "Puerto-Rico", "Scotland", "South", "Taiwan", "Thailand", "Trinadad&Tobago", "United-States", "Vietnam", "Yugoslavia" ]), "income_bracket": pd.api.types.CategoricalDtype(categories=[ "<=50K", ">50K" ]) } def _download_and_clean_file(filename, url): """Downloads data from url, and makes changes to match the CSV format. The CSVs may use spaces after the comma delimters (non-standard) or include rows which do not represent well-formed examples. This function strips out some of these problems. 
Args: filename: filename to save url to url: URL of resource to download """ temp_file, _ = urllib.request.urlretrieve(url) with tf.io.gfile.GFile(temp_file, "r") as temp_file_object: with tf.io.gfile.GFile(filename, "w") as file_object: for line in temp_file_object: line = line.strip() line = line.replace(", ", ",") if not line or "," not in line: continue if line[-1] == ".": line = line[:-1] line += "\n" file_object.write(line) tf.io.gfile.remove(temp_file) def download(data_dir): """Downloads census data if it is not already present. Args: data_dir: directory where we will access/save the census data Returns: foo """ tf.io.gfile.makedirs(data_dir) training_file_path = os.path.join(data_dir, TRAINING_FILE) if not tf.io.gfile.exists(training_file_path): _download_and_clean_file(training_file_path, TRAINING_URL) eval_file_path = os.path.join(data_dir, EVAL_FILE) if not tf.io.gfile.exists(eval_file_path): _download_and_clean_file(eval_file_path, EVAL_URL) return training_file_path, eval_file_path def upload(train_df, eval_df, train_path, eval_path): train_df.to_csv(os.path.join(os.path.dirname(train_path), TRAINING_FILE), index=False, header=False) eval_df.to_csv(os.path.join(os.path.dirname(eval_path), EVAL_FILE), index=False, header=False) def preprocess(dataframe): """Converts categorical features to numeric. Removes unused columns. Args: dataframe: Pandas dataframe with raw data Returns: Dataframe with preprocessed data """ dataframe = dataframe.drop(columns=UNUSED_COLUMNS) # Convert integer valued (numeric) columns to floating point numeric_columns = dataframe.select_dtypes(["int64"]).columns dataframe[numeric_columns] = dataframe[numeric_columns].astype("float32") # Convert categorical columns to numeric cat_columns = dataframe.select_dtypes(["object"]).columns dataframe[cat_columns] = dataframe[cat_columns].apply( lambda x: x.astype(_CATEGORICAL_TYPES[x.name])) dataframe[cat_columns] = dataframe[cat_columns].apply( lambda x: x.cat.codes) return dataframe def standardize(dataframe): """Scales numerical columns using their means and standard deviation. Args: dataframe: Pandas dataframe Returns: Input dataframe with the numerical columns scaled to z-scores """ dtypes = list(zip(dataframe.dtypes.index, map(str, dataframe.dtypes))) for column, dtype in dtypes: if dtype == "float32": dataframe[column] -= dataframe[column].mean() dataframe[column] /= dataframe[column].std() return dataframe def load_data(train_path="", eval_path=""): """Loads data into preprocessed (train_x, train_y, eval_y, eval_y) dataframes. Args: train_path: Local or GCS path to uploaded train data to. eval_path: Local or GCS path to uploaded eval data to. Returns: A tuple (train_x, train_y, eval_x, eval_y), where train_x and eval_x are Pandas dataframes with features for training and train_y and eval_y are numpy arrays with the corresponding labels. """ # Download Census dataset: Training and eval csv files. training_file_path, eval_file_path = download(DATA_DIR) train_df = pd.read_csv( training_file_path, names=_CSV_COLUMNS, na_values="?") eval_df = pd.read_csv(eval_file_path, names=_CSV_COLUMNS, na_values="?") train_df = preprocess(train_df) eval_df = preprocess(eval_df) # Split train and eval data with labels. The pop method copies and removes # the label column from the dataframe. train_x, train_y = train_df, train_df.pop(_LABEL_COLUMN) eval_x, eval_y = eval_df, eval_df.pop(_LABEL_COLUMN) # Join train_x and eval_x to normalize on overall means and standard # deviations. Then separate them again. 
all_x = pd.concat([train_x, eval_x], keys=["train", "eval"]) all_x = standardize(all_x) train_x, eval_x = all_x.xs("train"), all_x.xs("eval") # Rejoin features and labels and upload to GCS. if train_path and eval_path: train_df = train_x.copy() train_df[_LABEL_COLUMN] = train_y eval_df = eval_x.copy() eval_df[_LABEL_COLUMN] = eval_y upload(train_df, eval_df, train_path, eval_path) # Reshape label columns for use with tf.data.Dataset train_y = np.asarray(train_y).astype("float32").reshape((-1, 1)) eval_y = np.asarray(eval_y).astype("float32").reshape((-1, 1)) return train_x, train_y, eval_x, eval_y
examples/getting_started_notebook.ipynb
GoogleCloudPlatform/ml-pipeline-generator-python
apache-2.0
Now we train a scikit-learn SVM model on this data.
from sklearn import svm

train_x, train_y, eval_x, eval_y = load_data()
train_y, eval_y = [np.ravel(x) for x in [train_y, eval_y]]

classifier = svm.SVC(C=1)
classifier.fit(train_x, train_y)

score = classifier.score(eval_x, eval_y)
print('Accuracy is {}'.format(score))
examples/getting_started_notebook.ipynb
GoogleCloudPlatform/ml-pipeline-generator-python
apache-2.0
Usually, pipelines have more complexity to them, such as hyperparameter tuning. At the end, however, we have a single best model that we want to serve in production. Preparing an SVM classifier for training on Cloud AI Platform We now have a model which we think is good, but we want to bring this model onto GCP while at the same time adding features such as training and prediction so that future runs will be simple. We can leverage the examples in the ML Pipeline Generator repository, as they give good examples and templates to follow. So first we clone the GitHub repo.
!git clone https://github.com/GoogleCloudPlatform/ml-pipeline-generator-python.git
examples/getting_started_notebook.ipynb
GoogleCloudPlatform/ml-pipeline-generator-python
apache-2.0
Then we copy the sklearn example to the current directory and go into this folder.
!cp -r ml-pipeline-generator-python/examples/sklearn sklearn-demo
%cd sklearn-demo
examples/getting_started_notebook.ipynb
GoogleCloudPlatform/ml-pipeline-generator-python
apache-2.0
We now modify the config.yaml.example file with our project ID, bucket ID and model name. Note that the training and evaluation data files should be stored in your bucket already, unless you decided to handle that upload in your preprocessing function (as in this lab).
%%writefile config.yaml # Copyright 2020 Google Inc. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # # Config file for ML Pipeline Generator. project_id: [PROJECT ID] bucket_id: [BUCKET ID] region: "us-central1" scale_tier: "STANDARD_1" runtime_version: "1.15" python_version: "3.7" package_name: "ml_pipeline_gen" machine_type_pred: "mls1-c4-m2" data: schema: - "age" - "workclass" - "education_num" - "marital_status" - "occupation" - "relationship" - "race" - "capital_gain" - "capital_loss" - "hours_per_week" - "native_country" - "income_bracket" train: "gs://[BUCKET ID]/[MODEL NAME]/data/adult.data.csv" evaluation: "gs://[BUCKET ID]/[MODEL NAME]/data/adult.test.csv" prediction: input_data_paths: - "gs://[BUCKET ID]/[MODEL NAME]/inputs/*" input_format: "JSON" output_format: "JSON" model: # Name must start with a letter and only contain letters, numbers, and # underscores. name: [MODEL NAME] path: "model.sklearn_model" target: "income_bracket" model_params: input_args: C: type: "float" help: "Regularization parameter, must be positive." default: 1.0 # Relative path. hyperparam_config: "hptuning_config.yaml"
examples/getting_started_notebook.ipynb
GoogleCloudPlatform/ml-pipeline-generator-python
apache-2.0
We now copy our previous preprocessing code into the file census_preprocess.py. Run the hidden cell below.
#@title %%writefile model/census_preprocess.py # python3 # Copyright 2019 Google Inc. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """Train a simple TF classifier for MNIST dataset. This example comes from the cloudml-samples keras demo. github.com/GoogleCloudPlatform/cloudml-samples/blob/master/census/tf-keras """ from __future__ import absolute_import from __future__ import division from __future__ import print_function import os from six.moves import urllib import tempfile import numpy as np import pandas as pd import tensorflow.compat.v1 as tf DATA_DIR = os.path.join(tempfile.gettempdir(), "census_data") DATA_URL = ("https://storage.googleapis.com/cloud-samples-data/ai-platform" + "/census/data/") TRAINING_FILE = "adult.data.csv" EVAL_FILE = "adult.test.csv" TRAINING_URL = os.path.join(DATA_URL, TRAINING_FILE) EVAL_URL = os.path.join(DATA_URL, EVAL_FILE) _CSV_COLUMNS = [ "age", "workclass", "fnlwgt", "education", "education_num", "marital_status", "occupation", "relationship", "race", "gender", "capital_gain", "capital_loss", "hours_per_week", "native_country", "income_bracket", ] _LABEL_COLUMN = "income_bracket" UNUSED_COLUMNS = ["fnlwgt", "education", "gender"] _CATEGORICAL_TYPES = { "workclass": pd.api.types.CategoricalDtype(categories=[ "Federal-gov", "Local-gov", "Never-worked", "Private", "Self-emp-inc", "Self-emp-not-inc", "State-gov", "Without-pay" ]), "marital_status": pd.api.types.CategoricalDtype(categories=[ "Divorced", "Married-AF-spouse", "Married-civ-spouse", "Married-spouse-absent", "Never-married", "Separated", "Widowed" ]), "occupation": pd.api.types.CategoricalDtype([ "Adm-clerical", "Armed-Forces", "Craft-repair", "Exec-managerial", "Farming-fishing", "Handlers-cleaners", "Machine-op-inspct", "Other-service", "Priv-house-serv", "Prof-specialty", "Protective-serv", "Sales", "Tech-support", "Transport-moving" ]), "relationship": pd.api.types.CategoricalDtype(categories=[ "Husband", "Not-in-family", "Other-relative", "Own-child", "Unmarried", "Wife" ]), "race": pd.api.types.CategoricalDtype(categories=[ "Amer-Indian-Eskimo", "Asian-Pac-Islander", "Black", "Other", "White" ]), "native_country": pd.api.types.CategoricalDtype(categories=[ "Cambodia", "Canada", "China", "Columbia", "Cuba", "Dominican-Republic", "Ecuador", "El-Salvador", "England", "France", "Germany", "Greece", "Guatemala", "Haiti", "Holand-Netherlands", "Honduras", "Hong", "Hungary", "India", "Iran", "Ireland", "Italy", "Jamaica", "Japan", "Laos", "Mexico", "Nicaragua", "Outlying-US(Guam-USVI-etc)", "Peru", "Philippines", "Poland", "Portugal", "Puerto-Rico", "Scotland", "South", "Taiwan", "Thailand", "Trinadad&Tobago", "United-States", "Vietnam", "Yugoslavia" ]), "income_bracket": pd.api.types.CategoricalDtype(categories=[ "<=50K", ">50K" ]) } def _download_and_clean_file(filename, url): """Downloads data from url, and makes changes to match the CSV format. The CSVs may use spaces after the comma delimters (non-standard) or include rows which do not represent well-formed examples. 
This function strips out some of these problems. Args: filename: filename to save url to url: URL of resource to download """ temp_file, _ = urllib.request.urlretrieve(url) with tf.io.gfile.GFile(temp_file, "r") as temp_file_object: with tf.io.gfile.GFile(filename, "w") as file_object: for line in temp_file_object: line = line.strip() line = line.replace(", ", ",") if not line or "," not in line: continue if line[-1] == ".": line = line[:-1] line += "\n" file_object.write(line) tf.io.gfile.remove(temp_file) def download(data_dir): """Downloads census data if it is not already present. Args: data_dir: directory where we will access/save the census data Returns: foo """ tf.io.gfile.makedirs(data_dir) training_file_path = os.path.join(data_dir, TRAINING_FILE) if not tf.io.gfile.exists(training_file_path): _download_and_clean_file(training_file_path, TRAINING_URL) eval_file_path = os.path.join(data_dir, EVAL_FILE) if not tf.io.gfile.exists(eval_file_path): _download_and_clean_file(eval_file_path, EVAL_URL) return training_file_path, eval_file_path def upload(train_df, eval_df, train_path, eval_path): train_df.to_csv(os.path.join(os.path.dirname(train_path), TRAINING_FILE), index=False, header=False) eval_df.to_csv(os.path.join(os.path.dirname(eval_path), EVAL_FILE), index=False, header=False) def preprocess(dataframe): """Converts categorical features to numeric. Removes unused columns. Args: dataframe: Pandas dataframe with raw data Returns: Dataframe with preprocessed data """ dataframe = dataframe.drop(columns=UNUSED_COLUMNS) # Convert integer valued (numeric) columns to floating point numeric_columns = dataframe.select_dtypes(["int64"]).columns dataframe[numeric_columns] = dataframe[numeric_columns].astype("float32") # Convert categorical columns to numeric cat_columns = dataframe.select_dtypes(["object"]).columns dataframe[cat_columns] = dataframe[cat_columns].apply( lambda x: x.astype(_CATEGORICAL_TYPES[x.name])) dataframe[cat_columns] = dataframe[cat_columns].apply( lambda x: x.cat.codes) return dataframe def standardize(dataframe): """Scales numerical columns using their means and standard deviation. Args: dataframe: Pandas dataframe Returns: Input dataframe with the numerical columns scaled to z-scores """ dtypes = list(zip(dataframe.dtypes.index, map(str, dataframe.dtypes))) for column, dtype in dtypes: if dtype == "float32": dataframe[column] -= dataframe[column].mean() dataframe[column] /= dataframe[column].std() return dataframe def load_data(train_path="", eval_path=""): """Loads data into preprocessed (train_x, train_y, eval_y, eval_y) dataframes. Args: train_path: Local or GCS path to uploaded train data to. eval_path: Local or GCS path to uploaded eval data to. Returns: A tuple (train_x, train_y, eval_x, eval_y), where train_x and eval_x are Pandas dataframes with features for training and train_y and eval_y are numpy arrays with the corresponding labels. """ # Download Census dataset: Training and eval csv files. training_file_path, eval_file_path = download(DATA_DIR) train_df = pd.read_csv( training_file_path, names=_CSV_COLUMNS, na_values="?") eval_df = pd.read_csv(eval_file_path, names=_CSV_COLUMNS, na_values="?") train_df = preprocess(train_df) eval_df = preprocess(eval_df) # Split train and eval data with labels. The pop method copies and removes # the label column from the dataframe. 
train_x, train_y = train_df, train_df.pop(_LABEL_COLUMN) eval_x, eval_y = eval_df, eval_df.pop(_LABEL_COLUMN) # Join train_x and eval_x to normalize on overall means and standard # deviations. Then separate them again. all_x = pd.concat([train_x, eval_x], keys=["train", "eval"]) all_x = standardize(all_x) train_x, eval_x = all_x.xs("train"), all_x.xs("eval") # Rejoin features and labels and upload to GCS. if train_path and eval_path: train_df = train_x.copy() train_df[_LABEL_COLUMN] = train_y eval_df = eval_x.copy() eval_df[_LABEL_COLUMN] = eval_y upload(train_df, eval_df, train_path, eval_path) # Reshape label columns for use with tf.data.Dataset train_y = np.asarray(train_y).astype("float32").reshape((-1, 1)) eval_y = np.asarray(eval_y).astype("float32").reshape((-1, 1)) return train_x, train_y, eval_x, eval_y
examples/getting_started_notebook.ipynb
GoogleCloudPlatform/ml-pipeline-generator-python
apache-2.0
We perform a similar copy and paste into the sklearn_model.py file, this time adding a parameter C which we will use for hyperparameter tuning. You can add as many hyperparameters as you require.
%%writefile model/sklearn_model.py # python3 # Copyright 2019 Google Inc. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """Train a simple SVM classifier.""" import argparse import numpy as np from sklearn import svm from model.census_preprocess import load_data def get_model(params): """Trains a classifier.""" classifier = svm.SVC(C=params.C) return classifier
examples/getting_started_notebook.ipynb
GoogleCloudPlatform/ml-pipeline-generator-python
apache-2.0
We now specify the hyperparameters for our training runs using the hyperparameter tuning YAML format for CAIP.
%%writefile hptuning_config.yaml # Copyright 2020 Google Inc. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. trainingInput: scaleTier: STANDARD_1 hyperparameters: goal: MAXIMIZE maxTrials: 2 maxParallelTrials: 2 hyperparameterMetricTag: score enableTrialEarlyStopping: TRUE params: - parameterName: C type: DOUBLE minValue: .001 maxValue: 10 scaleType: UNIT_LOG_SCALE
examples/getting_started_notebook.ipynb
GoogleCloudPlatform/ml-pipeline-generator-python
apache-2.0
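As an aside (not part of the original example): additional hyperparameters can be wired in exactly the same way as C. The sketch below assumes a hypothetical second SVC parameter, gamma, and shows the two places it would be declared; the generated trainer would also need to expose a matching gamma command-line argument.

# Sketch only: gamma is a hypothetical extra tuning parameter, added the same
# way as C. In model/sklearn_model.py:
from sklearn import svm

def get_model(params):
    """Returns an SVC configured from the tuned hyperparameters."""
    return svm.SVC(C=params.C, gamma=params.gamma)

# ...and the matching entry appended under trainingInput.hyperparameters.params
# in hptuning_config.yaml:
#
#   - parameterName: gamma
#     type: DOUBLE
#     minValue: 0.0001
#     maxValue: 1.0
#     scaleType: UNIT_LOG_SCALE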
Run the Sklearn Model on CAIP We only modified two YAML files and the demo.py file to specify training, hyperparameter tuning and model prediction. Then we simply copied and pasted our existing code for preprocessing and building the model. We have not had to write any GCP-specific code; that is all handled by this solution. Now we can submit our jobs to the cloud with a few commands.
from ml_pipeline_gen.models import SklearnModel from model.census_preprocess import load_data
examples/getting_started_notebook.ipynb
GoogleCloudPlatform/ml-pipeline-generator-python
apache-2.0
Specify the path of your config.yaml file
config = "config.yaml"
examples/getting_started_notebook.ipynb
GoogleCloudPlatform/ml-pipeline-generator-python
apache-2.0
Now we can easily create our model, generate all the necessary Cloud AI Platform files, upload the data files and train the model in 4 simple commands. Note that our load_data function uploads the data files automatically; you can also upload them manually to the buckets you specified in the config.yaml file.
model = SklearnModel(config) model.generate_files() # this fn is from out preprocessing file and # automatically uploads our data to GCS load_data(model.data["train"], model.data["evaluation"]) job_id = model.train(tune=True)
examples/getting_started_notebook.ipynb
GoogleCloudPlatform/ml-pipeline-generator-python
apache-2.0
After training, we would like to test the model's predictions. First, deploy the model (the deploy call returns an automatically generated version name). Then request online predictions.
pred_input = [ [0.02599666, 6, 1.1365801, 4, 0, 1, 4, 0.14693314, -0.21713187, -0.034039237, 38], ] version = model.deploy(job_id=job_id) preds = model.online_predict(pred_input, version=version) print("Features: {}".format(pred_input)) print("Predictions: {}".format(preds))
examples/getting_started_notebook.ipynb
GoogleCloudPlatform/ml-pipeline-generator-python
apache-2.0
Build up command-line parameters so that we can call methods on our Classifier() object c
from argparse import Namespace ns = Namespace() ns.database = 'stoqs_september2013_t' ns.classifier='Decision_Tree' ns.inputs=['bbp700', 'fl700_uncorr'] ns.labels=['diatom', 'dino1', 'dino2', 'sediment'] ns.test_size=0.4 ns.train_size=0.4 ns.verbose=True c.args = ns
stoqs/contrib/notebooks/classify_data.ipynb
danellecline/stoqs
gpl-3.0
Load the labeled data, normalize, and split into train and test sets (borrowing from classify.py's createClassifier() method)
from sklearn.preprocessing import StandardScaler from sklearn.model_selection import train_test_split X, y = c.loadLabeledData('Labeled Plankton', classes=('diatom', 'sediment')) X = StandardScaler().fit_transform(X) X_train, X_test, y_train, y_test = train_test_split( X, y, test_size=c.args.test_size, train_size=c.args.train_size)
stoqs/contrib/notebooks/classify_data.ipynb
danellecline/stoqs
gpl-3.0
Set up plotting
%pylab inline import pylab as plt from matplotlib.colors import ListedColormap plt.rcParams['figure.figsize'] = (27, 3)
stoqs/contrib/notebooks/classify_data.ipynb
danellecline/stoqs
gpl-3.0
Plot classifier comparisons as in http://scikit-learn.org/stable/auto_examples/classification/plot_classifier_comparison.html
for i, (name, clf) in enumerate(c.classifiers.items()): x_min, x_max = X[:, 0].min() - .5, X[:, 0].max() + .5 y_min, y_max = X[:, 1].min() - .5, X[:, 1].max() + .5 xx, yy = np.meshgrid(np.arange(x_min, x_max, .02), np.arange(y_min, y_max, .02)) cm = plt.cm.RdBu cm_bright = ListedColormap(['#FF0000', '#0000FF']) ax = plt.subplot(1, len(c.classifiers) + 1, i + 1) clf.fit(X_train, y_train) score = clf.score(X_test, y_test) # Plot the decision boundary. For that, we will assign a color to each # point in the mesh [x_min, m_max]x[y_min, y_max]. if hasattr(clf, "decision_function"): Z = clf.decision_function(np.c_[xx.ravel(), yy.ravel()]) else: Z = clf.predict_proba(np.c_[xx.ravel(), yy.ravel()])[:, 1] # Put the result into a color plot Z = Z.reshape(xx.shape) ax.contourf(xx, yy, Z, cmap=cm, alpha=.8) # Plot also the training points ax.scatter(X_train[:, 0], X_train[:, 1], c=y_train, cmap=cm_bright) # and testing points ax.scatter(X_test[:, 0], X_test[:, 1], c=y_test, cmap=cm_bright, alpha=0.6) ax.set_xlim(xx.min(), xx.max()) ax.set_ylim(yy.min(), yy.max()) ax.set_xticks(()) ax.set_yticks(()) ax.set_title(name) ax.text(xx.max() - .3, yy.min() + .3, ('%.2f' % score).lstrip('0'), size=15, horizontalalignment='right')
stoqs/contrib/notebooks/classify_data.ipynb
danellecline/stoqs
gpl-3.0
Networks of features based on co-occurrence The features module in the tethne.networks subpackage contains a few functions for generating networks of features based on co-occurrence.
from tethne.networks import features
Feature Co-Occurrence.ipynb
diging/tethne-notebooks
gpl-3.0
We can use index_feature() to tokenize the abstract into individual words.
corpus.index_feature('abstract', tokenize=lambda x: x.split(' '))
Feature Co-Occurrence.ipynb
diging/tethne-notebooks
gpl-3.0
Here are all of the papers whose abstracts contain the word 'arthropod':
abstractTerms = corpus.features['abstract'] abstractTerms.papers_containing('arthropod')
Feature Co-Occurrence.ipynb
diging/tethne-notebooks
gpl-3.0
The transform method allows us to transform the values from one featureset using a custom function. One popular transformation for wordcount data is the term frequency * inverse document frequency (tf*idf) transformation. tf*idf weights wordcounts for each document based on how frequent each word is in the rest of the corpus, and is supposed to bring to the foreground the words that are the most "important" for each document.
from math import log def tfidf(f, c, C, DC): """ Apply the term frequency * inverse document frequency transformation. """ tf = float(c) idf = log(float(len(abstractTerms.features))/float(DC)) return tf*idf corpus.features['abstracts_tfidf'] = abstractTerms.transform(tfidf)
Feature Co-Occurrence.ipynb
diging/tethne-notebooks
gpl-3.0
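To make the weighting concrete, here is a tiny standalone sketch of the same arithmetic with made-up numbers (a word that occurs 3 times in a document, a corpus of 500 documents, 20 of which contain the word); it mirrors the tfidf() function above rather than adding anything new.

from math import log

count_in_doc = 3        # raw frequency of the word in this document (tf)
num_documents = 500     # number of documents in the corpus
docs_with_word = 20     # number of documents containing the word (DC)

# Words that appear in many documents get a small idf, and so a small weight.
count_in_doc * log(float(num_documents) / float(docs_with_word))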
I can specify some other transformation by first defining a transformer function, and then passing it as an argument to transform. A transformer function should accept the following parameters, and return a single numerical value (int or float).
| Parameter | Description |
| --------- | ----------------------------------------------------------------- |
| f | Representation of the feature (e.g. string). |
| v | Value of the feature in the document (e.g. frequency). |
| C | Value of the feature in the Corpus (e.g. global frequency). |
| DC | Number of documents in which the feature occurs. |
For example:
def mytransformer(s, c, C, DC): """ Doubles the feature value and divides by the overall value in the Corpus. """ return c*2./(C)
Feature Co-Occurrence.ipynb
diging/tethne-notebooks
gpl-3.0
We can then pass our transformer function to transform() as the first positional argument.
corpus.features['abstracts_transformed'] = abstractTerms.transform(mytransformer)
Feature Co-Occurrence.ipynb
diging/tethne-notebooks
gpl-3.0
Here is the impact on the value for 'arthropod' in one document, using the two transformations above.
print 'Before: '.ljust(15), corpus.features['abstract'].features['WOS:000324532900018'].value('arthropod') print 'TF*IDF: '.ljust(15), corpus.features['abstracts_tfidf'].features['WOS:000324532900018'].value('arthropod') print 'mytransformer: '.ljust(15), corpus.features['abstracts_transformed'].features['WOS:000324532900018'].value('arthropod')
Feature Co-Occurrence.ipynb
diging/tethne-notebooks
gpl-3.0
We can also use transform() to remove words from our FeatureSet. For example, we can apply the NLTK stoplist and remove too-common or too-rare words:
from nltk.corpus import stopwords stoplist = stopwords.words() def apply_stoplist(f, v, c, dc): if f in stoplist or dc > 50 or dc < 3: return 0 return v corpus.features['abstracts_filtered'] = corpus.features['abstracts_tfidf'].transform(apply_stoplist) print 'Before: '.ljust(10), len(corpus.features['abstracts_tfidf'].index) print 'After: '.ljust(10), len(corpus.features['abstracts_filtered'].index)
Feature Co-Occurrence.ipynb
diging/tethne-notebooks
gpl-3.0
The mutual_information function in the features module generates a network based on the pointwise mutual information of each pair of features in a featureset. The first argument is a list of Papers, just like most other network-building functions. The second argument is the featureset that we wish to use.
MI_graph = features.mutual_information(corpus, 'abstracts_filtered', min_weight=0.7)
Feature Co-Occurrence.ipynb
diging/tethne-notebooks
gpl-3.0
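For readers unfamiliar with the measure, the sketch below illustrates the general idea behind pointwise mutual information using made-up document counts; it is not Tethne's exact implementation, just the underlying formula.

from math import log

n_docs = 1000.0          # hypothetical corpus size
docs_with_x = 80.0       # documents containing feature x
docs_with_y = 60.0       # documents containing feature y
docs_with_both = 30.0    # documents containing both features

p_x = docs_with_x / n_docs
p_y = docs_with_y / n_docs
p_xy = docs_with_both / n_docs

# PMI is positive when x and y co-occur more often than chance would predict.
log(p_xy / (p_x * p_y))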
Take a look at the ratio of nodes to edges to get a sense of how to tune the min_weight parameter. If you have an extremely high number of edges for the number of nodes, then you should probably increase min_weight to obtain a more legible network. Depending on your field, you may have some guidance from theory as well.
print 'This graph has {0} nodes and {1} edges'.format(MI_graph.order(), MI_graph.size())
Feature Co-Occurrence.ipynb
diging/tethne-notebooks
gpl-3.0
Once again, we'll use the GraphML writer to generate a visualizable network file.
from tethne.writers import graph mi_outpath = '/Users/erickpeirson/Projects/tethne-notebooks/output/mi_graph.graphml' graph.to_graphml(MI_graph, mi_outpath)
Feature Co-Occurrence.ipynb
diging/tethne-notebooks
gpl-3.0
If we'd done that, we would have seen on average about 6% return per year! That's above the average inflation rate of somewhere between 3 and 5 percent, so we're looking pretty good, right? Looking good? Well, yes and no. On the one hand, we did make money (at the expense of some risk, of course). But what if we'd chosen a better company to invest in, like Apple? Or what if we'd invested instead in an index fund like SPY (the SPDR S&P 500 "Spyder" ETF)?
fig = plt.figure() ax = reader["Adj Close", :, "SPY"].plot(label="SPY") ax = reader["Adj Close", :, "AAPL"].plot(label="AAPL", ax=ax) ax = reader["Adj Close", :, "GE"].plot(label="GE", ax=ax) ax.legend() ax.set_title("Stock Adjusted Closing Price") plt.savefig("img/close_price_3.png")
assets/notebooks/2017/10/07/.ipynb_checkpoints/active_portfolio_management_slides-checkpoint.ipynb
amniskin/amniskin.github.io
mit
Better yet! We can continue this thought process ad infinitum. For instance, we could've invested in Google. Or done something even crazier (a plot I won't show for simplicity reasons) -- volatility trading.
fig = plt.figure() ax = reader["Adj Close", :, "SPY"].plot(label="SPY") ax = reader["Adj Close", :, "AAPL"].plot(label="AAPL", ax=ax) ax = reader["Adj Close", :, "GOOG"].plot(label="GOOG", ax=ax) ax = reader["Adj Close", :, "GE"].plot(label="GE", ax=ax) ax.legend() ax.set_title("Stock Adjusted Closing Price") plt.savefig("img/close_price_all_4.png")
assets/notebooks/2017/10/07/.ipynb_checkpoints/active_portfolio_management_slides-checkpoint.ipynb
amniskin/amniskin.github.io
mit
On many occasions, you will want to search a string in your scripts: e.g. does the following word appear in a text? Is the format of the following email address valid, and does it contain an @-symbol and at least one dot? To carry out such operations, the first thing you need is a string to search:
s = "In principio erat verbum, et verbum erat apud Deum."
Chapter 6 - Regular Expressions.ipynb
mikekestemont/ghent1516
mit
The next thing we define is the actual regular expression which we will use, i.e. the pattern that we will search for in the sentence we defined above. We pass this string to the compile() function in the re package, which will allow fast searching later on. Note that we put an r in front of this string when we initialize it, which turns our string into a so-called 'raw string'. While this is not always necessary, it is a good idea to do this consistently when dealing with regular expressions.
pattern = re.compile(r"verbum")
Chapter 6 - Regular Expressions.ipynb
mikekestemont/ghent1516
mit
Next, we can call the sub() method on this compiled pattern, in order to replace (or 'substitute') our pattern with another word, like this:
text = pattern.sub("XXX", s) print(text)
Chapter 6 - Regular Expressions.ipynb
mikekestemont/ghent1516
mit
Note the order of the arguments passed to sub(): first, the word we would like to replace our pattern with, and secondly our original string. We can just as easily get back our original string:
pattern2 = re.compile(r"XXX") text = pattern2.sub("verbum", s) print(text)
Chapter 6 - Regular Expressions.ipynb
mikekestemont/ghent1516
mit
So far nothing special: we are simply replacing one word with another. The smart ones among you will have noticed that we could have achieved the exact same result using the replace() function, which we came across in an earlier chapter. But now: say you would like to replace all vowels in a string. With regular expressions, this is a piece of cake:
vowel_pattern = re.compile(r"a|e|o|u|i") without_vowels = vowel_pattern.sub("X", s) print(without_vowels)
Chapter 6 - Regular Expressions.ipynb
mikekestemont/ghent1516
mit
Note how our pattern allows for a special syntax: the pipe symbol which we used allows us to express that one character OR another one is fine for the regular expression to match. Oops: the capital letter at the beginning of the sentence hasn't been replaced because we only included lowercase vowels in our pattern definition. Let's add the uppercase vowels to the regex:
vowel_pattern = re.compile(r"a|A|e|E|o|O|u|U|i|I") without_vowels = vowel_pattern.sub("X", s) print(without_vowels)
Chapter 6 - Regular Expressions.ipynb
mikekestemont/ghent1516
mit
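A handier alternative, not used in this chapter but worth knowing about, is the re.IGNORECASE flag: it makes the whole pattern case-insensitive, so you don't have to spell out both cases yourself.

import re

# s is the Latin sentence defined earlier; the flag covers both 'i' and 'I'.
vowel_pattern_ci = re.compile(r"a|e|o|u|i", re.IGNORECASE)
print(vowel_pattern_ci.sub("X", s))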
There is in fact an easy way to match all lowercase and uppercase characters in a string, like this:
ups = re.compile(r"[A-Z]")
lows = re.compile(r"[a-z]")
without_ups = ups.sub("X", s)
print(without_ups)
without_lows = lows.sub("X", s)
print(without_lows)
Chapter 6 - Regular Expressions.ipynb
mikekestemont/ghent1516
mit
These specific patterns are called 'ranges': they will match any lowercase or uppercase letter. In fact, you can use the same square-bracket syntax to replace the pipe syntax we used earlier.
vowel_pattern = re.compile(r"[aeoui]") without_vowels = vowel_pattern.sub("X", s) print(without_vowels)
Chapter 6 - Regular Expressions.ipynb
mikekestemont/ghent1516
mit
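Two extras that the examples above do not show, but which follow the same square-bracket syntax: ranges can be combined inside a single pair of brackets, and a ^ placed right after the opening bracket negates the set (match anything that is NOT listed).

import re

letters = re.compile(r"[A-Za-z]")            # any letter, either case
non_vowels = re.compile(r"[^aeouiAEOUI ]")   # anything except vowels and spaces
print(letters.sub("X", s))
print(non_vowels.sub("X", s))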
You can also look for more specific (and longer) letter groups by grouping them with round brackets:
p = re.compile(r"(ri)|(um)|(Th)")
print(p.sub("X", s))
Chapter 6 - Regular Expressions.ipynb
mikekestemont/ghent1516
mit
There is also a syntax to match any character (except the newline):
any_char = re.compile(r".") print(any_char.sub("X", s))
Chapter 6 - Regular Expressions.ipynb
mikekestemont/ghent1516
mit
If you would like your expression to match an actual dot, you have to escape it using a backslash:
dot = re.compile(r"\.") print(dot.sub("X", s))
Chapter 6 - Regular Expressions.ipynb
mikekestemont/ghent1516
mit
By the way, there are more characters that you might have to escape using a backslash. This is because they are part of the syntax used to define regular expressions: if you don't escape them, Python will not know that you are looking for a literal match. Characters that you typically might want to escape include: + ? . * ^ $ ( ) [ ] { } | and the backslash itself. For example:
s = "In principio [erat] verbum, et verbum erat apud Deum." brackets_wrong = re.compile(r"[|]") print(brackets_wrong.sub("X", s)) brackets_right = re.compile(r"(\[)|(\])") print(brackets_right.sub("X", s))
Chapter 6 - Regular Expressions.ipynb
mikekestemont/ghent1516
mit
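If you find these escaping rules hard to remember, the re package offers a convenience function, re.escape(), which inserts the backslashes for you. It is not used in the original chapter, but it is handy when the text you want to match literally comes from a variable:

import re

needle = "[erat]"
literal = re.compile(re.escape(needle))   # same as re.compile(r"\[erat\]")
print(literal.sub("XXX", "In principio [erat] verbum, et verbum erat apud Deum."))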
The syntax for regular expressions includes a whole range of possibilities which we simply cannot cover in full here. Because of that we will stick to a number of helpful examples. An interesting feature is that you can specify how many times a character has to occur. You can check whether a pattern occurs in a string using the match() function, which will return None if it doesn't find the pattern in the string searched:
pattern = re.compile(r"m{2,4}") print(pattern.match("")) print(pattern.match("m")) print(pattern.match("mm")) print(pattern.match("mmm")) print(pattern.match("mmmm")) print(pattern.match("mmmmm")) print(pattern.match("mmmmmm")) print(pattern.match("mmmmammm"))
Chapter 6 - Regular Expressions.ipynb
mikekestemont/ghent1516
mit
With the curly brackets, you indicate that you are only interested in the letter 'm' if it occurs 2, 3 or 4 times in a row in the string you search. Because None is returned if not a single match was found, you can use the outcome of match() in an if-statement. The following example shows how you can also use the curly brackets to match an exact number of occurrences (in this case five a's).
pattern = re.compile(r"a{5}") if pattern.match("aaaaa"): print("Found it!") else: print("Nope...") # or: if pattern.match("aa"): print("Found it!") else: print("Nope...")
Chapter 6 - Regular Expressions.ipynb
mikekestemont/ghent1516
mit
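One subtlety that the example above glosses over: because match() only anchors the pattern at the beginning of the string, a string with more than five a's still produces a match. If you want exactly five, anchor the end of the pattern as well (with $), or use fullmatch(), which is available from Python 3.4 onwards. A short sketch:

import re

exact = re.compile(r"a{5}")
print(exact.match("aaaaaa"))                 # still a match: only the start is anchored
print(re.compile(r"a{5}$").match("aaaaaa"))  # None: the $ also anchors the end
print(exact.fullmatch("aaaaa"))              # match: the whole string must be five a's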
Using a plus sign you can indicate that you want to match one or more occurrences of a character. A good example from the world of paper writing is double spaces, which can be hard to spot. In the code block below, we replace all multiple occurrences of a whitespace character with a single whitespace character. Note that you can use the whitespace character just like any other character (you don't have to escape it). Multiple occurrences of the whitespace character will be matched: it doesn't matter how many, as long as there is at least one:
paper = "My thesis on biology contains a lot of double spaces. I will remove them." mult = re.compile(r" +") print(mult.sub(" ", paper))
Chapter 6 - Regular Expressions.ipynb
mikekestemont/ghent1516
mit
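With character classes, the escaped dot and the plus operator we now have all the ingredients for the email-address question raised at the start of this section. The check below is a deliberately rough sketch (real email validation is far more involved): something before an @, something after it, and at least one dot in the part after the @. The address used is of course made up.

import re

rough_email = re.compile(r"[^@ ]+@[^@ ]+\.[^@ ]+")
print(rough_email.match("john.doe@example.com"))   # a match object
print(rough_email.match("not an email address"))   # None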
A similar piece of functionality is offered by the asterisk operator: here you can match multiple occurrences of the same character in a row OR none at all. Note the subtle difference with respect to the plus operator, which needs at least a single occurrence of the character to match. Here we use the search() function, which will search the entire string; the match() function which we used earlier will only look for matches at the very beginning of a string. Keep this in mind! The final pattern below yields a match, although there is not a single 'x' in the sentence. That is because the pattern with the asterisk says: "one or more x's in a row, or no x at all".
s = "In English some letters occur multiple times in a row." p1 = re.compile(r"t") p2 = re.compile(r"t*") p3 = re.compile(r"x") p4 = re.compile(r"x*") print(p1.search(s)) print(p2.search(s)) print(p3.search(s)) print(p4.search(s))
Chapter 6 - Regular Expressions.ipynb
mikekestemont/ghent1516
mit
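To make the difference between match() and search() concrete, here it is once more with the Latin sentence from earlier: match() fails because the sentence does not start with 'verbum', while search() happily finds the word further on.

import re

latin = "In principio erat verbum, et verbum erat apud Deum."
word = re.compile(r"verbum")
print(word.match(latin))    # None: 'verbum' is not at the very beginning
print(word.search(latin))   # a match object: 'verbum' occurs later in the string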
Interestingly, you can also use regular expressions to search inside words. Can you explain why the following patterns (don't) match?
candidates = ["good", "god", "gud", "gd"] p = re.compile(r"go+d") for c in candidates: print(p.match(c))
Chapter 6 - Regular Expressions.ipynb
mikekestemont/ghent1516
mit
Speaking of words: it might be interesting to know that you can use regular expressions for advanced string splitting. If you want to split a sentence across all whitespace characters, for instance, you can use the escaped sequence \s. This operator will match all whitespace characters, such as tabs, linebreaks, normal spaces etc. If you add a + sign, your pattern will match whole series of whitespace characters:
s = """This is a text on three lines with multiple instances of double spaces.""" whitespace = re.compile(r"\s+") print(whitespace.split(s))
Chapter 6 - Regular Expressions.ipynb
mikekestemont/ghent1516
mit
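As a closing note that goes slightly beyond the original chapter: \s has a few siblings that work the same way. \d matches any digit and \w matches any 'word' character (letters, digits and the underscore), and both can be combined with + just like \s. The sentence below is made up for the occasion.

import re

example = "Anno Domini 1517, Wittenberg."
print(re.sub(r"\d+", "####", example))   # replace the run of digits
print(re.findall(r"\w+", example))       # list all the 'words' in the sentence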