Columns: markdown (string, 0–37k chars), code (string, 1–33.3k chars), path (string, 8–215 chars), repo_name (string, 6–77 chars), license (15 classes)
Set up a recurrent Cloud Scheduler job for the Pub/Sub topic Read more about possible ways to create cron jobs here. Read about the cron job schedule format here.
scheduler_job_args = " ".join( [ SIMULATOR_SCHEDULER_JOB, f"--schedule='{SIMULATOR_SCHEDULE}'", f"--topic={SIMULATOR_PUBSUB_TOPIC}", f"--message-body={SIMULATOR_SCHEDULER_MESSAGE}", ] ) ! echo $scheduler_job_args ! gcloud scheduler jobs create pubsub $scheduler_job_args
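For reference, the Cloud Scheduler cron schedule is five space-separated fields (minute, hour, day of month, month, day of week). The values below are purely illustrative and are not the SIMULATOR_SCHEDULE used in this notebook:

```python
# Cron format: minute hour day-of-month month day-of-week
# "*/5 * * * *"  -> run every 5 minutes
# "0 9 * * 1"    -> run at 09:00 every Monday
EXAMPLE_SCHEDULE = "*/5 * * * *"  # hypothetical value, for illustration only
```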
community-content/tf_agents_bandits_movie_recommendation_with_kfp_and_vertex_sdk/mlops_pipeline_tf_agents_bandits_movie_recommendation/mlops_pipeline_tf_agents_bandits_movie_recommendation.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Define the Simulator logic in a Cloud Function to be triggered periodically, and deploy this Function. Specify dependencies of the Function in src/simulator/requirements.txt. Read more about the available configurable arguments for deploying a Function here. For instance, based on the complexity of your Function, you may want to adjust its memory and timeout. Note that the environment variables in ENV_VARS should be comma-separated, with no additional spaces or other characters in between. Read more about setting/updating/deleting environment variables here. Read more about sending predictions to Vertex endpoints here.
endpoints = ! gcloud ai endpoints list \ --region=$REGION \ --filter=display_name=$ENDPOINT_DISPLAY_NAME print("\n".join(endpoints), "\n") ENDPOINT_ID = endpoints[2].split(" ")[0] print(f"ENDPOINT_ID={ENDPOINT_ID}") ENV_VARS = ",".join( [ f"PROJECT_ID={PROJECT_ID}", f"REGION={REGION}", f"ENDPOINT_ID={ENDPOINT_ID}", f"RAW_DATA_PATH={RAW_DATA_PATH}", f"BATCH_SIZE={BATCH_SIZE}", f"RANK_K={RANK_K}", f"NUM_ACTIONS={NUM_ACTIONS}", ] ) ! echo $ENV_VARS ! gcloud functions deploy $SIMULATOR_CLOUD_FUNCTION \ --region=$REGION \ --trigger-topic=$SIMULATOR_PUBSUB_TOPIC \ --runtime=python37 \ --memory=512MB \ --timeout=200s \ --source=src/simulator \ --entry-point=simulate \ --stage-bucket=$BUCKET_NAME \ --update-env-vars=$ENV_VARS
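The actual prediction calls are made inside the Simulator's source (src/simulator/main.py), which is not shown in this excerpt. As a rough sketch of how an online prediction request can be sent to a Vertex AI endpoint with the Python SDK (the instance payload below is a placeholder, not the notebook's real MovieLens observation format):

```python
# Minimal sketch of an online prediction call, assuming the google-cloud-aiplatform SDK.
from google.cloud import aiplatform

aiplatform.init(project=PROJECT_ID, location=REGION)
endpoint = aiplatform.Endpoint(ENDPOINT_ID)

# The instances must match the deployed policy's serving signature;
# this dict is a placeholder for illustration only.
response = endpoint.predict(instances=[{"observation": [0.0] * RANK_K}])
print(response.predictions)
```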
community-content/tf_agents_bandits_movie_recommendation_with_kfp_and_vertex_sdk/mlops_pipeline_tf_agents_bandits_movie_recommendation/mlops_pipeline_tf_agents_bandits_movie_recommendation.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Create the Logger to asynchronously log prediction inputs and results Create the Logger to get environment feedback as rewards from the MovieLens simulation environment based on prediction observations and predicted actions, formulate trajectory data, and store that data back to BigQuery. The Logger closes the RL feedback loop from prediction to training data, and allows re-training of the policy on new training data. The Logger is triggered by a hook in the prediction code: at each prediction request, the prediction code messages a Pub/Sub topic, which triggers the Logger code. The workflow is: prediction container code (at prediction request) --> Pub/Sub --> Cloud Functions (logging predictions back to BigQuery). In production, this Logger logic can be modified to gather real-world feedback (rewards) based on observations and predicted actions. The Logger source code is src/logger/main.py. Run unit tests on the Logger
! python3 -m unittest src.logger.test_main
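The hook itself lives in the prediction container code, which is not included in this excerpt. As a rough sketch of what publishing a prediction record to the Logger's Pub/Sub topic can look like (the message fields and the function name are illustrative assumptions, not the notebook's exact implementation):

```python
# Sketch of a prediction-time hook that publishes to Pub/Sub, assuming google-cloud-pubsub.
import json
from google.cloud import pubsub_v1

publisher = pubsub_v1.PublisherClient()
topic_path = publisher.topic_path(PROJECT_ID, LOGGER_PUBSUB_TOPIC)

def log_prediction(observation, predicted_action):
    """Publish one observation/action pair; the Logger Function turns it into trajectory data."""
    message = json.dumps(
        {"observation": observation, "predicted_action": predicted_action}
    ).encode("utf-8")
    # publish() is asynchronous (it returns a future), matching the Logger design above.
    publisher.publish(topic_path, message)
```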
community-content/tf_agents_bandits_movie_recommendation_with_kfp_and_vertex_sdk/mlops_pipeline_tf_agents_bandits_movie_recommendation/mlops_pipeline_tf_agents_bandits_movie_recommendation.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Create a Pub/Sub topic Read more about creating Pub/Sub topics here
! gcloud pubsub topics create $LOGGER_PUBSUB_TOPIC
community-content/tf_agents_bandits_movie_recommendation_with_kfp_and_vertex_sdk/mlops_pipeline_tf_agents_bandits_movie_recommendation/mlops_pipeline_tf_agents_bandits_movie_recommendation.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Define the Logger logic in a Cloud Function to be triggered by a Pub/Sub topic, which the prediction code messages at each prediction request. Specify dependencies of the Function in src/logger/requirements.txt. Read more about the available configurable arguments for deploying a Function here. For instance, based on the complexity of your Function, you may want to adjust its memory and timeout. Note that the environment variables in ENV_VARS should be comma-separated, with no additional spaces or other characters in between. Read more about setting/updating/deleting environment variables here.
ENV_VARS = ",".join( [ f"PROJECT_ID={PROJECT_ID}", f"RAW_DATA_PATH={RAW_DATA_PATH}", f"BATCH_SIZE={BATCH_SIZE}", f"RANK_K={RANK_K}", f"NUM_ACTIONS={NUM_ACTIONS}", f"BIGQUERY_TMP_FILE={BIGQUERY_TMP_FILE}", f"BIGQUERY_DATASET_ID={BIGQUERY_DATASET_ID}", f"BIGQUERY_LOCATION={BIGQUERY_LOCATION}", f"BIGQUERY_TABLE_ID={BIGQUERY_TABLE_ID}", ] ) ! echo $ENV_VARS ! gcloud functions deploy $LOGGER_CLOUD_FUNCTION \ --region=$REGION \ --trigger-topic=$LOGGER_PUBSUB_TOPIC \ --runtime=python37 \ --memory=512MB \ --timeout=200s \ --source=src/logger \ --entry-point=log \ --stage-bucket=$BUCKET_NAME \ --update-env-vars=$ENV_VARS
community-content/tf_agents_bandits_movie_recommendation_with_kfp_and_vertex_sdk/mlops_pipeline_tf_agents_bandits_movie_recommendation/mlops_pipeline_tf_agents_bandits_movie_recommendation.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Create the Trigger to trigger re-training Create the Trigger to recurrently re-run the pipeline to re-train the policy on new training data, using kfp.v2.google.client.AIPlatformClient.create_schedule_from_job_spec. You create a pipeline for orchestration on Vertex Pipelines, and a Cloud Scheduler job that recurrently triggers the pipeline. The method also automatically creates a Cloud Function that acts as an intermediary between the Scheduler and Pipelines. You can find the source code here. When the Simulator sends prediction requests to the endpoint, the Logger is triggered by the hook in the prediction code to log prediction results to BigQuery as new training data. As this pipeline has a recurrent schedule, it utilizes the new training data to train a new policy, thereby closing the feedback loop. Theoretically speaking, if you set the pipeline schedule to be infinitely frequent, you would approach real-time, continuous training.
TRIGGER_SCHEDULE = "*/30 * * * *" # Schedule to trigger the pipeline. Eg. "*/30 * * * *" means every 30 mins. ingest_op = load_component_from_url( "https://raw.githubusercontent.com/GoogleCloudPlatform/vertex-ai-samples/62a2a7611499490b4b04d731d48a7ba87c2d636f/community-content/tf_agents_bandits_movie_recommendation_with_kfp_and_vertex_sdk/mlops_pipeline_tf_agents_bandits_movie_recommendation/src/ingester/component.yaml" ) train_op = load_component_from_url( "https://raw.githubusercontent.com/GoogleCloudPlatform/vertex-ai-samples/62a2a7611499490b4b04d731d48a7ba87c2d636f/community-content/tf_agents_bandits_movie_recommendation_with_kfp_and_vertex_sdk/mlops_pipeline_tf_agents_bandits_movie_recommendation/src/trainer/component.yaml" ) @dsl.pipeline(pipeline_root=PIPELINE_ROOT, name=f"{PIPELINE_NAME}-retraining") def pipeline( # Pipeline configs project_id: str, training_artifacts_dir: str, # BigQuery configs bigquery_table_id: str, bigquery_max_rows: int = 10000, # TF-Agents RL configs rank_k: int = 20, num_actions: int = 20, num_epochs: int = 5, tikhonov_weight: float = 0.01, agent_alpha: float = 10, ) -> None: """Authors a re-training pipeline for MovieLens movie recommendation system. Integrates the Ingester, Trainer and Deployer components. Args: project_id: GCP project ID. This is required because otherwise the BigQuery client will use the ID of the tenant GCP project created as a result of KFP, which doesn't have proper access to BigQuery. training_artifacts_dir: Path to store the Trainer artifacts (trained policy). bigquery_table_id: A string of the BigQuery table ID in the format of "project.dataset.table". bigquery_max_rows: Optional; maximum number of rows to ingest. rank_k: Optional; rank for matrix factorization in the MovieLens environment; also the observation dimension. num_actions: Optional; number of actions (movie items) to choose from. num_epochs: Optional; number of training epochs. tikhonov_weight: Optional; LinUCB Tikhonov regularization weight of the Trainer. agent_alpha: Optional; LinUCB exploration parameter that multiplies the confidence intervals of the Trainer. """ # Run the Ingester component. ingest_task = ingest_op( project_id=project_id, bigquery_table_id=bigquery_table_id, bigquery_max_rows=bigquery_max_rows, tfrecord_file=TFRECORD_FILE, ) # Run the Trainer component and submit custom job to Vertex AI. # Convert the train_op component into a Vertex AI Custom Job pre-built component custom_job_training_op = utils.create_custom_training_job_op_from_component( component_spec=train_op, replica_count=TRAINING_REPLICA_COUNT, machine_type=TRAINING_MACHINE_TYPE, accelerator_type=TRAINING_ACCELERATOR_TYPE, accelerator_count=TRAINING_ACCELERATOR_COUNT, ) train_task = custom_job_training_op( training_artifacts_dir=training_artifacts_dir, tfrecord_file=ingest_task.outputs["tfrecord_file"], num_epochs=num_epochs, rank_k=rank_k, num_actions=num_actions, tikhonov_weight=tikhonov_weight, agent_alpha=agent_alpha, project=PROJECT_ID, location=REGION, ) # Run the Deployer components. # Upload the trained policy as a model. model_upload_op = gcc_aip.ModelUploadOp( project=project_id, display_name=TRAINED_POLICY_DISPLAY_NAME, artifact_uri=train_task.outputs["training_artifacts_dir"], serving_container_image_uri=f"gcr.io/{PROJECT_ID}/{PREDICTION_CONTAINER}:latest", ) # Create a Vertex AI endpoint. (This operation can occur in parallel with # the Generator, Ingester, Trainer components.) 
endpoint_create_op = gcc_aip.EndpointCreateOp( project=project_id, display_name=ENDPOINT_DISPLAY_NAME ) # Deploy the uploaded, trained policy to the created endpoint. (This operation # has to occur after both model uploading and endpoint creation complete.) gcc_aip.ModelDeployOp( endpoint=endpoint_create_op.outputs["endpoint"], model=model_upload_op.outputs["model"], deployed_model_display_name=TRAINED_POLICY_DISPLAY_NAME, dedicated_resources_machine_type=ENDPOINT_MACHINE_TYPE, dedicated_resources_accelerator_type=ENDPOINT_ACCELERATOR_TYPE, dedicated_resources_accelerator_count=ENDPOINT_ACCELERATOR_COUNT, dedicated_resources_min_replica_count=ENDPOINT_REPLICA_COUNT, ) # Compile the authored pipeline. compiler.Compiler().compile(pipeline_func=pipeline, package_path=PIPELINE_SPEC_PATH) # Create a Vertex AI client. api_client = AIPlatformClient(project_id=PROJECT_ID, region=REGION) # Schedule a recurring pipeline. response = api_client.create_schedule_from_job_spec( job_spec_path=PIPELINE_SPEC_PATH, schedule=TRIGGER_SCHEDULE, parameter_values={ # Pipeline configs "project_id": PROJECT_ID, "training_artifacts_dir": TRAINING_ARTIFACTS_DIR, # BigQuery config "bigquery_table_id": BIGQUERY_TABLE_ID, }, ) response["name"]
community-content/tf_agents_bandits_movie_recommendation_with_kfp_and_vertex_sdk/mlops_pipeline_tf_agents_bandits_movie_recommendation/mlops_pipeline_tf_agents_bandits_movie_recommendation.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Cleaning up To clean up all Google Cloud resources used in this project, you can delete the Google Cloud project you used for the tutorial. Otherwise, you can delete the individual resources you created in this tutorial (you also need to clean up other resources that are harder to delete here, such as all or part of the data in BigQuery, the recurring pipeline and its Scheduler job, and the uploaded policy/model):
# Delete endpoint resource. ! gcloud ai endpoints delete $ENDPOINT_ID --quiet --region $REGION # Delete Pub/Sub topics. ! gcloud pubsub topics delete $SIMULATOR_PUBSUB_TOPIC --quiet ! gcloud pubsub topics delete $LOGGER_PUBSUB_TOPIC --quiet # Delete Cloud Functions. ! gcloud functions delete $SIMULATOR_CLOUD_FUNCTION --quiet ! gcloud functions delete $LOGGER_CLOUD_FUNCTION --quiet # Delete Scheduler job. ! gcloud scheduler jobs delete $SIMULATOR_SCHEDULER_JOB --quiet # Delete Cloud Storage objects that were created. ! gsutil -m rm -r $PIPELINE_ROOT ! gsutil -m rm -r $TRAINING_ARTIFACTS_DIR
community-content/tf_agents_bandits_movie_recommendation_with_kfp_and_vertex_sdk/mlops_pipeline_tf_agents_bandits_movie_recommendation/mlops_pipeline_tf_agents_bandits_movie_recommendation.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Once we've grabbed the "Feature Collection" dataset, we can request a subset of the data:
# Can safely ignore the warnings ncss = metar_dataset.subset()
notebooks/Surface_Data/Surface Data with Siphon and MetPy.ipynb
Unidata/unidata-python-workshop
mit
What variables do we have available?
ncss.variables
notebooks/Surface_Data/Surface Data with Siphon and MetPy.ipynb
Unidata/unidata-python-workshop
mit
<a href="#top">Top</a> <hr style="height:2px;"> <a name="stationplot"></a> 2. Making a station plot Make new NCSS query Request data closest to a time
from datetime import datetime query = ncss.query() query.lonlat_box(north=34, south=24, east=-80, west=-90) query.time(datetime(2017, 9, 10, 12)) query.variables('temperature', 'dewpoint', 'altimeter_setting', 'wind_speed', 'wind_direction', 'sky_coverage') query.accept('csv') # Get the data data = ncss.get_data(query) data
notebooks/Surface_Data/Surface Data with Siphon and MetPy.ipynb
Unidata/unidata-python-workshop
mit
Now we need to pull apart the data and perform some modifications, like converting winds to components and converting sky coverage percent to codes (oktas) suitable for plotting.
import numpy as np import metpy.calc as mpcalc from metpy.units import units # Since we used the CSV data, this is just a dictionary of arrays lats = data['latitude'] lons = data['longitude'] tair = data['temperature'] dewp = data['dewpoint'] alt = data['altimeter_setting'] # Convert wind to components u, v = mpcalc.wind_components(data['wind_speed'] * units.knots, data['wind_direction'] * units.degree) # Need to handle missing (NaN) and convert to proper code cloud_cover = 8 * data['sky_coverage'] / 100. cloud_cover[np.isnan(cloud_cover)] = 10 cloud_cover = cloud_cover.astype(int) # These come back as bytes instead of strings, so decode them stid = np.array([s.tobytes().decode() for s in data['station']])
notebooks/Surface_Data/Surface Data with Siphon and MetPy.ipynb
Unidata/unidata-python-workshop
mit
Create the map using cartopy and MetPy! One way to create station plots with MetPy is to create an instance of StationPlot and call various plot methods, like plot_parameter, to plot arrays of data at locations relative to the center point. In addition to plotting values, StationPlot has support for plotting text strings, symbols, and plotting values using custom formatting. Plotting symbols involves mapping integer values to various custom font glyphs in our custom weather symbols font. MetPy provides mappings for converting WMO codes to their appropriate symbol. The sky_cover function below is one such mapping.
%matplotlib inline import cartopy.crs as ccrs import cartopy.feature as cfeature import matplotlib.pyplot as plt from metpy.plots import StationPlot, sky_cover # Set up a plot with map features fig = plt.figure(figsize=(12, 12)) proj = ccrs.Stereographic(central_longitude=-95, central_latitude=35) ax = fig.add_subplot(1, 1, 1, projection=proj) ax.add_feature(cfeature.STATES, edgecolor='black') ax.coastlines(resolution='50m') ax.gridlines() # Create a station plot pointing to an Axes to draw on as well as the location of points stationplot = StationPlot(ax, lons, lats, transform=ccrs.PlateCarree(), fontsize=12) stationplot.plot_parameter('NW', tair, color='red') # Add wind barbs stationplot.plot_barb(u, v) # Plot the sky cover symbols in the center. We give it the integer code values that # should be plotted, as well as a mapping class that can convert the integer values # to the appropriate font glyph. stationplot.plot_symbol('C', cloud_cover, sky_cover)
notebooks/Surface_Data/Surface Data with Siphon and MetPy.ipynb
Unidata/unidata-python-workshop
mit
Notice how there are so many overlapping stations? There's a utility in MetPy to help with that: reduce_point_density. This returns a mask we can apply to data to filter the points.
# Project points so that we're filtering based on the way the stations are laid out on the map proj = ccrs.Stereographic(central_longitude=-95, central_latitude=35) xy = proj.transform_points(ccrs.PlateCarree(), lons, lats) # Reduce point density so that there's only one point within a 200km circle mask = mpcalc.reduce_point_density(xy, 200000)
notebooks/Surface_Data/Surface Data with Siphon and MetPy.ipynb
Unidata/unidata-python-workshop
mit
Now we just plot with arr[mask] for every arr of data we use in plotting.
# Set up a plot with map features fig = plt.figure(figsize=(12, 12)) ax = fig.add_subplot(1, 1, 1, projection=proj) ax.add_feature(cfeature.STATES, edgecolor='black') ax.coastlines(resolution='50m') ax.gridlines() # Create a station plot pointing to an Axes to draw on as well as the location of points stationplot = StationPlot(ax, lons[mask], lats[mask], transform=ccrs.PlateCarree(), fontsize=12) stationplot.plot_parameter('NW', tair[mask], color='red') stationplot.plot_barb(u[mask], v[mask]) stationplot.plot_symbol('C', cloud_cover[mask], sky_cover)
notebooks/Surface_Data/Surface Data with Siphon and MetPy.ipynb
Unidata/unidata-python-workshop
mit
More examples for MetPy Station Plots: - MetPy Examples - MetPy Symbol list <div class="alert alert-success"> <b>EXERCISE</b>: <ul> <li>Modify the station plot (reproduced below) to include dewpoint, altimeter setting, as well as the station id. The station id can be added using the `plot_text` method on `StationPlot`.</li> <li>Re-mask the data to be a bit more finely spaced, say: 75km</li> <li>Bonus Points: Use the `formatter` argument to `plot_parameter` to only plot the 3 significant digits of altimeter setting. (Tens, ones, tenths)</li> </ul> </div>
# Use reduce_point_density # Set up a plot with map features fig = plt.figure(figsize=(12, 12)) ax = fig.add_subplot(1, 1, 1, projection=proj) ax.add_feature(cfeature.STATES, edgecolor='black') ax.coastlines(resolution='50m') ax.gridlines() # Create a station plot pointing to an Axes to draw on as well as the location of points # Plot dewpoint # Plot altimeter setting--formatter can take a function that formats values # Plot station id # %load solutions/reduce_density.py
notebooks/Surface_Data/Surface Data with Siphon and MetPy.ipynb
Unidata/unidata-python-workshop
mit
<a href="#top">Top</a> <hr style="height:2px;"> <a name="timeseries"></a> 3. Time Series request and plot Let's say we want the past couple of days' worth of data... ...for Miami (i.e. the lat/lon used below; the station turns out to be KMIA) ...for the variables altimeter setting, temperature, dewpoint, wind direction, and wind speed
from datetime import timedelta # define the time range we are interested in end_time = datetime(2017, 9, 12, 0) start_time = end_time - timedelta(days=2) # build the query query = ncss.query() query.lonlat_point(-80.25, 25.8) query.time_range(start_time, end_time) query.variables('altimeter_setting', 'temperature', 'dewpoint', 'wind_direction', 'wind_speed') query.accept('csv')
notebooks/Surface_Data/Surface Data with Siphon and MetPy.ipynb
Unidata/unidata-python-workshop
mit
Let's get the data!
data = ncss.get_data(query) print(list(data.keys()))
notebooks/Surface_Data/Surface Data with Siphon and MetPy.ipynb
Unidata/unidata-python-workshop
mit
What station did we get?
station_id = data['station'][0].tobytes() print(station_id)
notebooks/Surface_Data/Surface Data with Siphon and MetPy.ipynb
Unidata/unidata-python-workshop
mit
That indicates that we have a Python bytes object, containing the 0-255 values corresponding to 'K', 'M', 'I', 'A'. We can decode those bytes into a string:
station_id = station_id.decode('ascii') print(station_id)
notebooks/Surface_Data/Surface Data with Siphon and MetPy.ipynb
Unidata/unidata-python-workshop
mit
Let's get the time into datetime objects. We see we have an array with byte strings in it, like station id above.
data['time']
notebooks/Surface_Data/Surface Data with Siphon and MetPy.ipynb
Unidata/unidata-python-workshop
mit
So we can use a list comprehension to turn this into a list of datetime objects:
time = [datetime.strptime(s.decode('ascii'), '%Y-%m-%dT%H:%M:%SZ') for s in data['time']]
notebooks/Surface_Data/Surface Data with Siphon and MetPy.ipynb
Unidata/unidata-python-workshop
mit
Now for the obligatory time series plot...
from matplotlib.dates import DateFormatter, AutoDateLocator fig, ax = plt.subplots(figsize=(10, 6)) ax.plot(time, data['wind_speed'], color='tab:blue') ax.set_title(f'Site: {station_id} Date: {time[0]:%Y/%m/%d}') ax.set_xlabel('Hour of day') ax.set_ylabel('Wind Speed') ax.grid(True) # Improve on the default ticking locator = AutoDateLocator() hoursFmt = DateFormatter('%H') ax.xaxis.set_major_locator(locator) ax.xaxis.set_major_formatter(hoursFmt)
notebooks/Surface_Data/Surface Data with Siphon and MetPy.ipynb
Unidata/unidata-python-workshop
mit
<div class="alert alert-success"> <b>EXERCISE</b>: <ul> <li>Pick a different location</li> <li>Plot temperature and dewpoint together on the same plot</li> </ul> </div>
# Your code goes here # %load solutions/time_series.py
notebooks/Surface_Data/Surface Data with Siphon and MetPy.ipynb
Unidata/unidata-python-workshop
mit
Download a few Micro-C datasets, processed using distiller (https://github.com/mirnylab/distiller-nf), binned to 2048bp, and iteratively corrected.
if not os.path.exists('./data/coolers'): os.mkdir('./data/coolers') if not os.path.isfile('./data/coolers/HFF_hg38_4DNFIP5EUOFX.mapq_30.2048.cool'): subprocess.call('curl -o ./data/coolers/HFF_hg38_4DNFIP5EUOFX.mapq_30.2048.cool'+ ' https://storage.googleapis.com/basenji_hic/tutorials/coolers/HFF_hg38_4DNFIP5EUOFX.mapq_30.2048.cool', shell=True) subprocess.call('curl -o ./data/coolers/H1hESC_hg38_4DNFI1O6IL1Q.mapq_30.2048.cool'+ ' https://storage.googleapis.com/basenji_hic/tutorials/coolers/H1hESC_hg38_4DNFI1O6IL1Q.mapq_30.2048.cool', shell=True) ls ./data/coolers/
manuscripts/akita/tutorial.ipynb
calico/basenji
apache-2.0
Write out these cooler files and labels to a samples table.
lines = [['index','identifier','file','clip','sum_stat','description']] lines.append(['0', 'HFF', './data/coolers/HFF_hg38_4DNFIP5EUOFX.mapq_30.2048.cool', '2', 'sum', 'HFF']) lines.append(['1', 'H1hESC', './data/coolers/H1hESC_hg38_4DNFI1O6IL1Q.mapq_30.2048.cool', '2', 'sum', 'H1hESC']) samples_out = open('data/microc_cools.txt', 'w') for line in lines: print('\t'.join(line), file=samples_out) samples_out.close()
manuscripts/akita/tutorial.ipynb
calico/basenji
apache-2.0
Next, we want to choose genomic sequences to form batches for stochastic gradient descent, divide them into training/validation/test sets, and construct TFRecords to provide to downstream programs. The script akita_data.py implements this procedure. The most relevant options here are:

| Option/Argument | Value | Note |
|:---|:---|:---|
| --sample | 0.1 | Down-sample the genome to 10% to speed things up here. |
| -g | data/hg38_gaps_binsize2048_numconseq10.bed | Dodge large-scale unmappable regions determined from filtered cooler bins. |
| -l | 1048576 | Sequence length. |
| --crop | 65536 | Crop edges of matrix so loss is only computed over the central region. |
| --local | True | Run locally, as opposed to on a SLURM scheduler. |
| -o | data/1m | Output directory. |
| -p | 8 | Use multiple concurrent processes to read/write. |
| -t | .1 | Hold out 10% of sequences for testing. |
| -v | .1 | Hold out 10% of sequences for validation. |
| -w | 2048 | Pool the nucleotide-resolution values to 2048 bp bins. |
| fasta_file | data/hg38.ml.fa | FASTA file to extract sequences from. |
| targets_file | data/microc_cools.txt | Target table with cooler paths. |

Note: make sure to export BASENJIDIR as outlined in the basenji installation tips (https://github.com/calico/basenji/tree/master/#installation).
if os.path.isdir('data/1m'): shutil.rmtree('data/1m') ! akita_data.py --sample 0.05 -g ./data/hg38_gaps_binsize2048_numconseq10.bed -l 1048576 --crop 65536 --local -o ./data/1m --as_obsexp -p 8 -t .1 -v .1 -w 2048 --snap 2048 --stride_train 262144 --stride_test 32768 ./data/hg38.ml.fa ./data/microc_cools.txt
manuscripts/akita/tutorial.ipynb
calico/basenji
apache-2.0
The data for training is now saved in data/1m as tfrecords (for training, validation, and testing), where contigs.bed contains the original large contiguous regions from which training sequences were taken, and sequences.bed contains the train/valid/test sequences.
! cut -f4 data/1m/sequences.bed | sort | uniq -c ! head -n3 data/1m/sequences.bed
manuscripts/akita/tutorial.ipynb
calico/basenji
apache-2.0
Now train a model! (Note: for training production-level models, please remove the --sample option when generating tfrecords)
# specify model parameters json to have only two targets params_file = './params.json' with open(params_file) as params_file: params_tutorial = json.load(params_file) params_tutorial['model']['head_hic'][-1]['units'] =2 with open('./data/1m/params_tutorial.json','w') as params_tutorial_file: json.dump(params_tutorial,params_tutorial_file) ### note that training with default parameters requires GPU with >12Gb RAM ### ! akita_train.py -k -o ./data/1m/train_out/ ./data/1m/params_tutorial.json ./data/1m/
manuscripts/akita/tutorial.ipynb
calico/basenji
apache-2.0
Magics! Topics covered: % and %% magics, interact, embedding images, embedding links and YouTube videos, and linking notebooks. Check out http://matplotlib.org/gallery.html and select your favorite.
%%bash for num in {1..5} do for infile in *; do echo $num $infile done wc $infile done print("hi") !pwd !ping google.com this_is_magic = "Can you believe you can pass variables and strings like this?" hey = !echo $this_is_magic hey
notebooks/04-More_basics.ipynb
balmandhunter/jupyter-tips-and-tricks
mit
Numpy If you have arrays of numbers, use numpy or pandas (built on numpy) to represent the data. Tons of very fast underlying code.
x = np.arange(10000) print(x) # smart printing print(x[0]) # first element print(x[-1]) # last element print(x[0:5]) # first 5 elements (also x[:5]) print(x[:]) # "Everything" print(x[-5:]) # last five elements print(x[-5:-2]) print(x[-5:-1]) # not final value -- not inclusive on right x = np.random.randint(5, 5000, (3, 5)) x np.sum(x) x.sum() np.sum(x) np.sum(x, axis=0) np.sum(x, axis=1) x.sum(axis=1) # Multi dimension array slice with a comma x[:, 2] y = np.linspace(10, 20, 11) y np.linspace? np.linspace() # shift-tab; shift-tab-tab np. def does_it(first=x, second=y): """This is my doc""" pass y[[3, 5, 7]] does_it() num = 3000 x = np.linspace(1.0, 300.0, num) y = np.random.rand(num) z = np.sin(x) np.savetxt("example.txt", np.transpose((x, y, z))) %less example.txt !wc example.txt !head example.txt #Not a good idea a = [] b = [] for line in open("example.txt", 'r'): a.append(line[0]) b.append(line[2]) a[:10] # Whoops! a = [] b = [] for line in open("example.txt", 'r'): line = line.split() a.append(line[0]) b.append(line[2]) a[:10] # Strings! a = [] b = [] for line in open("example.txt", 'r'): line = line.split() a.append(float(line[0])) b.append(float(line[2])) a[:10] # Lists! # Do this! a, b = np.loadtxt("example.txt", unpack=True, usecols=(0,2)) a
notebooks/04-More_basics.ipynb
balmandhunter/jupyter-tips-and-tricks
mit
Matplotlib and Numpy
from numpy.random import randn num = 50 x = np.linspace(2.5, 300, num) y = randn(num) plt.scatter(x, y) y > 1 y[y > 1] y[(y < 1) & (y > -1)] plt.scatter(x, y, c='b', s=50) plt.scatter(x[(y < 1) & (y > -1)], y[(y < 1) & (y > -1)], c='r', s=50) y[~((y < 1) & (y > -1))] = 1.0 plt.scatter(x, y, c='b') plt.scatter(x, np.clip(y, -0.5, 0.5), color='red') num = 350 slope = 0.3 x = randn(num) * 50. + 150.0 y = randn(num) * 5 + x * slope plt.scatter(x, y, c='b') # plt.scatter(x[(y < 1) & (y > -1)], y[(y < 1) & (y > -1)], c='r') # np.argsort, np.sort, complicated index slicing dframe = pd.DataFrame({'x': x, 'y': y}) g = sns.jointplot('x', 'y', data=dframe, kind="reg")
notebooks/04-More_basics.ipynb
balmandhunter/jupyter-tips-and-tricks
mit
Grab Python version of ggplot http://ggplot.yhathq.com/
from ggplot import ggplot, aes, geom_line, stat_smooth, geom_dotplot, geom_point ggplot(aes(x='x', y='y'), data=dframe) + geom_point() + stat_smooth(colour='blue', span=0.2)
notebooks/04-More_basics.ipynb
balmandhunter/jupyter-tips-and-tricks
mit
Convolutional Neural Network (CNN) <table class="tfo-notebook-buttons" align="left"> <td> <a target="_blank" href="https://www.tensorflow.org/tutorials/images/cnn"> <img src="https://www.tensorflow.org/images/tf_logo_32px.png" /> View on TensorFlow.org</a> </td> <td> <a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/tutorials/images/cnn.ipynb"> <img src="https://www.tensorflow.org/images/colab_logo_32px.png" /> Run in Google Colab</a> </td> <td> <a target="_blank" href="https://github.com/tensorflow/docs/blob/master/site/en/tutorials/images/cnn.ipynb"> <img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" /> View source on GitHub</a> </td> <td> <a href="https://storage.googleapis.com/tensorflow_docs/docs/site/en/tutorials/images/cnn.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a> </td> </table> This tutorial demonstrates training a simple Convolutional Neural Network (CNN) to classify CIFAR images. Because this tutorial uses the Keras Sequential API, creating and training your model will take just a few lines of code. Import TensorFlow
import tensorflow as tf from tensorflow.keras import datasets, layers, models import matplotlib.pyplot as plt
site/en/tutorials/images/cnn.ipynb
tensorflow/docs
apache-2.0
Download and prepare the CIFAR10 dataset The CIFAR10 dataset contains 60,000 color images in 10 classes, with 6,000 images in each class. The dataset is divided into 50,000 training images and 10,000 testing images. The classes are mutually exclusive and there is no overlap between them.
(train_images, train_labels), (test_images, test_labels) = datasets.cifar10.load_data() # Normalize pixel values to be between 0 and 1 train_images, test_images = train_images / 255.0, test_images / 255.0
site/en/tutorials/images/cnn.ipynb
tensorflow/docs
apache-2.0
Verify the data To verify that the dataset looks correct, let's plot the first 25 images from the training set and display the class name below each image:
class_names = ['airplane', 'automobile', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck'] plt.figure(figsize=(10,10)) for i in range(25): plt.subplot(5,5,i+1) plt.xticks([]) plt.yticks([]) plt.grid(False) plt.imshow(train_images[i]) # The CIFAR labels happen to be arrays, # which is why you need the extra index plt.xlabel(class_names[train_labels[i][0]]) plt.show()
site/en/tutorials/images/cnn.ipynb
tensorflow/docs
apache-2.0
Create the convolutional base The 6 lines of code below define the convolutional base using a common pattern: a stack of Conv2D and MaxPooling2D layers. As input, a CNN takes tensors of shape (image_height, image_width, color_channels), ignoring the batch size. If you are new to these dimensions, color_channels refers to (R,G,B). In this example, you will configure your CNN to process inputs of shape (32, 32, 3), which is the format of CIFAR images. You can do this by passing the argument input_shape to your first layer.
model = models.Sequential() model.add(layers.Conv2D(32, (3, 3), activation='relu', input_shape=(32, 32, 3))) model.add(layers.MaxPooling2D((2, 2))) model.add(layers.Conv2D(64, (3, 3), activation='relu')) model.add(layers.MaxPooling2D((2, 2))) model.add(layers.Conv2D(64, (3, 3), activation='relu'))
site/en/tutorials/images/cnn.ipynb
tensorflow/docs
apache-2.0
Let's display the architecture of your model so far:
model.summary()
site/en/tutorials/images/cnn.ipynb
tensorflow/docs
apache-2.0
Above, you can see that the output of every Conv2D and MaxPooling2D layer is a 3D tensor of shape (height, width, channels). The width and height dimensions tend to shrink as you go deeper in the network. The number of output channels for each Conv2D layer is controlled by the first argument (e.g., 32 or 64). Typically, as the width and height shrink, you can afford (computationally) to add more output channels in each Conv2D layer. Add Dense layers on top To complete the model, you will feed the last output tensor from the convolutional base (of shape (4, 4, 64)) into one or more Dense layers to perform classification. Dense layers take vectors as input (which are 1D), while the current output is a 3D tensor. First, you will flatten (or unroll) the 3D output to 1D, then add one or more Dense layers on top. CIFAR has 10 output classes, so you use a final Dense layer with 10 outputs.
model.add(layers.Flatten()) model.add(layers.Dense(64, activation='relu')) model.add(layers.Dense(10))
site/en/tutorials/images/cnn.ipynb
tensorflow/docs
apache-2.0
Here's the complete architecture of your model:
model.summary()
site/en/tutorials/images/cnn.ipynb
tensorflow/docs
apache-2.0
The network summary shows that (4, 4, 64) outputs were flattened into vectors of shape (1024) before going through two Dense layers. Compile and train the model
model.compile(optimizer='adam', loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True), metrics=['accuracy']) history = model.fit(train_images, train_labels, epochs=10, validation_data=(test_images, test_labels))
site/en/tutorials/images/cnn.ipynb
tensorflow/docs
apache-2.0
Evaluate the model
plt.plot(history.history['accuracy'], label='accuracy') plt.plot(history.history['val_accuracy'], label = 'val_accuracy') plt.xlabel('Epoch') plt.ylabel('Accuracy') plt.ylim([0.5, 1]) plt.legend(loc='lower right') test_loss, test_acc = model.evaluate(test_images, test_labels, verbose=2) print(test_acc)
site/en/tutorials/images/cnn.ipynb
tensorflow/docs
apache-2.0
Turing machine computation Tape We will represent the tape as a list of tape symbols and we will represent tape symbols as Python strings. The string ' ' represents the blank symbol. The string '|>' represents the start symbol, which indicates the beginning of the tape. States We will also encode states as Python strings. The string 'start' represents the start state. The strings 'accept', 'reject', and 'halt' represent final states of the machine, which indicate acceptance, rejection, and halting, respectively. Simulation The following function simulates a given Turing machine for a given number of steps on a given input
def run(transitions, input, steps): """simulate Turing machine for the given number of steps and the given input""" # convert input from string to list of symbols # we use '|>' as a symbol to indicate the beginning of the tape input = ['|>'] + list(input) + [' '] # sanitize transitions for 'accept' and 'reject' states and for symbol '|>' transitions = sanitize_transitions(transitions) # create initial configuration c = Configuration(state='start', head=1, tape=input) for i in range(0, steps): # read tape content under head current = c.state read = c.tape[c.head] # lookup transition based on state and read symbol next, write, move = transitions(current, read) # update configuration c.state = next c.tape[c.head] = write c.head += move if c.head >= len(c.tape): c.tape += [' '] # return final configuration return c
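The run function above relies on a Configuration object and a sanitize_transitions helper that are defined elsewhere in the notebook and are not part of this excerpt. A minimal stand-in for Configuration, assuming it only needs to bundle the mutable state, head position, and tape, could look like this:

```python
from dataclasses import dataclass

@dataclass
class Configuration:
    """Mutable snapshot of the machine: current state, head position, and tape contents."""
    state: str
    head: int
    tape: list
```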
turing/turing.ipynb
dsteurer/cs4814fa15
mit
The following function checks that the transition function satisfies some simple syntactic requirements (don't move to the left of the start symbol, don't remove or add start symbols, don't change state after accepting, rejecting, or halting).
def check_transitions(transitions, states, alphabet): transitions = sanitize_transitions(transitions) for current in states: for read in alphabet: next, write, move = transitions(current, read) # we either stay in place or move one position # to the left or right assert(move in [-1,0,1]) # if we read the begin symbol, if read == '|>': # we need to write it back assert(write == '|>') # we need to move to the right assert(move == 1) else: # we cannot write the begin symbol assert(write != '|>') # if we are in one of the final states if current in ['accept', 'reject', 'halt']: # we cannot change to a different state assert(next == current) print("transition checks passed")
turing/turing.ipynb
dsteurer/cs4814fa15
mit
Examples Copy machine The following Turing machine copies its input, i.e., it computes the function $f(x)=xx$. The actual implementation uses different versions of the '0' and '1' symbol (called '0-read', '0-write' and '1-read', '1-write') in the two copies of the string $x$. We could replace those by regular '0' and '1' symbols by sweeping once more over the tape before the end of the computation.
def transitions_copy(current, read): if read == '|>': return 'start', read, 1 elif current == 'start': if 'write' not in read: return read + '-write', read + '-read', 1 else: return 'accept', read, 1 elif 'write' in current: if read != ' ': return current, read, 1 else: return 'rewind', current, -1 elif current == 'rewind': if 'read' not in read: return current, read, -1 else: return 'start', read, 1
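For illustration, the extra sweep mentioned above could be written as one more transition function in the same style (a sketch, not part of the original notebook): it walks right across the tape once, rewrites the marked symbols as plain bits, and accepts at the first blank.

```python
def transitions_cleanup(current, read):
    """Sketch of a final left-to-right sweep that strips the '-read'/'-write' markers."""
    if read == '|>':
        return current, read, 1
    elif current in ['accept', 'reject', 'halt']:
        return current, read, 1
    elif read in ['0-read', '0-write']:
        return current, '0', 1
    elif read in ['1-read', '1-write']:
        return current, '1', 1
    elif read == ' ':
        return 'accept', read, 1
    else:
        return current, read, 1
```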
turing/turing.ipynb
dsteurer/cs4814fa15
mit
Here is the full transition function table of the machine:
transitions_table(transitions_copy, ['start', '0-write', '1-write', 'rewind'], ['0', '1', '0-read', '1-read', '0-write', '1-write'])
turing/turing.ipynb
dsteurer/cs4814fa15
mit
Here is an interactive simulation of the copy Turing machine (requires that ipython notebook is run locally). You can either click on the simulate button to view the computation during a given range of steps or you can drag the current step slider to view the configuration of the machine at a particular step. (If you click on the current step slider, you can also change it using the arrow keys.)
simulate(transitions_copy, input='10011', unary=False)
turing/turing.ipynb
dsteurer/cs4814fa15
mit
Power-of-2 machine The following Turing machine determines if the input is the unary encoding of a power of 2. Furthermore, given any string $1^n$, it outputs a string of the form $\{0,1\}^n 2^i$, where $i$ is the largest number such that $2^i$ divides $n$.
def transitions_power(current,read): if read == '|>': return 'start', read, 1; elif current == 'rewind': return current, read, -1 elif read == 'x': return current, read, 1 elif current == 'start': if read != '1': return 'reject', read, 1 else: return 'start-even', read, 1 elif 'even' in current and read == '1': return 'odd', 'x', 1 elif current == 'odd' and read == '1': return 'even', read, 1 elif current == 'odd': if read == ' ': return 'rewind', '2', -1 else: return current, read, 1 elif current == 'start-even' and read != '1': return 'accept', read, -1 elif current == 'even' and read != '1': return 'reject', read, -1
turing/turing.ipynb
dsteurer/cs4814fa15
mit
Here is the full transition function table of the Turing machine:
transitions_table(transitions_power, ['start', 'start-even', 'even', 'odd', 'rewind'], ['0', '1', 'x', ' ', '|>'])
turing/turing.ipynb
dsteurer/cs4814fa15
mit
Here is an interactive simulation of the power Turing machine (requires that ipython notebook is run locally). You can either click on the simulate button to view the computation during a given range of steps or you can drag the current step slider to view the configuration of the machine at a particular step. (If you click on the current step slider, you can also change it using the arrow keys.)
simulate(transitions_power, input_unary=16, step_to=200, unary=True)
turing/turing.ipynb
dsteurer/cs4814fa15
mit
Whitening evoked data with a noise covariance Evoked data are loaded and then whitened using a given noise covariance matrix. It's an excellent quality check to see if baseline signals match the assumption of Gaussian white noise during the baseline period. Covariance estimation and diagnostic plots are based on :footcite:EngemannGramfort2015. References .. footbibliography::
# Authors: Alexandre Gramfort <alexandre.gramfort@inria.fr> # Denis A. Engemann <denis.engemann@gmail.com> # # License: BSD-3-Clause import mne from mne import io from mne.datasets import sample from mne.cov import compute_covariance print(__doc__)
stable/_downloads/64e3b6395952064c08d4ff33d6236ff3/evoked_whitening.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Set parameters
data_path = sample.data_path() meg_path = data_path / 'MEG' / 'sample' raw_fname = meg_path / 'sample_audvis_filt-0-40_raw.fif' event_fname = meg_path / 'sample_audvis_filt-0-40_raw-eve.fif' raw = io.read_raw_fif(raw_fname, preload=True) raw.filter(1, 40, n_jobs=1, fir_design='firwin') raw.info['bads'] += ['MEG 2443'] # bads + 1 more events = mne.read_events(event_fname) # let's look at rare events, button presses event_id, tmin, tmax = 2, -0.2, 0.5 reject = dict(mag=4e-12, grad=4000e-13, eeg=80e-6) epochs = mne.Epochs(raw, events, event_id, tmin, tmax, picks=('meg', 'eeg'), baseline=None, reject=reject, preload=True) # Uncomment next line to use fewer samples and study regularization effects # epochs = epochs[:20] # For your data, use as many samples as you can!
stable/_downloads/64e3b6395952064c08d4ff33d6236ff3/evoked_whitening.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Compute covariance using automated regularization
method_params = dict(diagonal_fixed=dict(mag=0.01, grad=0.01, eeg=0.01)) noise_covs = compute_covariance(epochs, tmin=None, tmax=0, method='auto', return_estimators=True, verbose=True, n_jobs=1, projs=None, rank=None, method_params=method_params) # With "return_estimator=True" all estimated covariances sorted # by log-likelihood are returned. print('Covariance estimates sorted from best to worst') for c in noise_covs: print("%s : %s" % (c['method'], c['loglik']))
stable/_downloads/64e3b6395952064c08d4ff33d6236ff3/evoked_whitening.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Show the evoked data:
evoked = epochs.average() evoked.plot(time_unit='s') # plot evoked response
stable/_downloads/64e3b6395952064c08d4ff33d6236ff3/evoked_whitening.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
We can then show whitening for our various noise covariance estimates. Here we should look to see if baseline signals match the assumption of Gaussian white noise: we expect values centered at 0, within 2 standard deviations, for 95% of the time points. For the global field power (GFP) we expect a value of 1.
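In formula form (our notation, restating the expectation above): whitening applies the regularized inverse matrix square root of the estimated noise covariance $\hat{C}$ to the evoked data,

$$\tilde{x}(t) = \hat{C}^{-1/2}\, x(t).$$

If the baseline noise model holds, each whitened channel is approximately standard normal, so about 95% of baseline samples satisfy $|\tilde{x}_i(t)| \le 2$, and the whitened global field power $\mathrm{GFP}(t) = \frac{1}{N}\sum_{i=1}^{N} \tilde{x}_i(t)^2$ stays close to 1.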
evoked.plot_white(noise_covs, time_unit='s')
stable/_downloads/64e3b6395952064c08d4ff33d6236ff3/evoked_whitening.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Example Model Some useful utilities. Remember that our image data is initially N x H x W x C, where: * N is the number of datapoints * H is the height of each image in pixels * W is the width of each image in pixels * C is the number of channels (usually 3: R, G, B) This is the right way to represent the data when we are doing something like a 2D convolution, which needs spatial understanding of where the pixels are relative to each other. When we input image data into fully connected affine layers, however, we want each data example to be represented by a single vector -- it's no longer useful to segregate the different channels, rows, and columns of the data. The example model itself The first step to training your own model is defining its architecture. Here's an example of a convolutional neural network defined in TensorFlow -- try to understand what each line is doing, remembering that each layer is composed upon the previous layer. We haven't trained anything yet - that'll come next - for now, we want you to understand how everything gets set up. In that example, you see 2D convolutional layers (Conv2d), ReLU activations, and fully-connected layers (Linear). You also see the Hinge loss function, and the Adam optimizer being used. Make sure you understand why the parameters of the Linear layer are 5408 and 10. TensorFlow Details In TensorFlow, much like in our previous notebooks, we'll first specifically initialize our variables, and then our network model.
# clear old variables tf.reset_default_graph() # setup input (e.g. the data that changes every batch) # The first dim is None, and gets sets automatically based on batch size fed in X = tf.placeholder(tf.float32, [None, 32, 32, 3]) y = tf.placeholder(tf.int64, [None]) is_training = tf.placeholder(tf.bool) def simple_model(X,y): # define our weights (e.g. init_two_layer_convnet) # setup variables Wconv1 = tf.get_variable("Wconv1", shape=[7, 7, 3, 32]) bconv1 = tf.get_variable("bconv1", shape=[32]) W1 = tf.get_variable("W1", shape=[5408, 10]) b1 = tf.get_variable("b1", shape=[10]) # define our graph (e.g. two_layer_convnet) a1 = tf.nn.conv2d(X, Wconv1, strides=[1,2,2,1], padding='VALID') + bconv1 h1 = tf.nn.relu(a1) h1_flat = tf.reshape(h1,[-1,5408]) y_out = tf.matmul(h1_flat,W1) + b1 return y_out y_out = simple_model(X,y) # define our loss total_loss = tf.losses.hinge_loss(tf.one_hot(y,10),logits=y_out) mean_loss = tf.reduce_mean(total_loss) # define our optimizer optimizer = tf.train.AdamOptimizer(5e-4) # select optimizer and set learning rate train_step = optimizer.minimize(mean_loss)
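To see where the 5408 comes from, here is a quick check based on the shapes in the cell above: a 7x7 convolution with stride 2 and VALID padding over a 32x32 input produces 13x13 spatial outputs, and with 32 filters the flattened activation has 5408 entries (the 10 is just the number of classes).

```python
# Quick sanity check on the Linear layer's input size (shapes taken from the cell above).
H = 32               # input height/width
F, stride = 7, 2     # 7x7 filters, stride 2, VALID padding
out = (H - F) // stride + 1   # = 13 outputs per spatial dimension
flat = out * out * 32         # 32 filters -> 13 * 13 * 32 = 5408
print(flat)                   # 5408, matching the first dimension of W1
```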
cs231n/assignment/assignment2/TensorFlow.ipynb
gutouyu/cs231n
mit
TensorFlow supports many other layer types, loss functions, and optimizers - you will experiment with these next. Here's the official API documentation for these (if any of the parameters used above were unclear, this resource will also be helpful). Layers, Activations, Loss functions : https://www.tensorflow.org/api_guides/python/nn Optimizers: https://www.tensorflow.org/api_guides/python/train#Optimizers BatchNorm: https://www.tensorflow.org/api_docs/python/tf/layers/batch_normalization Training the model on one epoch While we have defined a graph of operations above, in order to execute TensorFlow Graphs, by feeding them input data and computing the results, we first need to create a tf.Session object. A session encapsulates the control and state of the TensorFlow runtime. For more information, see the TensorFlow Getting started guide. Optionally we can also specify a device context such as /cpu:0 or /gpu:0. For documentation on this behavior see this TensorFlow guide You should see a validation loss of around 0.4 to 0.6 and an accuracy of 0.30 to 0.35 below
def run_model(session, predict, loss_val, Xd, yd, epochs=1, batch_size=64, print_every=100, training=None, plot_losses=False): # have tensorflow compute accuracy correct_prediction = tf.equal(tf.argmax(predict,1), y) accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32)) # shuffle indicies train_indicies = np.arange(Xd.shape[0]) np.random.shuffle(train_indicies) training_now = training is not None # setting up variables we want to compute (and optimizing) # if we have a training function, add that to things we compute variables = [mean_loss,correct_prediction,accuracy] if training_now: variables[-1] = training # counter iter_cnt = 0 for e in range(epochs): # keep track of losses and accuracy correct = 0 losses = [] # make sure we iterate over the dataset once for i in range(int(math.ceil(Xd.shape[0]/batch_size))): # generate indicies for the batch start_idx = (i*batch_size)%Xd.shape[0] idx = train_indicies[start_idx:start_idx+batch_size] # create a feed dictionary for this batch feed_dict = {X: Xd[idx,:], y: yd[idx], is_training: training_now } # get batch size actual_batch_size = yd[idx].shape[0] # have tensorflow compute loss and correct predictions # and (if given) perform a training step loss, corr, _ = session.run(variables,feed_dict=feed_dict) # aggregate performance stats losses.append(loss*actual_batch_size) correct += np.sum(corr) # print every now and then if training_now and (iter_cnt % print_every) == 0: print("Iteration {0}: with minibatch training loss = {1:.3g} and accuracy of {2:.2g}"\ .format(iter_cnt,loss,np.sum(corr)/actual_batch_size)) iter_cnt += 1 total_correct = correct/Xd.shape[0] total_loss = np.sum(losses)/Xd.shape[0] print("Epoch {2}, Overall loss = {0:.3g} and accuracy of {1:.3g}"\ .format(total_loss,total_correct,e+1)) if plot_losses: plt.plot(losses) plt.grid(True) plt.title('Epoch {} Loss'.format(e+1)) plt.xlabel('minibatch number') plt.ylabel('minibatch loss') plt.show() return total_loss,total_correct with tf.Session() as sess: with tf.device("/cpu:0"): #"/cpu:0" or "/gpu:0" sess.run(tf.global_variables_initializer()) print('Training') run_model(sess,y_out,mean_loss,X_train,y_train,1,64,100,train_step,True) print('Validation') run_model(sess,y_out,mean_loss,X_val,y_val,1,64)
cs231n/assignment/assignment2/TensorFlow.ipynb
gutouyu/cs231n
mit
Training a specific model In this section, we're going to specify a model for you to construct. The goal here isn't to get good performance (that'll be next), but instead to get comfortable with understanding the TensorFlow documentation and configuring your own model. Using the code provided above as guidance, and using the following TensorFlow documentation, specify a model with the following architecture: 7x7 Convolutional Layer with 32 filters and stride of 1 ReLU Activation Layer Spatial Batch Normalization Layer (trainable parameters, with scale and centering) 2x2 Max Pooling layer with a stride of 2 Affine layer with 1024 output units ReLU Activation Layer Affine layer from 1024 input units to 10 outputs
# clear old variables tf.reset_default_graph() # define our input (e.g. the data that changes every batch) # The first dim is None, and gets sets automatically based on batch size fed in X = tf.placeholder(tf.float32, [None, 32, 32, 3]) y = tf.placeholder(tf.int64, [None]) is_training = tf.placeholder(tf.bool) # define model def complex_model(X,y,is_training): pass y_out = complex_model(X,y,is_training)
cs231n/assignment/assignment2/TensorFlow.ipynb
gutouyu/cs231n
mit
To make sure you're doing the right thing, use the following tool to check the dimensionality of your output (it should be 64 x 10, since our batches have size 64 and the output of the final affine layer should be 10, corresponding to our 10 classes):
# Now we're going to feed a random batch into the model # and make sure the output is the right size x = np.random.randn(64, 32, 32,3) with tf.Session() as sess: with tf.device("/cpu:0"): #"/cpu:0" or "/gpu:0" tf.global_variables_initializer().run() ans = sess.run(y_out,feed_dict={X:x,is_training:True}) %timeit sess.run(y_out,feed_dict={X:x,is_training:True}) print(ans.shape) print(np.array_equal(ans.shape, np.array([64, 10])))
cs231n/assignment/assignment2/TensorFlow.ipynb
gutouyu/cs231n
mit
You should see the following from the run above (64, 10) True GPU! Now, we're going to try and start the model under the GPU device, the rest of the code stays unchanged and all our variables and operations will be computed using accelerated code paths. However, if there is no GPU, we get a Python exception and have to rebuild our graph. On a dual-core CPU, you might see around 50-80ms/batch running the above, while the Google Cloud GPUs (run below) should be around 2-5ms/batch.
try: with tf.Session() as sess: with tf.device("/gpu:0") as dev: #"/cpu:0" or "/gpu:0" tf.global_variables_initializer().run() ans = sess.run(y_out,feed_dict={X:x,is_training:True}) %timeit sess.run(y_out,feed_dict={X:x,is_training:True}) except tf.errors.InvalidArgumentError: print("no gpu found, please use Google Cloud if you want GPU acceleration") # rebuild the graph # trying to start a GPU throws an exception # and also trashes the original graph tf.reset_default_graph() X = tf.placeholder(tf.float32, [None, 32, 32, 3]) y = tf.placeholder(tf.int64, [None]) is_training = tf.placeholder(tf.bool) y_out = complex_model(X,y,is_training)
cs231n/assignment/assignment2/TensorFlow.ipynb
gutouyu/cs231n
mit
You should observe that even a simple forward pass like this is significantly faster on the GPU. So for the rest of the assignment (and when you go train your models in assignment 3 and your project!), you should use GPU devices. However, with TensorFlow, the default device is a GPU if one is available, and a CPU otherwise, so we can skip the device specification from now on. Train the model. Now that you've seen how to define a model and do a single forward pass of some data through it, let's walk through how you'd actually train one whole epoch over your training data (using the complex_model you created above). Make sure you understand how each TensorFlow function used below corresponds to what you implemented in your custom neural network implementation. First, set up an RMSprop optimizer (using a 1e-3 learning rate) and a cross-entropy loss function. See the TensorFlow documentation for more information: * Layers, Activations, Loss functions : https://www.tensorflow.org/api_guides/python/nn * Optimizers: https://www.tensorflow.org/api_guides/python/train#Optimizers
# Inputs # y_out: is what your model computes # y: is your TensorFlow variable with label information # Outputs # mean_loss: a TensorFlow variable (scalar) with numerical loss # optimizer: a TensorFlow optimizer # This should be ~3 lines of code! mean_loss = None optimizer = None pass # batch normalization in tensorflow requires this extra dependency extra_update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS) with tf.control_dependencies(extra_update_ops): train_step = optimizer.minimize(mean_loss)
cs231n/assignment/assignment2/TensorFlow.ipynb
gutouyu/cs231n
mit
Train the model Below we'll create a session and train the model over one epoch. You should see a loss of 1.4 to 2.0 and an accuracy of 0.4 to 0.5. There will be some variation due to random seeds and differences in initialization
sess = tf.Session() sess.run(tf.global_variables_initializer()) print('Training') run_model(sess,y_out,mean_loss,X_train,y_train,1,64,100,train_step)
cs231n/assignment/assignment2/TensorFlow.ipynb
gutouyu/cs231n
mit
Check the accuracy of the model. Let's see the train and test code in action -- feel free to use these methods when evaluating the models you develop below. You should see a loss of 1.3 to 2.0 with an accuracy of 0.45 to 0.55.
print('Validation') run_model(sess,y_out,mean_loss,X_val,y_val,1,64)
cs231n/assignment/assignment2/TensorFlow.ipynb
gutouyu/cs231n
mit
Train a great model on CIFAR-10! Now it's your job to experiment with architectures, hyperparameters, loss functions, and optimizers to train a model that achieves >= 70% accuracy on the validation set of CIFAR-10. You can use the run_model function from above. Things you should try: Filter size: Above we used 7x7; this makes pretty pictures but smaller filters may be more efficient Number of filters: Above we used 32 filters. Do more or fewer do better? Pooling vs Strided Convolution: Do you use max pooling or just stride convolutions? Batch normalization: Try adding spatial batch normalization after convolution layers and vanilla batch normalization after affine layers. Do your networks train faster? Network architecture: The network above has two layers of trainable parameters. Can you do better with a deep network? Good architectures to try include: [conv-relu-pool]xN -> [affine]xM -> [softmax or SVM] [conv-relu-conv-relu-pool]xN -> [affine]xM -> [softmax or SVM] [batchnorm-relu-conv]xN -> [affine]xM -> [softmax or SVM] Use TensorFlow Scope: Use TensorFlow scope and/or tf.layers to make it easier to write deeper networks. See this tutorial for how to use tf.layers. Use Learning Rate Decay: As the notes point out, decaying the learning rate might help the model converge. Feel free to decay every epoch, when loss doesn't change over an entire epoch, or any other heuristic you find appropriate. See the Tensorflow documentation for learning rate decay. Global Average Pooling: Instead of flattening and then having multiple affine layers, perform convolutions until your image gets small (7x7 or so) and then perform an average pooling operation to get to a 1x1 image picture (1, 1 , Filter#), which is then reshaped into a (Filter#) vector. This is used in Google's Inception Network (See Table 1 for their architecture). Regularization: Add l2 weight regularization, or perhaps use Dropout as in the TensorFlow MNIST tutorial Tips for training For each network architecture that you try, you should tune the learning rate and regularization strength. When doing this there are a couple important things to keep in mind: If the parameters are working well, you should see improvement within a few hundred iterations Remember the coarse-to-fine approach for hyperparameter tuning: start by testing a large range of hyperparameters for just a few training iterations to find the combinations of parameters that are working at all. Once you have found some sets of parameters that seem to work, search more finely around these parameters. You may need to train for more epochs. You should use the validation set for hyperparameter search, and we'll save the test set for evaluating your architecture on the best parameters as selected by the validation set. Going above and beyond If you are feeling adventurous there are many other features you can implement to try and improve your performance. You are not required to implement any of these; however they would be good things to try for extra credit. Alternative update steps: For the assignment we implemented SGD+momentum, RMSprop, and Adam; you could try alternatives like AdaGrad or AdaDelta. Alternative activation functions such as leaky ReLU, parametric ReLU, ELU, or MaxOut. Model ensembles Data augmentation New Architectures ResNets where the input from the previous layer is added to the output. DenseNets where inputs into previous layers are concatenated together. 
This blog has an in-depth overview. If you do decide to implement something extra, clearly describe it in the "Extra Credit Description" cell below. What we expect At the very least, you should be able to train a ConvNet that gets >= 70% accuracy on the validation set. This is just a lower bound - if you are careful it should be possible to get accuracies much higher than that! Extra credit points will be awarded for particularly high-scoring models or unique approaches. You should use the space below to experiment and train your network. The final cell in this notebook should contain the training and validation set accuracies for your final trained network. Have fun and happy training!
# Feel free to play with this cell

def my_model(X,y,is_training):
    pass

tf.reset_default_graph()

X = tf.placeholder(tf.float32, [None, 32, 32, 3])
y = tf.placeholder(tf.int64, [None])
is_training = tf.placeholder(tf.bool)

y_out = my_model(X,y,is_training)
mean_loss = None
optimizer = None
pass

# batch normalization in tensorflow requires this extra dependency
extra_update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS)
with tf.control_dependencies(extra_update_ops):
    train_step = optimizer.minimize(mean_loss)

# Feel free to play with this cell
# This default code creates a session
# and trains your model for 10 epochs
# then prints the validation set accuracy
sess = tf.Session()
sess.run(tf.global_variables_initializer())
print('Training')
run_model(sess,y_out,mean_loss,X_train,y_train,10,64,100,train_step,True)
print('Validation')
run_model(sess,y_out,mean_loss,X_val,y_val,1,64)

# Test your model here, and make sure
# the output of this cell is the accuracy
# of your best model on the training and val sets
# We're looking for >= 70% accuracy on Validation
print('Training')
run_model(sess,y_out,mean_loss,X_train,y_train,1,64)
print('Validation')
run_model(sess,y_out,mean_loss,X_val,y_val,1,64)
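The cell above is only a skeleton (my_model, mean_loss, and optimizer are left as pass/None). As a hedged illustration, not the assignment's reference solution, one way to fill it in with the TF 1.x tf.layers API used elsewhere in this notebook is a [conv-batchnorm-relu-pool]x2 -> affine network; the filter counts and learning rate below are arbitrary starting points that would still need tuning.

```python
import tensorflow as tf  # TF 1.x assumed, matching the rest of this notebook


def my_model(X, y, is_training):
    # [conv - batchnorm - relu - pool] block 1
    conv1 = tf.layers.conv2d(X, filters=32, kernel_size=3, padding='same')
    bn1 = tf.layers.batch_normalization(conv1, training=is_training)
    pool1 = tf.layers.max_pooling2d(tf.nn.relu(bn1), pool_size=2, strides=2)

    # [conv - batchnorm - relu - pool] block 2
    conv2 = tf.layers.conv2d(pool1, filters=64, kernel_size=3, padding='same')
    bn2 = tf.layers.batch_normalization(conv2, training=is_training)
    pool2 = tf.layers.max_pooling2d(tf.nn.relu(bn2), pool_size=2, strides=2)

    # affine head -> 10 CIFAR-10 class scores
    flat = tf.layers.flatten(pool2)
    fc1 = tf.layers.dense(flat, 512, activation=tf.nn.relu)
    return tf.layers.dense(fc1, 10)


# One possible loss/optimizer pairing for the template above (rate needs tuning):
# total_loss = tf.losses.softmax_cross_entropy(tf.one_hot(y, 10), logits=y_out)
# mean_loss = tf.reduce_mean(total_loss)
# optimizer = tf.train.AdamOptimizer(1e-3)
```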
cs231n/assignment/assignment2/TensorFlow.ipynb
gutouyu/cs231n
mit
Describe what you did here In this cell you should also write an explanation of what you did, any additional features that you implemented, and any visualizations or graphs that you make in the process of training and evaluating your network Tell us here Test Set - Do this only once Now that we've gotten a result that we're happy with, we test our final model on the test set. This would be the score we would achieve on a competition. Think about how this compares to your validation set accuracy.
print('Test')
run_model(sess,y_out,mean_loss,X_test,y_test,1,64)
cs231n/assignment/assignment2/TensorFlow.ipynb
gutouyu/cs231n
mit
Chi-Nu Array Detector Angles Author: Patricia Schuster Date: Fall 2016/Winter 2017 Institution: University of Michigan NERS Email: pfschus@umich.edu What are we doing today? Goal: Import and analyze the angles between all of the detector pairs in the Chi-Nu array. As a reminder, this is what the Chi-Nu array looks like:
%%html
<img src="fig/setup.png" width="80%" height="80%">
methods/build_det_df_angles_pairs.ipynb
pfschus/fission_bicorrelation
mit
There are 45 detectors in this array, making for 990 detector pairs:
45*44/2
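As a quick cross-check of that count, not part of the original notebook, the same number can be reproduced either from the n-choose-2 formula or by enumerating the pairs directly:

```python
import itertools
import math

n_dets = 45
print(math.comb(n_dets, 2))                                 # 990 (math.comb needs Python 3.8+)
print(len(list(itertools.combinations(range(n_dets), 2))))  # 990
```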
methods/build_det_df_angles_pairs.ipynb
pfschus/fission_bicorrelation
mit
In order to characterize the angular distribution of the neutrons and gamma-rays emitted in a fission interaction, we are going to analyze the data from pairs of detectors at different angles from one another. In this notebook I am going to import the detector angle data that Matthew provided me and explore the data. 1) Import the angular data to a dictionary 2) Visualize the angular data 3) Find detector pairs in a given angular range 4) Generate pairs vs. angle ranges
# Import packages
import os.path
import time
import numpy as np
np.set_printoptions(threshold=np.nan) # print entire matrices
import sys
import inspect
import matplotlib.pyplot as plt
import scipy.io as sio
from tqdm import *
import pandas as pd
import seaborn as sns
sns.set_palette('spectral')
sns.set_style(style='white')

sys.path.append('../scripts/')
import bicorr as bicorr

%load_ext autoreload
%autoreload 2
methods/build_det_df_angles_pairs.ipynb
pfschus/fission_bicorrelation
mit
Step 1: Initialize pandas DataFrame with detector pairs The detector pair angles are stored in a file lanl_detector_angles.mat. Write a function to load it as an array and then generate a pandas DataFrame. This was done before in bicorr.build_dict_det_pair(); replace it with a pandas DataFrame. Columns will be: Detector 1, Detector 2, Index in bicorr_hist_master, Angle between detectors. We can add more columns later very easily. Load channel lists Use the function bicorr.build_ch_lists() to generate numpy arrays with all of the channel numbers:
help(bicorr.build_ch_lists)

chList, fcList, detList, num_dets, num_det_pairs = bicorr.build_ch_lists(print_flag = True)
methods/build_det_df_angles_pairs.ipynb
pfschus/fission_bicorrelation
mit
Initialize dataFrame with detector channel numbers
det_df = pd.DataFrame(columns=('d1', 'd2', 'd1d2', 'angle'))
methods/build_det_df_angles_pairs.ipynb
pfschus/fission_bicorrelation
mit
The pandas dataFrame should have 990 entries, one for each detector pair. Generate this.
# Fill pandas dataFrame with d1, d2, and d1d2
count = 0
det_pair_chs = np.zeros(num_det_pairs,dtype=np.int)

# Loop through all detector pairs
for i1 in np.arange(0,num_dets):
    det1ch = detList[i1]
    for i2 in np.arange(i1+1,num_dets):
        det2ch = detList[i2]
        det_df.loc[count,'d1'  ] = det1ch
        det_df.loc[count,'d2'  ] = det2ch
        det_df.loc[count,'d1d2'] = 100*det1ch+det2ch
        count = count+1

det_df.head()

plt.plot(det_df['d1d2'],det_df.index,'.k')
plt.xlabel('Detector pair (100*det1ch+det2ch)')
plt.ylabel('Index in det_df')
plt.title('Mapping between detector pair and index')
plt.show()
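Filling a DataFrame row by row with .loc works, but as a hedged aside (not how this notebook proceeds) the same 990-row table could be built in one shot from the enumerated pairs, which is typically much faster:

```python
# Hedged alternative construction of the pair table; assumes detList is the
# same 1-D array of detector channel numbers used above.
import itertools

import numpy as np
import pandas as pd

pairs = list(itertools.combinations(detList, 2))   # 990 (d1, d2) tuples
det_df_alt = pd.DataFrame(pairs, columns=['d1', 'd2'])
det_df_alt['d1d2'] = 100 * det_df_alt['d1'] + det_df_alt['d2']
det_df_alt['angle'] = np.nan                       # filled from the .mat file later
```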
methods/build_det_df_angles_pairs.ipynb
pfschus/fission_bicorrelation
mit
Visualize the dataFrame so far Try using the built-in pandas.DataFrame.plot method.
ax = det_df.plot('d1','d2',kind='scatter', marker = 's',edgecolor='none',s=13, c='d1d2')
plt.xlim([0,50])
plt.ylim([0,50])
ax.set_aspect('equal')
plt.xlabel('Detector 1 channel')
plt.ylabel('Detector 2 channel')
plt.show()
methods/build_det_df_angles_pairs.ipynb
pfschus/fission_bicorrelation
mit
There are some problems with displaying the labels, so instead I will use matplotlib directly. I am writing a function to generate this plot since I will likely want to view it a lot.
bicorr.plot_det_df(det_df, which=['index'])
methods/build_det_df_angles_pairs.ipynb
pfschus/fission_bicorrelation
mit
Step 2: Fill angles column The lanl_detector_angles.mat file is located in my measurements folder:
os.listdir('../meas_info/')
methods/build_det_df_angles_pairs.ipynb
pfschus/fission_bicorrelation
mit
What does this file look like? Import the .mat file and take a look.
det2detAngle = sio.loadmat('../meas_info/lanl_detector_angles.mat')['det2detAngle']

det2detAngle.shape

plt.pcolormesh(det2detAngle, cmap = "viridis")
cbar = plt.colorbar()
cbar.set_label('Angle (degrees)')
plt.xlabel('Detector 1')
plt.ylabel('Detector 2')
plt.show()
methods/build_det_df_angles_pairs.ipynb
pfschus/fission_bicorrelation
mit
The array currently is ndets x ndets with an angle at every index. This is twice as many entries as we need because pairs are repeated at (d1,d2) and (d2,d1). Loop through the pairs and store the angle to the dataframe. Fill the 'angle' column of the DataFrame:
for pair in det_df.index:
    det_df.loc[pair,'angle'] = det2detAngle[int(det_df.loc[pair,'d1'])][int(det_df.loc[pair,'d2'])]

det_df.head()
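A hedged aside: because det2detAngle is simply an ndets x ndets array, the same column can be filled in one vectorized step rather than a Python loop (the names below are the ones already defined in this notebook):

```python
# Vectorized equivalent of the loop above; assumes d1/d2 hold integer channel
# numbers and det2detAngle is indexed as [d1, d2].
d1_idx = det_df['d1'].to_numpy(dtype=int)
d2_idx = det_df['d2'].to_numpy(dtype=int)
det_df['angle'] = det2detAngle[d1_idx, d2_idx]
```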
methods/build_det_df_angles_pairs.ipynb
pfschus/fission_bicorrelation
mit
Visualize the angular data
bicorr.plot_det_df(det_df,which=['angle'])
methods/build_det_df_angles_pairs.ipynb
pfschus/fission_bicorrelation
mit
Verify accuracy of pandas method Make use of git to checkout old versions. Previously, I generated a dictionary that mapped the detector pair d1d2 index to the angle. Verify that the new method using pandas is producing the same array of angles. Old version using channel lists, dictionary
dict_pair_to_index, dict_index_to_pair = bicorr.build_dict_det_pair()
dict_pair_to_angle = bicorr.build_dict_pair_to_angle(dict_pair_to_index,foldername='../../measurements/')

det1ch_old, det2ch_old, angle_old = bicorr.unpack_dict_pair_to_angle(dict_pair_to_angle)
methods/build_det_df_angles_pairs.ipynb
pfschus/fission_bicorrelation
mit
New method using pandas det_df
det_df = bicorr.load_det_df()

det1ch_new = det_df['d1'].values
det2ch_new = det_df['d2'].values
angle_new = det_df['angle'].values
methods/build_det_df_angles_pairs.ipynb
pfschus/fission_bicorrelation
mit
Compare the two
plt.plot([0,180],[0,180],'r')
plt.plot(angle_old, angle_new, '.k')
plt.xlabel('Angle old (degrees)')
plt.ylabel('Angle new (degrees)')
plt.title('Compare angles from new and old method')
plt.show()
methods/build_det_df_angles_pairs.ipynb
pfschus/fission_bicorrelation
mit
Are the angle vectors within 0.001 degrees of each other? If so, then consider the two equal.
np.sum(np.abs(angle_old - angle_new) < 0.001)  # count of pairs agreeing to within 0.001 degrees
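As a hedged alternative to counting elements, numpy's built-in tolerance test gives the same yes/no answer in one call:

```python
# Single boolean check with an absolute tolerance of 0.001 degrees.
np.allclose(angle_old, angle_new, rtol=0, atol=0.001)
```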
methods/build_det_df_angles_pairs.ipynb
pfschus/fission_bicorrelation
mit
Yes, consider them the same. Step 3: Extract information from det_df I need to extract information from det_df using pandas methods. What are a few things I want to do?
det_df.head()
methods/build_det_df_angles_pairs.ipynb
pfschus/fission_bicorrelation
mit
Return rows that meet a given condition There are two primary methods for accessing rows in the dataFrame that meet certain conditions. In our case, the conditions may be which detector pairs or which angle ranges we want to access. Return a True/False mask indicating which entries meet the conditions Return a pandas Index structure containing the indices of those entries As an example, I will look for rows in which d2=8. As a note, this will not be all entries in which channel 8 was involved because there are other pairs in which d1=8 that will not be included. Return the rows Start with the mask method, which can be used to store our conditions.
d = 8

ind_mask = (det_df['d2'] == d)

# Get a glimpse of the mask's first five elements
ind_mask.head()

# View the mask entries that are equal to true
ind_mask[ind_mask]
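A hedged aside: the same selection can also be written with DataFrame.query or boolean .loc indexing, which some readers find more readable than building the mask by hand:

```python
# Both lines return the rows with d2 == 8; names match the DataFrame above.
det_df.query('d2 == 8').head()
det_df.loc[det_df['d2'] == 8].head()
```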
methods/build_det_df_angles_pairs.ipynb
pfschus/fission_bicorrelation
mit
The other method is to use the .index method to return a pandas index structure. Pull the indices from det_df using the mask.
ind = det_df.index[ind_mask]
print(ind)
methods/build_det_df_angles_pairs.ipynb
pfschus/fission_bicorrelation
mit
Count the number of rows Using the mask
np.sum(ind_mask)
methods/build_det_df_angles_pairs.ipynb
pfschus/fission_bicorrelation
mit
Using the index structure
len(ind)
methods/build_det_df_angles_pairs.ipynb
pfschus/fission_bicorrelation
mit
Extract information for a single detector Find indices for that detector
# A single detector, may be d1 or d2
d = 8

ind_mask = (det_df['d1']==d) | (det_df['d2']==d)
ind = det_df.index[ind_mask]
methods/build_det_df_angles_pairs.ipynb
pfschus/fission_bicorrelation
mit
These lines can be accessed in det_df directly.
det_df[ind_mask].head()
methods/build_det_df_angles_pairs.ipynb
pfschus/fission_bicorrelation
mit
Return a list of the other detector in each pair Since the detector may be d1 or d2, I may need to return a list of the other detector in each pair, regardless of the order. How can I generate an array of the other detector in the pair?
det_df_this_det = det_df.loc[ind,['d1','d2']]
det_df_this_det.head()
methods/build_det_df_angles_pairs.ipynb
pfschus/fission_bicorrelation
mit
This is a really stupid method, but I can multiply the two detectors together and then divide by 8 to divide out that channel.
det_df_this_det['dN'] = det_df_this_det.d1 * det_df_this_det.d2 / d
det_df_this_det.head()

plt.plot(det_df_this_det['dN'],'.k')
plt.xlabel('Array in dataFrame')
plt.ylabel('dN (other channel)')
plt.title('Other channel for pairs including ch '+str(d))
plt.show()
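The multiply-and-divide trick above only works because exactly one of d1, d2 equals d; as a hedged alternative, the other channel can be picked out directly without any arithmetic:

```python
# Select whichever column is not d for each row (hedged alternative to the
# multiply/divide approach above).
det_df_this_det['dN'] = np.where(det_df_this_det['d1'] == d,
                                 det_df_this_det['d2'],
                                 det_df_this_det['d1'])
```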
methods/build_det_df_angles_pairs.ipynb
pfschus/fission_bicorrelation
mit
Return the angles
plt.plot(det_df.loc[ind,'angle'],'.k')
plt.xlabel('Index')
plt.ylabel('Angle between pairs')
plt.title('Angle for pairs including ch '+ str(d))
plt.show()

plt.plot(det_df_this_det['dN'],det_df.loc[ind,'angle'],'.k')
plt.axvline(d,color='r')
plt.xlabel('dN (other channel)')
plt.ylabel('Angle between pairs')
plt.title('Angle for pairs including ch '+ str(d))
plt.show()
methods/build_det_df_angles_pairs.ipynb
pfschus/fission_bicorrelation
mit
Extract information for a given pair Find indices for that pair
d1 = 1
d2 = 4
if d2 < d1:
    print('Warning: d2 < d1. Channels inverted')

ind_mask = (det_df['d1']==d1) & (det_df['d2']==d2)
ind = det_df.index[ind_mask]

det_df[ind_mask]

det_df[ind_mask]['angle']
methods/build_det_df_angles_pairs.ipynb
pfschus/fission_bicorrelation
mit
I will write a function that returns the index.
bicorr.d1d2_index(det_df,4,1)
methods/build_det_df_angles_pairs.ipynb
pfschus/fission_bicorrelation
mit
Compare to speed of dictionary For a large number of detector pairs, which is faster for retrieving the indices?
bicorr_data = bicorr.load_bicorr(bicorr_path = '../2017_01_09_pfs_build_bicorr_hist_master/1/bicorr1')
bicorr_data.shape

det_df = bicorr.load_det_df()
dict_pair_to_index, dict_index_to_pair = bicorr.build_dict_det_pair()

d1 = 4
d2 = 8

print(dict_pair_to_index[100*d1+d2])
print(bicorr.d1d2_index(det_df,d1,d2))
methods/build_det_df_angles_pairs.ipynb
pfschus/fission_bicorrelation
mit
Loop through bicorr_data and generate the index for all pairs. Using the dictionary method
start_time = time.time()

for i in tqdm(np.arange(bicorr_data.size),ascii=True):
    d1 = bicorr_data[i]['det1ch']
    d2 = bicorr_data[i]['det2ch']
    index = dict_pair_to_index[100*d1+d2]

print(time.time()-start_time)
methods/build_det_df_angles_pairs.ipynb
pfschus/fission_bicorrelation
mit
Using the pandas dataFrame method
start_time = time.time()

for i in tqdm(np.arange(bicorr_data.size),ascii=True):
    d1 = bicorr_data[i]['det1ch']
    d2 = bicorr_data[i]['det2ch']
    index = bicorr.d1d2_index(det_df,d1,d2)

print(time.time()-start_time)
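A hedged note on this timing comparison: per-row DataFrame lookups are slow mostly because of per-call overhead, so when the whole bicorr_data array is in hand, every event's pair index can be mapped in a single vectorized pass instead of an explicit loop:

```python
# Hedged sketch: map each event's (det1ch, det2ch) to its pair index at once.
# Assumes bicorr_data exposes integer 'det1ch'/'det2ch' fields as used above.
d1d2_codes = 100 * bicorr_data['det1ch'].astype(int) + bicorr_data['det2ch'].astype(int)
indices = pd.Series(d1d2_codes).map(dict_pair_to_index).to_numpy()
```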
methods/build_det_df_angles_pairs.ipynb
pfschus/fission_bicorrelation
mit
I'm not going to run this because tqdm says it will take approximately 24 minutes. So instead I should go with the dict method. But I would like to produce the dictionary from the pandas array directly. Produce dictionaries from det_df Instead of relying on dict_pair_to_index all the time, I will generate it on the fly when filling bicorr_hist_master in build_bicorr_hist_master since that function requires generating the index so many times. The three dictionaries that I need are: dict_pair_to_index dict_index_to_pair dict_pair_to_angle
det_df.index

det_df.head()

det_df[['d1d2','d2']].head()

dict_index_to_pair = det_df['d1d2'].to_dict()
dict_pair_to_index = {v: k for k, v in dict_index_to_pair.items()}
dict_pair_to_angle = pd.Series(det_df['angle'].values,index=det_df['d1d2']).to_dict()
methods/build_det_df_angles_pairs.ipynb
pfschus/fission_bicorrelation
mit
Functionalize these dictionaries so I can produce them on the fly.
help(bicorr.build_dict_det_pair)

dict_pair_to_index, dict_index_to_pair, dict_pair_to_angle = bicorr.build_dict_det_pair(det_df)
methods/build_det_df_angles_pairs.ipynb
pfschus/fission_bicorrelation
mit
Instructions: Save, load det_df file I'm going to store the dataFrame using to_pickle. At this point, it only contains information on the pairs and angles. No bin column has been added.
det_df.to_pickle('../meas_info/det_df_pairs_angles.pkl')
det_df.to_csv('../meas_info/det_df_pairs_angles.csv',index = False)
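For completeness, a hedged sketch of reading these files back directly with pandas (bicorr.load_det_df presumably wraps something similar):

```python
# Read the saved DataFrame back without the bicorr helper.
det_df_from_pkl = pd.read_pickle('../meas_info/det_df_pairs_angles.pkl')
det_df_from_csv = pd.read_csv('../meas_info/det_df_pairs_angles.csv')
```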
methods/build_det_df_angles_pairs.ipynb
pfschus/fission_bicorrelation
mit
Revive the dataFrame from the .pkl file. Write a function to do this automatically. Option to display plots.
help(bicorr.load_det_df)

det_df = bicorr.load_det_df()
det_df.head()

det_df = bicorr.load_det_df()
bicorr.plot_det_df(det_df, show_flag = True, which = ['index'])
bicorr.plot_det_df(det_df, show_flag = True, which = ['angle'])
methods/build_det_df_angles_pairs.ipynb
pfschus/fission_bicorrelation
mit
Let us begin by developing a convenient method for displaying images in our notebooks.
img = sitk.GaussianSource(size=[64] * 2)
plt.imshow(sitk.GetArrayViewFromImage(img))

img = sitk.GaborSource(size=[64] * 2, frequency=0.03)
plt.imshow(sitk.GetArrayViewFromImage(img))

def myshow(img):
    nda = sitk.GetArrayViewFromImage(img)
    plt.imshow(nda)

myshow(img)
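A hedged extension of myshow (not from the original notebook) that adds an optional title and hides the axis ticks, which is often convenient when displaying images:

```python
# Slightly extended display helper; assumes SimpleITK (sitk) and matplotlib
# (plt) are imported as in the surrounding notebook.
def myshow_titled(img, title=None, cmap='gray'):
    nda = sitk.GetArrayViewFromImage(img)
    plt.imshow(nda, cmap=cmap)
    plt.axis('off')
    if title is not None:
        plt.title(title)
    plt.show()

myshow_titled(img, title='Gabor source')
```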
Python/02_Pythonic_Image.ipynb
InsightSoftwareConsortium/SimpleITK-Notebooks
apache-2.0