repo_name (string, 6-77 chars) | path (string, 8-215 chars) | license (15 classes) | content (string, 335-154k chars)
---|---|---|---|
muratcemkose/cy-rest-python
|
advanced/CytoscapeREST_KEGG_time_series.ipynb
|
mit
|
import json
import requests
import pandas as pd
PORT_NUMBER = 1234
BASE_URL = "http://localhost:" + str(PORT_NUMBER) + "/v1/"
HEADERS = {'Content-Type': 'application/json'}
"""
Explanation: Visualizing time series metabolome profile
by Kozo Nishida (Riken, Japan)
Software Requirements
Please install the following software packages to run this workflow:
KEGGscape
enhancedGraphics
Background
This is a sample workflow to automate a complex Cytoscape data integration/visualization process. Please read the following document for more background:
https://github.com/idekerlab/KEGGscape/wiki/How-to-visualize-time-series-metabolome-profile
End of explanation
"""
pathway_location = "http://rest.kegg.jp/get/ath00020/kgml"
res1 = requests.post(BASE_URL + "networks?source=url", data=json.dumps([pathway_location]), headers=HEADERS)
result = json.loads(res1.content)
pathway_suid = result[0]["networkSUID"][0]
print("Pathway SUID = " + str(pathway_suid))
"""
Explanation: Load a KGML pathway data file from KEGG REST API
End of explanation
"""
profile_csv = "https://raw.githubusercontent.com/idekerlab/KEGGscape/develop/wiki/data/light-dark-20.csv"
profile_df = pd.read_csv(profile_csv)
profile_df.head()
"""
Explanation: Load table data file as Pandas DataFrame
End of explanation
"""
profile = json.loads(profile_df.to_json(orient="records"))
# print(json.dumps(profile, indent=4))
new_table_data = {
"key": "KEGG_NODE_LABEL",
"dataKey": "KEGG",
"data": profile
}
update_table_url = BASE_URL + "networks/" + str(pathway_suid) + "/tables/defaultnode"
requests.put(update_table_url, data=json.dumps(new_table_data), headers=HEADERS)
"""
Explanation: Convert the DataFrame to JSON and send it to Cytoscape
End of explanation
"""
chart_entry = 'barchart: attributelist="ld20t14,ld20t16,ld20t20,ld20t24,ld20t28,ld20t32,ld20t36,ld20t40,ld20t44,ld20t48,ld20t52,ld20t56,ld20t60,ld20t64,ld20t68,ld20t72" colorlist="up:red,zero:red,down:red" showlabels="false"'
target_row_url = BASE_URL + "networks/" + str(pathway_suid) + "/tables/defaultnode/columns/KEGG"
res2 = requests.get(target_row_url)
matched = json.loads(res2.content)["values"]
df2 = pd.DataFrame(columns=["id", "chart"]);
df2["id"] = matched
df2["chart"] = chart_entry
data = json.loads(df2.to_json(orient="records"))
chart_data = {
"key": "KEGG",
"dataKey": "id",
"data": data
}
requests.put(update_table_url, data=json.dumps(chart_data), headers=HEADERS)
"""
Explanation: Set values to the chart column
End of explanation
"""
custom_graphics_mapping = {
"mappingType" : "passthrough",
"mappingColumn" : "chart",
"mappingColumnType" : "String",
"visualProperty" : "NODE_CUSTOMGRAPHICS_1"
}
style_url = BASE_URL + "styles/KEGG Style/mappings"
requests.post(style_url, data=json.dumps([custom_graphics_mapping]), headers=HEADERS)
"""
Explanation: Create Visual Style for Custom Mapping
End of explanation
"""
|
scollis/scipy_2015
|
example/test.ipynb
|
bsd-2-clause
|
import numpy as np
import pandas as pd
import pandas.io.data as pdd
from urllib import urlretrieve
%matplotlib inline
"""
Explanation: <img src="CA_logo.png" alt="Continuum Analytics" width="20%" align="right" border="4"><br><br><br><br>
Interactive Financial Analytics with Python & IPython
Tutorial with Examples based on the VSTOXX Volatility Index
Dr. Yves J. Hilpisch
Continuum Analytics Europe GmbH
<a href="http://www.continuum.io" target="_blank">www.continuum.io</a>
<a href="mailto:yves@continuum.io">yves@continuum.io</a>
<a href="http://twitter.com/dyjh" target="_blank">@dyjh</a>
For Python Quants – 14. March 2014
You find the presentation and the IPython Notebook here:
<a href="http://www.hilpisch.com/YH_FPQ_Volatility_Tutorial.html" target="_blank">http://www.hilpisch.com/YH_FPQ_Volatility_Tutorial.html</a>
<a href="http://www.hilpisch.com/YH_FPQ_Volatility_Tutorial.ipynb" target="_blank">http://www.hilpisch.com/YH_FPQ_Volatility_Tutorial.ipynb</a>
About Me
A brief bio:
Managing Director Europe of Continuum Analytics Inc.
Founder of Visixion GmbH – The Python Quants
Lecturer Mathematical Finance at Saarland University
Focus on Financial Industry and Financial Analytics
Book "Derivatives Analytics with Python" (2013)
Book "Python for Finance" O'Reilly (2014)
Dr.rer.pol in Mathematical Finance
Graduate in Business Administration
Martial Arts Practitioner and Fan
See <a href="http://www.hilpisch.com" target="_blank">www.hilpisch.com</a>.
<img src="python_for_finance.png" alt="Python for Finance" style="width: 45%; border: 2px solid black;">
<a href="http://shop.oreilly.com/product/0636920032441.do" target="_blank">Python for Finance (O'Reilly Shop)</a>
Python for Analytics
This tutorial focuses on
Python as a general purpose financial analytics environment
interactive analytics examples
prototyping-like Python usage
It does not address important issues such as
architectural issues regarding hardware and software
development processes, testing, documentation and production
real world problem modeling
A fundamental Python stack for interactive data analytics and visualization should at least contain the following libraries and tools:
Python – the Python interpreter itself
NumPy – high performance, flexible array structures and operations
SciPy – collection of scientific modules and functions (e.g. for regression, optimization, integration)
pandas – time series and panel data analysis and I/O
PyTables – hierarchical, high performance database (e.g. for out-of-memory analytics)
matplotlib – 2d and 3d visualization
IPython – interactive data analytics, visualization, publishing
It is best to use a Python distribution like Anaconda to ensure consistency of libraries.
First Financial Analytics Example
We need to make a couple of imports for what is to come.
End of explanation
"""
try:
index = pdd.DataReader('^GDAXI', data_source='yahoo', start='2007/3/30')
# e.g. the EURO STOXX 50 ticker symbol -- ^SX5E
except:
index = pd.read_csv('dax.txt', index_col=0, parse_dates=True)
index.info()
"""
Explanation: The convenience function DataReader makes it easy to read historical stock price data from Yahoo! Finance (http://finance.yahoo.com).
End of explanation
"""
index.tail()
"""
Explanation: pandas' strength is the handling of indexed/labeled/structured data, like time series data.
End of explanation
"""
index['Returns'] = np.log(index['Close'] / index['Close'].shift(1))
"""
Explanation: pandas makes it easy to implement vectorized operations, like calculating log-returns over whole time series.
End of explanation
"""
index[['Close', 'Returns']].plot(subplots=True, style='b', figsize=(8, 5))
"""
Explanation: In addition, pandas makes plotting quite simple and compact.
End of explanation
"""
index['Mov_Vol'] = pd.rolling_std(index['Returns'], window=252) * np.sqrt(252)
"""
Explanation: We now want to check how annual volatility changes over time.
End of explanation
"""
index[['Close', 'Returns', 'Mov_Vol']].plot(subplots=True, style='b', figsize=(8, 5))
"""
Explanation: Obviously, the annual volatility changes significantly over time.
End of explanation
"""
import pandas as pd
import datetime as dt
from urllib import urlretrieve
try:
es_url = 'http://www.stoxx.com/download/historical_values/hbrbcpe.txt'
vs_url = 'http://www.stoxx.com/download/historical_values/h_vstoxx.txt'
urlretrieve(es_url, 'es.txt')
urlretrieve(vs_url, 'vs.txt')
except:
pass
"""
Explanation: Exercise
Trend-based investment strategy with the EURO STOXX 50 index:
2 trends 42d & 252d
long, short, cash positions
no transaction costs
Signal generation:
invest (go long) when the 42d trend is more than 100 points above the 252d trend
sell (go short) when the 42d trend is more than 20 points below the 252d trend
invest in cash (no interest) when neither condition is true (a sketch follows after this cell)
Historical Correlation between EURO STOXX 50 and VSTOXX
It is a stylized fact that stock indexes and related volatility indexes are highly negatively correlated. The following example analyzes this stylized fact based on the EURO STOXX 50 stock index and the VSTOXX volatility index using Ordinary Least-Squares regression (OLS).
First, we collect historical data for both the EURO STOXX 50 stock and the VSTOXX volatility index.
End of explanation
"""
lines = open('es.txt').readlines() # reads the whole file line-by-line
lines[:5] # header not well formatted
"""
Explanation: The EURO STOXX 50 data is not yet in the right format. Some house cleaning is necessary (I).
End of explanation
"""
lines[3883:3890] # from 27.12.2001 additional semi-colon
"""
Explanation: The EURO STOXX 50 data is not yet in the right format. Some house cleaning is necessary (II).
End of explanation
"""
lines = open('es.txt').readlines() # reads the whole file line-by-line
new_file = open('es50.txt', 'w') # opens a new file
new_file.writelines('date' + lines[3][:-1].replace(' ', '') + ';DEL' + lines[3][-1])
# writes the corrected third line (additional column name)
# of the original file as first line of new file
new_file.writelines(lines[4:]) # writes the remaining lines of the original file
"""
Explanation: The EURO STOXX 50 data is not yet in the right format. Some house cleaning is necessary (III).
End of explanation
"""
list(open('es50.txt'))[:5] # opens the new file for inspection
"""
Explanation: The EURO STOXX 50 data is not yet in the right format. Some house cleaning is necessary (IV).
End of explanation
"""
es = pd.read_csv('es50.txt', index_col=0, parse_dates=True, sep=';', dayfirst=True)
del es['DEL'] # delete the helper column
es.info()
"""
Explanation: Now, the data can be safely read into a DataFrame object.
End of explanation
"""
vs = pd.read_csv('vs.txt', index_col=0, header=2, parse_dates=True, sep=',', dayfirst=True)
# you can alternatively read from the Web source directly
# without saving the csv file to disk:
# vs = pd.read_csv(vs_url, index_col=0, header=2,
# parse_dates=True, sep=',', dayfirst=True)
"""
Explanation: The VSTOXX data can be read without touching the raw data.
End of explanation
"""
import datetime as dt
data = pd.DataFrame({'EUROSTOXX' :
es['SX5E'][es.index > dt.datetime(1999, 12, 31)]})
data = data.join(pd.DataFrame({'VSTOXX' :
vs['V2TX'][vs.index > dt.datetime(1999, 12, 31)]}))
data.info()
"""
Explanation: We now merge the data for further analysis.
End of explanation
"""
data.head()
"""
Explanation: Let's inspect the two time series.
End of explanation
"""
data.plot(subplots=True, grid=True, style='b', figsize=(10, 5))
"""
Explanation: A picture can tell almost the complete story.
End of explanation
"""
rets = np.log(data / data.shift(1))
rets.head()
"""
Explanation: We now generate log returns for both time series.
End of explanation
"""
xdat = rets['EUROSTOXX']
ydat = rets['VSTOXX']
model = pd.ols(y=ydat, x=xdat)
model
"""
Explanation: To this new data set, also stored in a DataFrame object, we apply OLS.
End of explanation
"""
import matplotlib.pyplot as plt
plt.plot(xdat, ydat, 'r.')
ax = plt.axis() # grab axis values
x = np.linspace(ax[0], ax[1] + 0.01)
plt.plot(x, model.beta[1] + model.beta[0] * x, 'b', lw=2)
plt.grid(True)
plt.axis('tight')
"""
Explanation: Again, we want to see how our results look graphically.
End of explanation
"""
import matplotlib as mpl
mpl_dates = mpl.dates.date2num(rets.index)
plt.figure(figsize=(8, 4))
plt.scatter(rets['EUROSTOXX'], rets['VSTOXX'], c=mpl_dates, marker='o')
plt.grid(True)
plt.xlabel('EUROSTOXX')
plt.ylabel('VSTOXX')
plt.colorbar(ticks=mpl.dates.DayLocator(interval=250),
format=mpl.dates.DateFormatter('%d %b %y'))
"""
Explanation: Let us see if we can identify systematics over time. And indeed, during the crisis 2007/2008 (yellow dots) volatility has been more pronounced than more recently (red dots).
End of explanation
"""
data = data.dropna()
data = data / data.ix[0] * 100
data.head()
"""
Explanation: Exercise
We want to test whether the EURO STOXX 50 and/or the VSTOXX returns are normally distributed or not (e.g. if they might have fat tails). We want to do a
graphical illustration (using qqplot of statsmodels.api) and a
statistical test (using normaltest of scipy.stats)
Add on: plot a histogram of the log return frequencies and compare that to a normal distribution with the same mean and variance (using e.g. norm.pdf from scipy.stats); a sketch of these checks follows after this cell
Constant Proportion VSTOXX Investment
There have been a number of studies illustrating that constant proportion investments in volatility derivatives – given a diversified equity portfolio – might improve investment performance considerably. See, for instance, the study
<a href="http://www.eurexgroup.com/group-en/newsroom/60036/" target="_blank">The Benefits of Volatility Derivatives in Equity Portfolio Management</a>
We now want to replicate (in a simplified fashion) what you can flexibly test here on the basis of two backtesting applications for VSTOXX-based investment strategies:
<a href="http://www.eurexchange.com/vstoxx/app1/" target="_blank">Two Assets Backtesting</a>
<a href="http://www.eurexchange.com/vstoxx/app2/" target="_blank">Four Assets Backtesting</a>
The strategy we are going to implement and test is characterized as follows:
An investor has total wealth of say 100,000 EUR
He invests, say, 70% of that into a diversified equity portfolio
The remainder, i.e. 30%, is invested in the VSTOXX index directly
Through (daily) trading the investor keeps the proportions constant
No transaction costs apply, all assets are infinitely divisible
We already have the necessary data available. However, we want to drop 'NaN' values and want to normalize the index values.
End of explanation
"""
invest = 100
cratio = 0.3
data['Equity'] = (1 - cratio) * invest / data['EUROSTOXX'][0]
data['Volatility'] = cratio * invest / data['VSTOXX'][0]
"""
Explanation: First, the initial investment.
End of explanation
"""
data['Static'] = (data['Equity'] * data['EUROSTOXX']
+ data['Volatility'] * data['VSTOXX'])
data[['EUROSTOXX', 'Static']].plot(figsize=(10, 5))
"""
Explanation: This can already be considered a static investment strategy.
End of explanation
"""
for i in range(1, len(data)):
evalue = data['Equity'][i - 1] * data['EUROSTOXX'][i]
# value of equity position
vvalue = data['Volatility'][i - 1] * data['VSTOXX'][i]
# value of volatility position
tvalue = evalue + vvalue
# total wealth
data['Equity'][i] = (1 - cratio) * tvalue / data['EUROSTOXX'][i]
# re-allocation of total wealth to equity ...
data['Volatility'][i] = cratio * tvalue / data['VSTOXX'][i]
# ... and volatility position
"""
Explanation: Second, the dynamic strategy with daily adjustments to keep the value ratio constant.
End of explanation
"""
data['Dynamic'] = (data['Equity'] * data['EUROSTOXX']
+ data['Volatility'] * data['VSTOXX'])
data.head()
"""
Explanation: Third, the total wealth position.
End of explanation
"""
(data['Volatility'] * data['VSTOXX'] / data['Dynamic'])[:5]
(data['Equity'] * data['EUROSTOXX'] / data['Dynamic'])[:5]
"""
Explanation: A brief check if the ratios are indeed constant.
End of explanation
"""
data[['EUROSTOXX', 'Dynamic']].plot(figsize=(10, 5))
"""
Explanation: Let us inspect the performance of the strategy.
End of explanation
"""
try:
url = 'http://hopey.netfonds.no/posdump.php?'
url += 'date=%s%s%s&paper=AAPL.O&csv_format=csv' % ('2014', '03', '12')
# you may have to adjust the date since only recent dates are available
urlretrieve(url, 'aapl.csv')
except:
pass
AAPL = pd.read_csv('aapl.csv', index_col=0, header=0, parse_dates=True)
AAPL.info()
"""
Explanation: Exercise
Write a Python function which allows for an arbitrary but constant ratio to be invested in the VSTOXX index and which returns net performance values (in percent) for the constant proportion VSTOXX strategy (a sketch follows after this cell).
Add on: find the ratio to be invested in the VSTOXX that gives the maximum performance.
Analyzing High Frequency Data
Using standard Python functionality and pandas, the code that follows reads intraday, high-frequency data from a Web source, plots it and resamples it.
End of explanation
"""
AAPL['bid'].plot()
AAPL = AAPL[AAPL.index > dt.datetime(2014, 3, 12, 10, 0, 0)]
# only data later than 10am at that day
"""
Explanation: The intraday evolution of the Apple stock price.
End of explanation
"""
# this resamples the record frequency to 5 minutes, using mean as aggregation rule
AAPL_5min = AAPL.resample(rule='5min', how='mean').fillna(method='ffill')
AAPL_5min.head()
"""
Explanation: A resampling of the data is easily accomplished with pandas.
End of explanation
"""
AAPL_5min['bid'].plot()
"""
Explanation: Let's have a graphical look at the new data set.
End of explanation
"""
AAPL_5min['bid'].apply(lambda x: 2 * 530 - x).plot()
# this mirrors the stock price development at the level of 530
"""
Explanation: With pandas you can easily apply custom functions to time series data.
End of explanation
"""
|
kubeflow/examples
|
natural-language-processing-with-disaster-tweets-kaggle-competition/natural-language-processing-with-disaster-tweets-kale.ipynb
|
apache-2.0
|
pip install -r requirements.txt
"""
Explanation: Basic Intro
In this competition, you're challenged to build a machine learning model that predicts which Tweets are about real disasters and which ones aren't.
What's in this kernel?
Basic EDA
Data Cleaning
Baseline Model
Unzipping the file
Importing required Libraries.
End of explanation
"""
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import numpy as np
import nltk
nltk.download('stopwords')
nltk.download('punkt')
from nltk.corpus import stopwords
from nltk.util import ngrams
from sklearn.feature_extraction.text import CountVectorizer
from collections import defaultdict
from collections import Counter
plt.style.use('ggplot')
stop=set(stopwords.words('english'))
import re
from nltk.tokenize import word_tokenize
import gensim
import string
from keras.preprocessing.text import Tokenizer
from keras.preprocessing.sequence import pad_sequences
from tqdm import tqdm
from keras.models import Sequential
from keras.layers import Embedding,LSTM,Dense,SpatialDropout1D
from keras.initializers import Constant
from sklearn.model_selection import train_test_split
from tensorflow.keras.optimizers import Adam
import os
#os.listdir('../input/glove-global-vectors-for-word-representation/glove.6B.100d.txt')
"""
Explanation: Importing Libraries
End of explanation
"""
tweet= pd.read_csv('./data/train.csv')
test=pd.read_csv('./data/test.csv')
tweet.head(3)
print('There are {} rows and {} columns in train'.format(tweet.shape[0],tweet.shape[1]))
print('There are {} rows and {} columns in test'.format(test.shape[0],test.shape[1]))
"""
Explanation: Load data
End of explanation
"""
x=tweet.target.value_counts()
sns.barplot(x.index,x)
plt.gca().set_ylabel('samples')
"""
Explanation: Class distribution
Before we begin with anything else, let's check the class distribution. There are only two classes, 0 and 1.
End of explanation
"""
fig,(ax1,ax2)=plt.subplots(1,2,figsize=(10,5))
tweet_len=tweet[tweet['target']==1]['text'].str.len()
ax1.hist(tweet_len,color='red')
ax1.set_title('disaster tweets')
tweet_len=tweet[tweet['target']==0]['text'].str.len()
ax2.hist(tweet_len,color='green')
ax2.set_title('Not disaster tweets')
fig.suptitle('Characters in tweets')
plt.show()
"""
Explanation: Ohh, as expected! There is a class imbalance: there are more tweets with class 0 (no disaster) than class 1 (disaster tweets).
Exploratory Data Analysis of tweets
First, we will do a very basic analysis, that is, character-level, word-level and sentence-level analysis.
Number of characters in tweets
End of explanation
"""
fig,(ax1,ax2)=plt.subplots(1,2,figsize=(10,5))
tweet_len=tweet[tweet['target']==1]['text'].str.split().map(lambda x: len(x))
ax1.hist(tweet_len,color='red')
ax1.set_title('disaster tweets')
tweet_len=tweet[tweet['target']==0]['text'].str.split().map(lambda x: len(x))
ax2.hist(tweet_len,color='green')
ax2.set_title('Not disaster tweets')
fig.suptitle('Words in a tweet')
plt.show()
"""
Explanation: The distribution of both seems to be almost the same. 120 to 140 characters in a tweet are the most common in both.
Number of words in a tweet
End of explanation
"""
fig,(ax1,ax2)=plt.subplots(1,2,figsize=(10,5))
word=tweet[tweet['target']==1]['text'].str.split().apply(lambda x : [len(i) for i in x])
sns.distplot(word.map(lambda x: np.mean(x)),ax=ax1,color='red')
ax1.set_title('disaster')
word=tweet[tweet['target']==0]['text'].str.split().apply(lambda x : [len(i) for i in x])
sns.distplot(word.map(lambda x: np.mean(x)),ax=ax2,color='green')
ax2.set_title('Not disaster')
fig.suptitle('Average word length in each tweet')
def create_corpus(target):
corpus=[]
for x in tweet[tweet['target']==target]['text'].str.split():
for i in x:
corpus.append(i)
return corpus
"""
Explanation: Average word length in a tweet
End of explanation
"""
corpus=create_corpus(0)
dic=defaultdict(int)
for word in corpus:
if word in stop:
dic[word]+=1
top=sorted(dic.items(), key=lambda x:x[1],reverse=True)[:10]
x,y=zip(*top)
plt.bar(x,y)
"""
Explanation: Common stopwords in tweets
First we will analyze tweets with class 0.
End of explanation
"""
corpus=create_corpus(1)
dic=defaultdict(int)
for word in corpus:
if word in stop:
dic[word]+=1
top=sorted(dic.items(), key=lambda x:x[1],reverse=True)[:10]
x,y=zip(*top)
plt.bar(x,y)
"""
Explanation: Now,we will analyze tweets with class 1.
End of explanation
"""
plt.figure(figsize=(10,5))
corpus=create_corpus(1)
dic=defaultdict(int)
import string
special = string.punctuation
for i in (corpus):
if i in special:
dic[i]+=1
x,y=zip(*dic.items())
plt.bar(x,y)
"""
Explanation: In both of them, "the" dominates, followed by "a" in class 0 and "in" in class 1.
Analyzing punctuations.
First let's check tweets indicating real disaster.
End of explanation
"""
plt.figure(figsize=(10,5))
corpus=create_corpus(0)
dic=defaultdict(int)
import string
special = string.punctuation
for i in (corpus):
if i in special:
dic[i]+=1
x,y=zip(*dic.items())
plt.bar(x,y,color='green')
"""
Explanation: Now,we will move on to class 0.
End of explanation
"""
counter=Counter(corpus)
most=counter.most_common()
x=[]
y=[]
for word,count in most[:40]:
if (word not in stop) :
x.append(word)
y.append(count)
sns.barplot(x=y,y=x)
"""
Explanation: Common words ?
End of explanation
"""
def get_top_tweet_bigrams(corpus, n=None):
vec = CountVectorizer(ngram_range=(2, 2)).fit(corpus)
bag_of_words = vec.transform(corpus)
sum_words = bag_of_words.sum(axis=0)
words_freq = [(word, sum_words[0, idx]) for word, idx in vec.vocabulary_.items()]
words_freq =sorted(words_freq, key = lambda x: x[1], reverse=True)
return words_freq[:n]
plt.figure(figsize=(10,5))
top_tweet_bigrams=get_top_tweet_bigrams(tweet['text'])[:10]
x,y=map(list,zip(*top_tweet_bigrams))
sns.barplot(x=y,y=x)
"""
Explanation: A lot of cleaning is needed!
Ngram analysis
We will do a bigram (n=2) analysis over the tweets. Let's check the most common bigrams in tweets.
End of explanation
"""
df=pd.concat([tweet,test])
df.shape
"""
Explanation: We will need a lot of cleaning here.
Data Cleaning
As we know, Twitter tweets always have to be cleaned before we move on to modelling. So we will do some basic cleaning such as spelling correction, removing punctuation, removing HTML tags and emojis, etc. Let's start.
End of explanation
"""
def remove_URL(text):
url = re.compile(r'https?://\S+|www\.\S+')
return url.sub(r'',text)
df['text']=df['text'].apply(lambda x : remove_URL(x))
"""
Explanation: Removing urls
End of explanation
"""
def remove_html(text):
html=re.compile(r'<.*?>')
return html.sub(r'',text)
df['text']=df['text'].apply(lambda x : remove_html(x))
"""
Explanation: Removing HTML tags
End of explanation
"""
# Reference : https://gist.github.com/slowkow/7a7f61f495e3dbb7e3d767f97bd7304b
def remove_emoji(text):
emoji_pattern = re.compile("["
u"\U0001F600-\U0001F64F" # emoticons
u"\U0001F300-\U0001F5FF" # symbols & pictographs
u"\U0001F680-\U0001F6FF" # transport & map symbols
u"\U0001F1E0-\U0001F1FF" # flags (iOS)
u"\U00002702-\U000027B0"
u"\U000024C2-\U0001F251"
"]+", flags=re.UNICODE)
return emoji_pattern.sub(r'', text)
df['text']=df['text'].apply(lambda x: remove_emoji(x))
"""
Explanation: Removing Emojis
End of explanation
"""
def remove_punct(text):
table=str.maketrans('','',string.punctuation)
return text.translate(table)
df['text']=df['text'].apply(lambda x : remove_punct(x))
"""
Explanation: Removing punctuations
End of explanation
"""
def create_corpus(df):
corpus=[]
for tweet in tqdm(df['text']):
words=[word.lower() for word in word_tokenize(tweet) if((word.isalpha()==1) & (word not in stop))]
corpus.append(words)
return corpus
corpus=create_corpus(df)
"""
Explanation: Spelling Correction
Even if I'm not good at spelling, I can correct it with Python :) I will use pyspellchecker to do that (a sketch follows after this cell).
Corpus Creation
End of explanation
"""
# download files
import wget
import zipfile
wget.download("http://nlp.stanford.edu/data/glove.6B.zip", './glove.6B.zip')
with zipfile.ZipFile("glove.6B.zip", 'r') as zip_ref:
zip_ref.extractall("./")
"""
Explanation: Download Glove
End of explanation
"""
embedding_dict={}
with open("./glove.6B.100d.txt",'r') as f:
for line in f:
values=line.split()
word=values[0]
vectors=np.asarray(values[1:],'float32')
embedding_dict[word]=vectors
f.close()
MAX_LEN=50
tokenizer_obj=Tokenizer()
tokenizer_obj.fit_on_texts(corpus)
sequences=tokenizer_obj.texts_to_sequences(corpus)
tweet_pad=pad_sequences(sequences,maxlen=MAX_LEN,truncating='post',padding='post')
word_index=tokenizer_obj.word_index
print('Number of unique words:',len(word_index))
num_words=len(word_index)+1
embedding_matrix=np.zeros((num_words,100))
for word,i in tqdm(word_index.items()):
if i > num_words:
continue
emb_vec=embedding_dict.get(word)
if emb_vec is not None:
embedding_matrix[i]=emb_vec
"""
Explanation: Embedding Step
End of explanation
"""
model=Sequential()
embedding=Embedding(num_words,100,embeddings_initializer=Constant(embedding_matrix),
input_length=MAX_LEN,trainable=False)
model.add(embedding)
model.add(SpatialDropout1D(0.2))
model.add(LSTM(64, dropout=0.2, recurrent_dropout=0.2))
model.add(Dense(1, activation='sigmoid'))
optimzer=Adam(learning_rate=1e-5)
model.compile(loss='binary_crossentropy',optimizer=optimzer,metrics=['accuracy'])
model.summary()
train=tweet_pad[:tweet.shape[0]]
final_test=tweet_pad[tweet.shape[0]:]
X_train,X_test,y_train,y_test=train_test_split(train,tweet['target'].values,test_size=0.15)
print('Shape of train',X_train.shape)
print("Shape of Validation ",X_test.shape)
"""
Explanation: Baseline Model
End of explanation
"""
history=model.fit(X_train,y_train,batch_size=4,epochs=5,validation_data=(X_test,y_test),verbose=2)
"""
Explanation: Training Model
End of explanation
"""
sample_sub=pd.read_csv('./data/sample_submission.csv')
y_pre=model.predict(final_test)
y_pre=np.round(y_pre).astype(int).reshape(3263)
sub=pd.DataFrame({'id':sample_sub['id'].values.tolist(),'target':y_pre})
sub.to_csv('submission.csv',index=False)
sub.head()
"""
Explanation: Making our submission
End of explanation
"""
|
ctroupin/OceanData_NoteBooks
|
PythonNotebooks/PlatformPlots/plot_CMEMS_mooring.ipynb
|
gpl-3.0
|
%matplotlib inline
import netCDF4
import numpy as np
import matplotlib as mpl
import matplotlib.pyplot as plt
from matplotlib import colors
from mpl_toolkits.basemap import Basemap
"""
Explanation: The objective of this notebook is to show how to read and plot data from a mooring (time series).
End of explanation
"""
datadir = './datafiles/'
datafile = 'GL_TS_MO_62164.nc'
"""
Explanation: Data reading
The data file is located in the datafiles directory.
End of explanation
"""
with netCDF4.Dataset(datadir + datafile) as nc:
time0 = nc.variables['TIME'][:]
time0_units = nc.variables['TIME'].units
temperature = nc.variables['TEMP'][:]
temperature_units = nc.variables['TEMP'].units
print ('Temperature units = %s' %temperature_units)
"""
Explanation: As the platform is fixed, we will work on time series.<br/>
We will read the time and the sea water temperature variables, as well as their respective units.
End of explanation
"""
fig = plt.figure(figsize=(8,8))
ax = plt.subplot(111)
plt.plot(time0, temperature, 'k-')
plt.xlabel(time0_units)
plt.ylabel(temperature_units)
plt.show()
"""
Explanation: Basic plot
For a time series, we simply use the plot function of matplotlib.<br/>
Also, we set the font size to 16:
End of explanation
"""
from netCDF4 import num2date
dates = num2date(time0, units=time0_units)
print(dates[:5])
"""
Explanation: The units set for the time are maybe not the easiest to read.<br/>
However the netCDF4 module offers easy solutions to properly convert the time.
Converting time units
NetCDF4 provides the function num2date to convert the time vector into dates.<br/>
http://unidata.github.io/netcdf4-python/#section7
End of explanation
"""
with netCDF4.Dataset(datadir + datafile) as nc:
platform_name = nc.platform_name
"""
Explanation: The dates variable contains datetime objects.
We also extract the platform name from the file:
End of explanation
"""
fig = plt.figure(figsize=(8,8))
ax = plt.subplot(111)
plt.plot(dates, temperature, 'k-')
plt.ylabel(temperature_units)
fig.autofmt_xdate()
plt.title('Temperature at ' + platform_name)
plt.show()
"""
Explanation: Finally, to avoid overlapping date ticklabels, we use the autofmt_xdate function.<br/>
Everything is in place to create the improved plot.
End of explanation
"""
|
sthuggins/phys202-2015-work
|
assignments/assignment03/NumpyEx02.ipynb
|
mit
|
import numpy as np
%matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sns
"""
Explanation: Numpy Exercise 2
Imports
End of explanation
"""
def np_fact(n):
    """Compute n! = n*(n-1)*...*1 using Numpy."""
    if n == 0:
        return 1
    return np.arange(1, n + 1).cumprod()[-1]
assert np_fact(0)==1
assert np_fact(1)==1
assert np_fact(10)==3628800
assert [np_fact(i) for i in range(0,11)]==[1,1,2,6,24,120,720,5040,40320,362880,3628800]
"""
Explanation: Factorial
Write a function that computes the factorial of small numbers using np.arange and np.cumprod.
End of explanation
"""
def loop_fact(n):
    """Compute n! using a Python for loop."""
    fact = 1
    for i in range(1, n + 1):
        fact = fact * i
    return fact
assert loop_fact(0)==1
assert loop_fact(1)==1
assert loop_fact(10)==3628800
assert [loop_fact(i) for i in range(0,11)]==[1,1,2,6,24,120,720,5040,40320,362880,3628800]
"""
Explanation: Write a function that computes the factorial of small numbers using a Python loop.
End of explanation
"""
%timeit -n1 -r1 np_fact(50)
%timeit -n1 -r1 loop_fact(50)
"""
Explanation: Use the %timeit magic to time both versions of this function for an argument of 50. The syntax for %timeit is:
python
%timeit -n1 -r1 function_to_time()
End of explanation
"""
|
skkandrach/foundations-homework
|
Homework_8_DIY_Soma.ipynb
|
mit
|
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
!pip install xlrd
df = pd.read_excel("NHL 2014-15.xls")
df.columns.value_counts()
"""
Explanation: 2016 NHL Hockey Data Set - Sasha Kandrach
End of explanation
"""
df.head()
"""
Explanation: Here's all of our data:
End of explanation
"""
df.columns
"""
Explanation: Here are each of the columns in the data set:
End of explanation
"""
df['Ctry'].value_counts().head(10)
"""
Explanation: Let's count how many players are from each country:
End of explanation
"""
df['Nat'].value_counts().head(10)
"""
Explanation: Now let's count players by nationality: the values are basically the same as the country column, but in some cases they differ slightly
End of explanation
"""
df['Birth City'].value_counts().head(10)
"""
Explanation: And now let's look at the top ten most common birth cities
End of explanation
"""
df[(df['Birth City'] == 'Toronto') & (df['Draft'] < 2006.0)].head()
"""
Explanation: Let's look at how many of those Toronto-born players were drafted before 2006
End of explanation
"""
df[(df['Birth City'] == 'Edmonton') & (df['Draft'] < 2006.0)].head()
"""
Explanation: Let's look at how many of those Edmonton-born players were drafted before 2006
End of explanation
"""
df[(df['Birth City'] == 'Minneapolis') & (df['Draft'] < 2006.0)].head()
"""
Explanation: Let's look at how many of those Minneapolis-born players were drafted before 2006
End of explanation
"""
usa_concussion = df[(df['Ctry'] == 'USA') & (df['Injury'] == 'Concussion')]
usa_concussion[["First Name", "Last Name"]]
"""
Explanation: Concussions...that's always a fun topic. Let's look at the players from each country that reported a concussion. We'll start with the United States:
End of explanation
"""
usa_mystery_injury = df[(df['Ctry'] == 'USA') & (df['Injury'] == 'Undisclosed')]
usa_mystery_injury[["First Name", "Last Name"]]
usa_concussion
"""
Explanation: Hmmm... only two reported concussions in professional hockey?! highly doubtful...let's look at the injuries that were reported as 'Undisclosed' and call them mystery injuries:
End of explanation
"""
can_concussion = df[(df['Ctry'] == 'CAN') & (df['Injury'] == 'Concussion')]
can_concussion[["First Name", "Last Name"]]
"""
Explanation: Let's look at Canada's reported concussions:
End of explanation
"""
can_mystery_injury = df[(df['Ctry'] == 'CAN') & (df['Injury'] == 'Undisclosed')]
can_mystery_injury[["First Name", "Last Name"]]
"""
Explanation: Hmmm...not a lot either. Let's look at the "undisclosed" injuries that were reported:
End of explanation
"""
che_concussion = df[(df['Ctry'] == 'CHE') & (df['Injury'] == 'Concussion')]
che_concussion[["First Name", "Last Name"]]
"""
Explanation: Switzerland Concussions:
End of explanation
"""
che_mystery_injury = df[(df['Ctry'] == 'CHE') & (df['Injury'] == 'Undisclosed')]
che_mystery_injury[["First Name", "Last Name"]]
"""
Explanation: Switzerland "Undisclosed Injuries"
End of explanation
"""
swe_concussion = df[(df['Ctry'] == 'SWE') & (df['Injury'] == 'Concussion')]
swe_concussion[["First Name", "Last Name"]]
"""
Explanation: Sweden Concussions:
End of explanation
"""
swe_mystery_injury = df[(df['Ctry'] == 'SWE') & (df['Injury'] == 'Undisclosed')]
swe_mystery_injury[["First Name", "Last Name"]]
"""
Explanation: Sweden "Undisclosed" Injuries
End of explanation
"""
deu_concussion = df[(df['Ctry'] == 'DEU') & (df['Injury'] == 'Concussion')]
deu_concussion[["First Name", "Last Name"]]
"""
Explanation: Germany Concussions:
End of explanation
"""
deu_mystery_injury = df[(df['Ctry'] == 'DEU') & (df['Injury'] == 'Undisclosed')]
deu_mystery_injury[["First Name", "Last Name"]]
"""
Explanation: Germany "Undisclosed" Injuries:
End of explanation
"""
cze_concussion= df[(df['Ctry'] == 'CZE') & (df['Injury'] == 'Concussion')]
cze_concussion[["First Name", "Last Name"]]
"""
Explanation: Czech Republic Concussions:
End of explanation
"""
cze_mystery_injury = df[(df['Ctry'] == 'CZE') & (df['Injury'] == 'Undisclosed')]
cze_mystery_injury[["First Name", "Last Name"]]
"""
Explanation: Czech Republic "Undisclosed Injuries"
End of explanation
"""
rus_concussion = df[(df['Ctry'] == 'RUS') & (df['Injury'] == 'Concussion')]
rus_concussion[["First Name", "Last Name"]]
"""
Explanation: Russia Concussions:
End of explanation
"""
rus_mystery_injury = df[(df['Ctry'] == 'RUS') & (df['Injury'] == 'Undisclosed')]
rus_mystery_injury[["First Name", "Last Name"]]
"""
Explanation: Russia "Undisclosed Injuries"
End of explanation
"""
ltu_concussion = df[(df['Ctry'] == 'LTU') & (df['Injury'] == 'Concussion')]
ltu_concussion[["First Name", "Last Name"]]
"""
Explanation: Lithuania Concussions
End of explanation
"""
ltu_mystery_injury = df[(df['Ctry'] == 'LTU') & (df['Injury'] == 'Undisclosed')]
ltu_mystery_injury[["First Name", "Last Name"]]
"""
Explanation: Lithuania "Undisclosed Injuries"
End of explanation
"""
nor_concussion = df[(df['Ctry'] == 'NOR') & (df['Injury'] == 'Concussion')]
nor_concussion[["First Name", "Last Name"]]
"""
Explanation: Norway Concussions
End of explanation
"""
nor_mystery_injury = df[(df['Ctry'] == 'NOR') & (df['Injury'] == 'Undisclosed')]
nor_mystery_injury[["First Name", "Last Name"]]
df
"""
Explanation: Norway "Undisclosed" Injuries
End of explanation
"""
birthdate = df['DOB']  # raw birth dates; the two-digit birth year follows an apostrophe
birthdate.head()
df['birthyear'] = df['DOB'].astype(str).str.split("'").str.get(1).astype(int)
df
"""
Explanation: Let's look at how old the players are (a short sketch follows after this cell):
End of explanation
"""
young_usa_players = df[(df['Ctry'] == 'USA') & (df['birthyear'] >= 94 )]
young_usa_players[["First Name", "Last Name"]]
"""
Explanation: Young Players (24 years old or younger) for the United States:
End of explanation
"""
young_can_players = df[(df['Ctry'] == 'CAN') & (df['birthyear'] >= 94 )]
young_can_players[["First Name", "Last Name"]]
"""
Explanation: Young Players (24 years old or younger) for Canada:
End of explanation
"""
old_usa_players = df[(df['Ctry'] == 'USA') & (df['birthyear'] <= 80 )]
old_usa_players[["First Name", "Last Name"]]
"""
Explanation: Old Players (36 years old or older) for the United States:
End of explanation
"""
old_can_players = df[(df['Ctry'] == 'CAN') & (df['birthyear'] <= 80 )]
old_can_players[["First Name", "Last Name"]]
"""
Explanation: Old Players (36 years old or older) for Canada:
End of explanation
"""
df['HT'].describe()
df['Wt'].describe()
"""
Explanation: Let's examine the correlation between height and weight
End of explanation
"""
plt.style.use('ggplot')
df.plot(kind='scatter', x='Wt', y='HT')
"""
Explanation: And a visual of the correlation...nice:
End of explanation
"""
df['S'].value_counts()
df.groupby(['Ctry', 'S']).agg(['count'])
"""
Explanation: Let's examine how many lefties versus righties (in shooting) each country has:
End of explanation
"""
usa_left_shot = df[(df['Ctry'] == 'USA') & (df['S'] == 'L')]
usa_left_shot[["First Name", "Last Name"]]
can_left_shot = df[(df['Ctry'] == 'CAN') & (df['S'] == 'L')]
can_left_shot[["First Name", "Last Name"]]
usa_right_shot = df[(df['Ctry'] == 'USA') & (df['S'] == 'R')]
usa_right_shot[["First Name", "Last Name"]]
can_right_shot = df[(df['Ctry'] == 'CAN') & (df['S'] == 'R')]
can_right_shot[["First Name", "Last Name"]]
"""
Explanation: Interesting...Canada has significantly more left-handed shooters (280) than right-handed shooters. Meanwhile, the USA is pretty even with 110 lefties and 107 righties.
End of explanation
"""
plt.style.use('seaborn-deep')
df.head(5).plot(kind='bar', x='Ctry', y='Draft')
df
"""
Explanation: Correlation between Country and Draft Year (an alternative view is sketched after this cell)
End of explanation
"""
|
robin-vjc/nsopy
|
notebooks/AnalyticalExample.ipynb
|
mit
|
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
%matplotlib notebook
%cd ..
"""
Explanation: nsopy: basic usage examples
Generally, the inputs required are
a first-order oracle of the problem: for a given $x_k \in \mathbb{X} \subseteq \mathbb{R}^n$, it returns $f(x_k)$ and a valid subgradient $\nabla f(x_k)$,
the projection function $\Pi_{\mathbb{X}}: \mathbb{R}^n \rightarrow \mathbb{R}^n$.
Example 1: simple analytical example
Consider the following problem from [1, Sec. 2.1.3]:
$$
\begin{array}{ll}
\min & f(x_1, x_2) = \left\{ \begin{array}{ll}
5(9x_1^2 + 16x_2^2)^{1/2} & \mathrm{if\ } x_1 > \left|x_2\right|, \\
9x_1 + 16\left|x_2\right| & \mathrm{if\ } x_1 \leq \left|x_2\right| \\
\end{array} \right. \\
\mathrm{s.t.} & -3 \leq x_1, x_2 \leq 3.
\end{array}
$$
This problem is interesting because a common gradient algorithm with backtracking initialized anywhere where in the set
$\left\{(x_1,x_2) \left| \ x_1 > \left|x_2\right| > (9/16)^2\left|x_1\right| \right. \right\}$
fails to converge to the optimum (-3,0), by remaining stuck at (0,0), even though it never touches any point where the function is nondifferentiable, see discussion in [1, Sec. 2.1.3]. Here we test our methods on this problem.
We write the oracle and projection as
End of explanation
"""
def oracle(x):
assert -3 <= x[0] <= 3, 'oracle must be queried within X'
assert -3 <= x[1] <= 3, 'oracle must be queried within X'
# compute the function value and a subgradient
if x[0] > abs(x[1]):
f_x = 5*(9*x[0]**2 + 16*x[1]**2)**(float(1)/float(2))
diff_f_x = np.array([float(9*5*x[0])/np.sqrt(9*x[0]**2 + 16*x[1]**2),
                             float(16*5*x[1])/np.sqrt(9*x[0]**2 + 16*x[1]**2)])
else:
f_x = 9*x[0] + 16*abs(x[1])
if x[1] >= 0:
diff_f_x = np.array([9, 16], dtype=float)
else:
diff_f_x = np.array([9, -16], dtype=float)
return 0, -f_x, -diff_f_x # return negation to minimize
def projection_function(x):
# projection on the box is simply saturating the entries
return np.array([min(max(x[0],-3),3), min(max(x[1],-3),3)])
"""
Explanation: Important Note: all methods have been devised to solve the maximization of concave functions. To minimize (as in this case), we just need to negate the oracle's returns, i.e., the objective value $f(x_k)$ and the subgradient $\nabla f(x_k)$.
End of explanation
"""
from nsopy import SGMDoubleSimpleAveraging as DSA
from nsopy import SGMTripleAveraging as TA
from nsopy import SubgradientMethod as SG
from nsopy import UniversalPGM as UPGM
from nsopy import UniversalDGM as UDGM
from nsopy import UniversalFGM as UFGM
from nsopy import GenericDualMethodLogger
# method = DSA(oracle, projection_function, dimension=2, gamma=0.5)
# method = TA(oracle, projection_function, dimension=2, variant=2, gamma=0.5)
# method = SG(oracle, projection_function, dimension=2)
method = UPGM(oracle, projection_function, dimension=2, epsilon=10, averaging=True)
# method = UDGM(oracle, projection_function, dimension=2, epsilon=1.0)
# method = UFGM(oracle, projection_function, dimension=2, epsilon=1.0)
method_logger = GenericDualMethodLogger(method)
# start from a different initial point
x_0 = np.array([2.01,2.01])
method.lambda_hat_k = x_0
for iteration in range(100):
method.dual_step()
"""
Explanation: We can now solve it by applying one of the several methods available:
Note: try to change the method and see for yourself how their trajectories differ!
End of explanation
"""
box = np.linspace(-3, 3, 31)
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
ax.plot_trisurf(np.array([x_1 for x_1 in box for x_2 in box]),
np.array([x_2 for x_1 in box for x_2 in box]),
np.array([-oracle([x_1, x_2])[1] for x_1 in box for x_2 in box]))
ax.set_xlabel('$x_1$')
ax.set_ylabel('$x_2$')
ax.set_zlabel('$f(x)$')
plt.plot([x[0] for x in method_logger.lambda_k_iterates],
[x[1] for x in method_logger.lambda_k_iterates],
[-f_x for f_x in method_logger.d_k_iterates], 'r.-')
"""
Explanation: And finally plot the result:
End of explanation
"""
|
NekuSakuraba/my_capstone_research
|
subjects/Multivariate t-distribution.ipynb
|
mit
|
from numpy.linalg import inv
import numpy as np
from math import pi, sqrt, gamma
from scipy.stats import t
import matplotlib.pyplot as plt
%matplotlib inline
"""
Explanation: https://stackoverflow.com/questions/29798795/multivariate-student-t-distribution-with-python
https://docs.scipy.org/doc/scipy-0.19.0/reference/generated/scipy.stats.t.html
End of explanation
"""
def my_t(x, df):
_ = (df + 1.)/2.
return gamma(_) / (sqrt(pi* df) * gamma(df/2.) * (1. + x**2/df) ** (_))
def my_t(x, df):
_ = lambda x : (df + x)/2.
return gamma(_(1)) / (sqrt(pi* df) * gamma(_(0)) * (1. + x**2/df) ** (_(1)))
my_t(0, 2.74)
rv = t(2.74)
rv.pdf(0)
"""
Explanation: t-distribution
$$
\frac{\Gamma(\frac{\mathbf{v} + 1}{2})}{\sqrt{\mathbf{v} \pi} \times \Gamma(\frac{\mathbf{v}}{2})} (1 + \frac{x^2}{\mathbf{v}})^{-\frac{\mathbf{v} + 1}{2}}
$$
End of explanation
"""
def squared_distance(x, mu, sigma):
diff = (x - mu)
return diff.dot(inv(sigma)).dot(diff.T)
def multivariate(x, mu, sigma, df):
p = x.shape[1]
f = lambda _ : (df+_)/2.
det = np.linalg.det(sigma) ** (-1./2.)
param0 = gamma(f(p))
param1 = (np.pi * df) ** (-p/2.)
param2 = gamma(f(0)) ** -1.
delta = x - mu
param3 = 1. + (delta.dot(inv(sigma)).dot(delta.T))/df
param3 = param3 ** (-f(p))
#return param3
return param0 * det * param1 * param2 * param3
"""
Explanation: Multivariate t-distribution
$$
f(y; \mu, \Sigma, \mathbf{v}) =
\frac{
\Gamma(\frac{\mathbf{v} + p}{2}) |\Sigma|^{-1/2}}
{
(\pi \mathbf{v})^{p/2}
\Gamma(\frac{\mathbf{v}}{2})
(1 + \delta(y, \mu; \Sigma)/\mathbf{v})^{\frac{\mathbf{v}+p}{2}}
}
$$
where
$$
\delta(y, \mu; \Sigma) = (y-\mu)^T \Sigma^{-1} (y-\mu)
$$
End of explanation
"""
np.linalg.det([[1,0],[0,1]]) ** (-1./2.)
x = np.array([1,1])
mu = np.array([3,3])
dec = np.linalg.cholesky([[1,0],[0,1]])
(np.linalg.solve(dec, x - mu) ** 2).sum(axis=0)
multivariate(np.array([[1,1]]), [3,3], [[1,0],[0,1]], 1)
x1, y1 = np.mgrid[-2.5:2.5:.01, -2.5:2.5:.01]
XY = []
for xy in zip(x1, y1):
sample = np.array(xy).T
xy_ = []
for _ in sample:
l = multivariate(_.reshape(1,-1), [.0,.0],[[1.,0.],[0,1.]],100)
xy_.extend(l[0])
XY.append(xy_)
XY = np.array(XY)
print(XY.shape)
plt.contour(x1, y1, XY)
plt.hlines(1, -2.5, 2.5)
plt.vlines(1, -2.5, 2.5)
plt.show()
"""
Explanation:
End of explanation
"""
x1, y1 = np.mgrid[-2.5:2.5:.01, -2.5:2.5:.01]
XY = []
for xy in zip(x1, y1):
sample = np.array(xy).T
xy_ = []
for _ in sample:
l = multivariate(_.reshape(1,-1), [.0,.0],[[.1,.0],[.0,.2]],100)
xy_.extend(l[0])
XY.append(xy_)
XY = np.array(XY)
print(XY.shape)
plt.contour(x1, y1, XY)
plt.show()
"""
Explanation:
End of explanation
"""
#written by Enzo Michelangeli, style changes by josef-pktd
# Student's T random variable
def multivariate_t_rvs(m, S, df=np.inf, n=1):
'''generate random variables of multivariate t distribution
Parameters
----------
m : array_like
mean of random variable, length determines dimension of random variable
S : array_like
square array of covariance matrix
df : int or float
degrees of freedom
n : int
number of observations, return random array will be (n, len(m))
Returns
-------
rvs : ndarray, (n, len(m))
each row is an independent draw of a multivariate t distributed
random variable
'''
m = np.asarray(m)
d = len(m)
if df == np.inf:
x = 1.
else:
x = np.random.chisquare(df, n)/df
z = np.random.multivariate_normal(np.zeros(d),S,(n,))
return m + z/np.sqrt(x)[:,None] # same output format as random.multivariate_normal
x1 = multivariate_t_rvs([0,0], [[1,0],[0,1]],9, 300)
x2 = multivariate_t_rvs([1.5,1.5], [[.5,1.],[.1,.7]],9, 300)
plt.scatter(x1[:,0], x1[:,1], alpha=.5)
plt.scatter(x2[:,0], x2[:,1], alpha=.5)
"""
Explanation: https://github.com/statsmodels/statsmodels/blob/master/statsmodels/sandbox/distributions/multivariate.py#L90
End of explanation
"""
|
GoogleCloudPlatform/training-data-analyst
|
courses/machine_learning/deepdive/supplemental_gradient_boosting/c_boosted_trees_model_understanding.ipynb
|
apache-2.0
|
import time
# We will use some np and pandas for dealing with input data.
import numpy as np
import pandas as pd
# And of course, we need tensorflow.
import tensorflow as tf
from matplotlib import pyplot as plt
from IPython.display import clear_output
tf.__version__
"""
Explanation: Model understanding and interpretability
In this colab, we will:
- learn how to interpret model results and reason about the features
- visualize the model results
End of explanation
"""
tf.logging.set_verbosity(tf.logging.ERROR)
tf.set_random_seed(123)
# Load dataset.
dftrain = pd.read_csv('https://storage.googleapis.com/tf-datasets/titanic/train.csv')
dfeval = pd.read_csv('https://storage.googleapis.com/tf-datasets/titanic/eval.csv')
y_train = dftrain.pop('survived')
y_eval = dfeval.pop('survived')
# Feature columns.
fcol = tf.feature_column
CATEGORICAL_COLUMNS = ['sex', 'n_siblings_spouses', 'parch', 'class', 'deck',
'embark_town', 'alone']
NUMERIC_COLUMNS = ['age', 'fare']
def one_hot_cat_column(feature_name, vocab):
return fcol.indicator_column(
fcol.categorical_column_with_vocabulary_list(feature_name,
vocab))
fc = []
for feature_name in CATEGORICAL_COLUMNS:
# Need to one-hot encode categorical features.
vocabulary = dftrain[feature_name].unique()
fc.append(one_hot_cat_column(feature_name, vocabulary))
for feature_name in NUMERIC_COLUMNS:
fc.append(fcol.numeric_column(feature_name,
dtype=tf.float32))
# Input functions.
def make_input_fn(X, y, n_epochs=None):
def input_fn():
dataset = tf.data.Dataset.from_tensor_slices((X.to_dict(orient='list'), y))
# For training, cycle thru dataset as many times as need (n_epochs=None).
dataset = (dataset
.repeat(n_epochs)
.batch(len(y))) # Use entire dataset since this is such a small dataset.
return dataset
return input_fn
# Training and evaluation input functions.
train_input_fn = make_input_fn(dftrain, y_train)
eval_input_fn = make_input_fn(dfeval, y_eval, n_epochs=1)
"""
Explanation: Below we demonstrate both local and global model interpretability for gradient boosted trees.
Local interpretability refers to an understanding of a model’s predictions at the individual example level, while global interpretability refers to an understanding of the model as a whole.
For local interpretability, we show how to create and visualize per-instance contributions using the technique outlined in Palczewska et al and by Saabas in Interpreting Random Forests (this method is also available in scikit-learn for Random Forests in the treeinterpreter package). To distinguish this from feature importances, we refer to these values as directional feature contributions (DFCs).
For global interpretability we show how to retrieve and visualize gain-based feature importances, permutation feature importances and also show aggregated DFCs.
Setup
Load dataset
We will be using the titanic dataset, where the goal is to predict passenger survival given characteristics such as gender, age, class, etc.
End of explanation
"""
params = {
'n_trees': 50,
'max_depth': 3,
'n_batches_per_layer': 1,
# You must enable center_bias = True to get DFCs. This will force the model to
# make an initial prediction before using any features (e.g. use the mean of
# the training labels for regression or log odds for classification when
# using cross entropy loss).
'center_bias': True
}
est = tf.estimator.BoostedTreesClassifier(fc, **params)
# Train model.
est.train(train_input_fn)
# Evaluation.
results = est.evaluate(eval_input_fn)
clear_output()
pd.Series(results).to_frame()
"""
Explanation: Interpret model
Local interpretability
Output directional feature contributions (DFCs) to explain individual predictions, using the approach outlined in Palczewska et al and by Saabas in Interpreting Random Forests. The DFCs are generated with:
pred_dicts = list(est.experimental_predict_with_explanations(pred_input_fn))
End of explanation
"""
import matplotlib.pyplot as plt
import seaborn as sns
sns_colors = sns.color_palette('colorblind')
pred_dicts = list(est.experimental_predict_with_explanations(eval_input_fn))
def clean_feature_names(df):
"""Boilerplate code to cleans up feature names -- this is unneed in TF 2.0"""
df.columns = [v.split(':')[0].split('_indi')[0] for v in df.columns.tolist()]
df = df.T.groupby(level=0).sum().T
return df
# Create DFC Pandas dataframe.
labels = y_eval.values
probs = pd.Series([pred['probabilities'][1] for pred in pred_dicts])
df_dfc = pd.DataFrame([pred['dfc'] for pred in pred_dicts])
df_dfc.columns = est._names_for_feature_id
df_dfc = clean_feature_names(df_dfc)
df_dfc.describe()
# Sum of DFCs + bias == probabality.
bias = pred_dicts[0]['bias']
dfc_prob = df_dfc.sum(axis=1) + bias
np.testing.assert_almost_equal(dfc_prob.values,
probs.values)
"""
Explanation: Local interpretability
Next you will output the directional feature contributions (DFCs) to explain individual predictions using the approach outlined in Palczewska et al and by Saabas in Interpreting Random Forests (this method is also available in scikit-learn for Random Forests in the treeinterpreter package). The DFCs are generated with:
pred_dicts = list(est.experimental_predict_with_explanations(pred_input_fn))
(Note: The method is named experimental as we may modify the API before dropping the experimental prefix.)
End of explanation
"""
import seaborn as sns # Make plotting nicer.
sns_colors = sns.color_palette('colorblind')
def plot_dfcs(example_id):
    label, prob = labels[example_id], probs[example_id]
    example = df_dfc.iloc[example_id]  # Choose the example_id-th example from the evaluation set.
    TOP_N = 8  # View top 8 features.
    sorted_ix = example.abs().sort_values()[-TOP_N:].index
    ax = example[sorted_ix].plot(kind='barh', color='g', figsize=(10,5))
    ax.grid(False, axis='y')
    plt.title('Feature contributions for example {}\n pred: {:1.2f}; label: {}'.format(example_id, prob, label))
    plt.xlabel('Contribution to predicted probability')
ID = 102
plot_dfcs(ID)
"""
Explanation: Plot results
End of explanation
"""
def plot_example_pretty(example):
"""Boilerplate code for better plotting :)"""
def _get_color(value):
"""To make positive DFCs plot green, negative DFCs plot red."""
green, red = sns.color_palette()[2:4]
if value >= 0: return green
return red
def _add_feature_values(feature_values, ax):
"""Display feature's values on left of plot."""
x_coord = ax.get_xlim()[0]
OFFSET = 0.15
for y_coord, (feat_name, feat_val) in enumerate(feature_values.items()):
t = plt.text(x_coord, y_coord - OFFSET, '{}'.format(feat_val), size=12)
t.set_bbox(dict(facecolor='white', alpha=0.5))
from matplotlib.font_manager import FontProperties
font = FontProperties()
font.set_weight('bold')
t = plt.text(x_coord, y_coord + 1 - OFFSET, 'feature\nvalue',
fontproperties=font, size=12)
TOP_N = 8 # View top 8 features.
sorted_ix = example.abs().sort_values()[-TOP_N:].index # Sort by magnitude.
example = example[sorted_ix]
colors = example.map(_get_color).tolist()
ax = example.to_frame().plot(kind='barh',
color=[colors],
legend=None,
alpha=0.75,
figsize=(10,6))
ax.grid(False, axis='y')
ax.set_yticklabels(ax.get_yticklabels(), size=14)
_add_feature_values(dfeval.iloc[ID].loc[sorted_ix], ax)
ax.set_title('Feature contributions for example {}\n pred: {:1.2f}; label: {}'.format(ID, probs[ID], labels[ID]))
ax.set_xlabel('Contribution to predicted probability', size=14)
plt.show()
return ax
# Plot results.
ID = 102
example = df_dfc.iloc[ID] # Choose ith example from evaluation set.
ax = plot_example_pretty(example)
"""
Explanation: ??? How would you explain the above plot in plain English?
Prettier plotting
Color codes based on directionality and adds feature values on the figure. Please do not worry about the details of the plotting code :)
End of explanation
"""
features, importances = est.experimental_feature_importances(normalize=True)
df_imp = pd.DataFrame(importances, columns=['importances'], index=features)
# For plotting purposes. This is not needed in TF 2.0.
df_imp = clean_feature_names(df_imp.T).T.sort_values('importances', ascending=False)
# Visualize importances.
N = 8
ax = df_imp.iloc[0:N][::-1]\
.plot(kind='barh',
color=sns_colors[0],
title='Gain feature importances',
figsize=(10, 6))
ax.grid(False, axis='y')
plt.tight_layout()
"""
Explanation: Global feature importances
Gain-based feature importances using est.experimental_feature_importances
Aggregate DFCs using est.experimental_predict_with_explanations
Permutation importances (a sketch follows after this cell)
Gain-based feature importances measure the loss change when splitting on a particular feature, while permutation feature importances are computed by evaluating model performance on the evaluation set by shuffling each feature one-by-one and attributing the change in model performance to the shuffled feature.
In general, permutation feature importances are preferred to gain-based feature importances, though both methods can be unreliable in situations where potential predictor variables vary in their scale of measurement or their number of categories and when features are correlated (source). Check out this article for an in-depth overview and great discussion on different feature importance types.
1. Gain-based feature importances
End of explanation
"""
# Plot.
dfc_mean = df_dfc.abs().mean()
sorted_ix = dfc_mean.abs().sort_values()[-8:].index # Average and sort by absolute.
ax = dfc_mean[sorted_ix].plot(kind='barh',
color=sns_colors[1],
title='Mean |directional feature contributions|',
figsize=(10, 6))
ax.grid(False, axis='y')
"""
Explanation: ??? What does the x axis represent? -- A. It represents relative importance. Specifically, the average reduction in loss that occurs when a split occurs on that feature.
??? Can we completely trust these results and the magnitudes? -- A. The results can be misleading because variables are correlated.
2. Average absolute DFCs
We can also average the absolute values of DFCs to understand impact at a global level.
End of explanation
"""
age = pd.Series(df_dfc.age.values, index=dfeval.age.values).sort_index()
sns.jointplot(age.index.values, age.values);
"""
Explanation: We can also see how DFCs vary as a feature value varies.
End of explanation
"""
from numpy.random import uniform, seed
from matplotlib.mlab import griddata
# Create fake data
seed(0)
npts = 5000
x = uniform(-2, 2, npts)
y = uniform(-2, 2, npts)
z = x*np.exp(-x**2 - y**2)
# Prep data for training.
df = pd.DataFrame({'x': x, 'y': y, 'z': z})
xi = np.linspace(-2.0, 2.0, 200),
yi = np.linspace(-2.1, 2.1, 210),
xi,yi = np.meshgrid(xi, yi)
df_predict = pd.DataFrame({
'x' : xi.flatten(),
'y' : yi.flatten(),
})
predict_shape = xi.shape
def plot_contour(x, y, z, **kwargs):
# Grid the data.
plt.figure(figsize=(10, 8))
# Contour the gridded data, plotting dots at the nonuniform data points.
CS = plt.contour(x, y, z, 15, linewidths=0.5, colors='k')
CS = plt.contourf(x, y, z, 15,
vmax=abs(zi).max(), vmin=-abs(zi).max(), cmap='RdBu_r')
plt.colorbar() # Draw colorbar.
# Plot data points.
plt.xlim(-2, 2)
plt.ylim(-2, 2)
"""
Explanation: Visualizing the model's prediction surface
Let's first simulate/create training data using the following formula:
$z=x* e^{-x^2 - y^2}$
Where $z$ is the dependent variable we are trying to predict and $x$ and $y$ are the features.
End of explanation
"""
zi = griddata(x, y, z, xi, yi, interp='linear')
plot_contour(xi, yi, zi)
plt.scatter(df.x, df.y, marker='.')
plt.title('Contour on training data')
plt.show()
def predict(est):
"""Predictions from a given estimator."""
predict_input_fn = lambda: tf.data.Dataset.from_tensors(dict(df_predict))
preds = np.array([p['predictions'][0] for p in est.predict(predict_input_fn)])
return preds.reshape(predict_shape)
"""
Explanation: We can visualize our function:
End of explanation
"""
fc = [tf.feature_column.numeric_column('x'),
tf.feature_column.numeric_column('y')]
train_input_fn = make_input_fn(df, df.z)
est = tf.estimator.LinearRegressor(fc)
est.train(train_input_fn, max_steps=500);
plot_contour(xi, yi, predict(est))
"""
Explanation: First let's try to fit a linear model to the data.
End of explanation
"""
for n_trees in [1,2,3,10,30,50,100,200]:
est = tf.estimator.BoostedTreesRegressor(fc,
n_batches_per_layer=1,
max_depth=4,
n_trees=n_trees)
est.train(train_input_fn)
plot_contour(xi, yi, predict(est))
plt.text(-1.8, 2.1, '# trees: {}'.format(n_trees), color='w', backgroundcolor='black', size=20)
"""
Explanation: Not very good at all...
??? Why is the linear model not performing well for this problem? Can you think of how to improve it just using a linear model?
Next let's try to fit a GBDT model to it and try to understand what the model does
End of explanation
"""
|
google-research/proteinfer
|
colabs/Random_EC.ipynb
|
apache-2.0
|
%tensorflow_version 1
!git clone https://github.com/google-research/proteinfer
%cd proteinfer
!pip3 install -qr requirements.txt
import pandas as pd
import tensorflow
import inference
import parenthood_lib
import baseline_utils,subprocess
import shlex
import tqdm
import sklearn
import numpy as np
import utils
import colab_evaluation
import plotly.express as px
from plotnine import ggplot, geom_point, geom_ribbon, geom_line, aes, stat_smooth, facet_wrap, xlim,coord_cartesian,theme_bw,labs,ggsave
!wget -qN https://storage.googleapis.com/brain-genomics-public/research/proteins/proteinfer/models/zipped_models/noxpnd_cnn_swissprot_ec_random_swiss-cnn_for_swissprot_ec_random-13685140.tar.gz
!tar xzf noxpnd_cnn_swissprot_ec_random_swiss-cnn_for_swissprot_ec_random-13685140.tar.gz
!wget -qN https://storage.googleapis.com/brain-genomics-public/research/proteins/proteinfer/colab_support/parenthood.json.gz
!wget -qN https://storage.googleapis.com/brain-genomics-public/research/proteins/proteinfer/blast_baseline/fasta_files/SWISSPROT_RANDOM_EC/eval_test.fasta
"""
Explanation: Setup
Get files / dependencies
End of explanation
"""
vocab = inference.Inferrer(
'noxpnd_cnn_swissprot_ec_random_swiss-cnn_for_swissprot_ec_random-13685140'
).get_variable('label_vocab:0').astype(str)
label_normalizer = parenthood_lib.get_applicable_label_dict(
'parenthood.json.gz')
"""
Explanation: Load vocabulary and parenthood information
End of explanation
"""
def download_inference_results(run_name):
file_shard_names = [
'-{:05d}-of-00064.predictions.gz'.format(i) for i in range(64)
]
subprocess.check_output(
shlex.split(f'mkdir -p ./inference_results/{run_name}/'))
for shard_name in tqdm.tqdm(file_shard_names,
position=0,
desc="Downloading"):
subprocess.check_output(
shlex.split(
f'wget https://storage.googleapis.com/brain-genomics-public/research/proteins/proteinfer/swissprot_inference_results/{run_name}/{shard_name} -O ./inference_results/{run_name}/{shard_name}'
))
return
"""
Explanation: Define a helper function to download inference results
End of explanation
"""
min_decision_threshold = 1e-10
download_inference_results(f"ec_random_test")
predictions_df = colab_evaluation.get_normalized_inference_results(
"inference_results/ec_random_test",
vocab,
label_normalizer,
min_decision_threshold=min_decision_threshold)
test_ground_truth = baseline_utils.load_ground_truth('eval_test.fasta')
ground_truth_df = colab_evaluation.make_tidy_df_from_ground_truth(
test_ground_truth)
del test_ground_truth
"""
Explanation: Downloading predictions and getting them ready for analysis
End of explanation
"""
def get_first_level_of_ec_hierarchy(ec):
ec_group_names = {
"EC:1": "Oxidoreductases",
"EC:2": "Transferases",
"EC:3": "Hydrolases",
"EC:4": "Lyases",
"EC:5": "Isomerases",
"EC:6": "Ligases",
"EC:7": "Translocases"
}
return ec_group_names[ec.split(".")[0]]
top_level_ec_grouping = {x: get_first_level_of_ec_hierarchy(x) for x in vocab}
colab_evaluation.apply_threshold_and_return_stats(
predictions_df, ground_truth_df, grouping=top_level_ec_grouping)
ec_pr_curves = colab_evaluation.get_pr_curve_df(
predictions_df, ground_truth_df, grouping=top_level_ec_grouping)
fig = px.line(ec_pr_curves,
x="recall", y="precision", color="group")
fig.update_layout(template="plotly_white", title="Performance by EC class")
fig.update_yaxes(range=(0.92, 1))
"""
Explanation: Analysis
Now we can get some statistics about our predictions. Let's start with a simple calculation of precision, recall and F1 for the whole dataset at a threshold of 0.5.
What happens in different EC classes - is there differential performance?
End of explanation
"""
def get_level_of_hierarchy(ec):
num_of_dashes = ec.count("-")
level_of_hierarchy = 4 - num_of_dashes
return level_of_hierarchy
level_of_hierachy_grouping = {x: get_level_of_hierarchy(x) for x in vocab}
level_data = colab_evaluation.apply_threshold_and_return_stats(
predictions_df, ground_truth_df, grouping=level_of_hierachy_grouping)
ggplot(level_data, aes(x="group", y="f1")) + geom_point() + geom_point(
) + geom_line() + theme_bw() + labs(
x="Level of hierarchy", y="F1 score") + coord_cartesian(ylim=[0.95, 0.99])
"""
Explanation: And what about at different levels of the EC hierarchy?
End of explanation
"""
cnn_pr_data = colab_evaluation.get_pr_curve_df(predictions_df, ground_truth_df)
ggplot(cnn_pr_data.drop(index=0),
aes(x="recall", y="precision",
color="f1")) + geom_line() + geom_line() + coord_cartesian(
xlim=(0.96, 1)) + theme_bw() + labs(
x="Recall", y="Precision", color="F1 Score")
"""
Explanation: Now let's try varying the threshold to generate a precision-recall curve.
End of explanation
"""
cnn_pr_data.sort_values('f1', ascending=False)[:3]
"""
Explanation: What decision threshold maximises F1 score?
End of explanation
"""
min_decision_threshold = 1e-10
download_inference_results(f"ec_random_test_ens")
ens_predictions_df = colab_evaluation.get_normalized_inference_results(
"inference_results/ec_random_test_ens",
vocab,
label_normalizer,
min_decision_threshold=min_decision_threshold)
ens_cnn_pr_data = colab_evaluation.get_pr_curve_df(ens_predictions_df,
ground_truth_df)
ens_cnn_pr_data.sort_values('f1', ascending=False)[0:3]
cnn_pr_data['method'] = "CNN"
ens_cnn_pr_data['method'] = "CNN Ensemble"
method_comparison = pd.concat([cnn_pr_data, ens_cnn_pr_data],
ignore_index=True)
ggplot(method_comparison,
aes(x="recall", y="precision", color="method",
linetype="method")) + geom_line() + coord_cartesian(
xlim=(0.91, 1), ylim=(0.91, 1)) + theme_bw() + labs(
x="Recall", y="Precision", color="Method")
"""
Explanation: Now let's have a look at PR curves for each different top level group.
Load CNN ensemble predictions
End of explanation
"""
!wget -qN https://storage.googleapis.com/brain-genomics-public/research/proteins/proteinfer/blast_baseline/blast_output/random/blast_out_test.tsv
!wget -qN https://storage.googleapis.com/brain-genomics-public/research/proteins/proteinfer/blast_baseline/fasta_files/SWISSPROT_RANDOM_EC/eval_test.fasta
!wget -qN https://storage.googleapis.com/brain-genomics-public/research/proteins/proteinfer/blast_baseline/fasta_files/SWISSPROT_RANDOM_EC/train.fasta
train_ground_truth = colab_evaluation.make_tidy_df_from_ground_truth(baseline_utils.load_ground_truth('train.fasta')).rename(columns={"up_id":"train_seq_id"}).drop(columns=["gt"])
blast_out = colab_evaluation.read_blast_table("blast_out_test.tsv")
blast_df = blast_out.merge(train_ground_truth,
left_on="target",
right_on="train_seq_id")
blast_df.rename(columns={'bit_score': 'value', "query": "up_id"}, inplace=True)
min_decision_threshold = 0
blast_pr_data = colab_evaluation.get_pr_curve_df(blast_df, ground_truth_df)
blast_pr_data['method'] = 'BLAST'
cnn_pr_data['method'] = 'CNN'
ens_cnn_pr_data['method'] = 'Ensembled CNN'
method_comparison = pd.concat([
cnn_pr_data.drop(index=0),
ens_cnn_pr_data.drop(index=0),
blast_pr_data.drop(index=0)
],
ignore_index=True)
ggplot(method_comparison,
aes(x="recall", y="precision",
color="method")) + geom_line() + coord_cartesian(
xlim=(0.90, 1), ylim=(0.90, 1)) + theme_bw() + labs(
x="Recall", y="Precision", color="Method")
method_comparison.groupby("method")[['f1']].agg(max)
method_comparison.sort_values('f1',
ascending=False).drop_duplicates(['method'])
"""
Explanation: Blast comparison
Let's do the same sort of analysis for a BLAST baseline.
End of explanation
"""
def get_x_where_y_is_closest_to_z(df, x, y, z):
return df.iloc[(df[y] - z).abs().argsort()[:1]][x]
cnn_threshold = float(
get_x_where_y_is_closest_to_z(cnn_pr_data,
x="threshold",
y="recall",
z=0.96))
blast_threshold = float(
get_x_where_y_is_closest_to_z(blast_pr_data,
x="threshold",
y="recall",
z=0.96))
cnn_results = colab_evaluation.assign_tp_fp_fn(ens_predictions_df,
ground_truth_df, cnn_threshold)
blast_results = colab_evaluation.assign_tp_fp_fn(blast_df, ground_truth_df,
blast_threshold)
merged = cnn_results.merge(blast_results,
how="outer",
suffixes=("_ens_cnn", "_blast"),
left_on=["label", "up_id", "gt"],
right_on=["label", "up_id", "gt"])
blast_info = blast_out[['up_id', 'target', 'pc_identity']]
"""
Explanation: Let's investigate what's going on at the left hand side of the graph where the CNN and ensemble achieve greater precision than BLAST.
End of explanation
"""
merged.query("fp_blast==True and fp_ens_cnn==False").head()
"""
Explanation: Let's list some of the BLAST false-positives in case we want to investigate what's going on.
End of explanation
"""
blast_and_cnn_ensemble = ens_predictions_df.merge(blast_df,
how="outer",
suffixes=("_ens_cnn",
"_blast"),
left_on=["label", "up_id"],
right_on=["label", "up_id"])
blast_and_cnn_ensemble = blast_and_cnn_ensemble.fillna(False)
"""
Explanation: An ensemble of BLAST and ensembled-CNNs
We've seen that the CNN-ensemble and BLAST have different strengths - the CNN appears to have greater precision than BLAST at lower recalls, but BLAST has better recall at lower precisions. Can we combine these approaches to get a predictor with the best of both worlds?
End of explanation
"""
blast_and_cnn_ensemble['value'] = blast_and_cnn_ensemble[
'value_ens_cnn'] * blast_and_cnn_ensemble['value_blast']
blast_and_cnn_ensemble_pr = colab_evaluation.get_pr_curve_df(
blast_and_cnn_ensemble, ground_truth_df)
blast_and_cnn_ensemble_pr.f1.max()
blast_and_cnn_ensemble_pr['method'] = 'Ensemble of BLAST with Ensembled-CNN'
cnn_pr_data['method'] = 'CNN'
ens_cnn_pr_data['method'] = 'Ensembled CNN'
method_comparison = pd.concat([
cnn_pr_data.drop(index=0),
ens_cnn_pr_data.drop(index=0),
blast_pr_data.drop(index=0),
blast_and_cnn_ensemble_pr.drop(index=0)
],
ignore_index=True)
ggplot(method_comparison,
aes(x="recall", y="precision",
color="method")) + geom_line() + coord_cartesian(
xlim=(0.93, 1), ylim=(0.93, 1)) + theme_bw() + labs(
x="Recall", y="Precision", color="Method")
method_comparison = method_comparison.query("recall!=1.0")
fig = px.line(method_comparison, x="recall", y="precision", color="method")
fig.update_layout(template="plotly_white", title="Precision-recall by method")
fig.update_xaxes(range=(0.95, 1))
fig.update_yaxes(range=(0.95, 1))
fig.show()
json = fig.to_json(pretty=True)
with open("method.json", "w") as f:
f.write(json)
method_comparison.groupby("method")[['f1']].agg(max)
"""
Explanation: We will create a simple ensemble where the value of the predictor is simply the product of the probability assigned by the ensemble of neural networks and the bit-score linking this sequence to an example with this label by BLAST.
End of explanation
"""
import collections
def get_bootstrapped_pr_curves(predictions_df,
ground_truth_df,
grouping=None,
n=100,
method_label=None,
sample_with_replacement=True):
joined = predictions_df[predictions_df.value > 1e-10].merge(
ground_truth_df, on=['up_id', 'label'], how='outer')
unique_up_ids = joined['up_id'].unique()
pr_samples = []
for _ in tqdm.tqdm(range(n)):
sampled_up_ids = np.random.choice(unique_up_ids, len(unique_up_ids),
sample_with_replacement)
count_by_sample = collections.Counter(sampled_up_ids)
count_by_sample_ordered = [count_by_sample[x] for x in joined.up_id]
joined_sampled = pd.DataFrame(np.repeat(joined.values,
count_by_sample_ordered,
axis=0),
columns=joined.columns)
unique_suffixes_counter = collections.defaultdict(lambda: 0)
unique_suffixes = []
for row in joined_sampled.values:
lookup_key = (row[0], row[1])
unique_suffixes.append(unique_suffixes_counter[lookup_key])
unique_suffixes_counter[lookup_key] += 1
joined_sampled['up_id'] = [
f'{x}-{y}' for x, y in zip(joined_sampled.up_id, unique_suffixes)
]
pred = joined_sampled[joined_sampled['value'].notna()][[
'up_id', 'label', 'value'
]]
gt = joined_sampled[joined_sampled['gt'].notna()][[
'up_id', 'label', 'gt'
]]
pr_curves = colab_evaluation.get_pr_curve_df(pred,
gt,
grouping=grouping)
pr_curves.loc[pr_curves['threshold'] == 0.0, 'precision'] = 0
pr_curves.loc[pr_curves['threshold'] == 0.0, 'f1'] = 0
pr_curves['type'] = method_label
pr_samples.append(pr_curves)
return pr_samples
"""
Explanation: Bootstrapping
Defining functions
End of explanation
"""
n = 100
non_ensembled_prs = get_bootstrapped_pr_curves(predictions_df,
ground_truth_df,
n=n,
method_label="Single CNN")
ensembled_prs = get_bootstrapped_pr_curves(ens_predictions_df,
ground_truth_df,
n=n,
method_label="Ensemble of CNNs")
blast_prs = get_bootstrapped_pr_curves(blast_df,
ground_truth_df,
n=n,
method_label="BLAST")
blast_and_cnn_ensemble_prs = get_bootstrapped_pr_curves(
blast_and_cnn_ensemble,
ground_truth_df,
n=n,
method_label="Blast/CNN-Ensemble")
"""
Explanation: Perform calculations
End of explanation
"""
from scipy.interpolate import interp1d
def create_interpolated_df(single_curve):
interp_recall_fn = interp1d(single_curve.recall,
single_curve.precision,
bounds_error=False)
recall = np.linspace(0.0, 1, 5001)
interpolated_precisions = interp_recall_fn(recall)
return pd.DataFrame({
"type": single_curve.type.to_list()[0],
"group": single_curve.group.to_list()[0],
"precision": interpolated_precisions,
"recall": recall
})
curves = [
ensembled_prs, non_ensembled_prs, blast_and_cnn_ensemble_prs, blast_prs
]
dfs = []
for curve_set in curves:
for c2 in curve_set:
for group_name, df_group in c2.groupby("group"):
dfs.append(create_interpolated_df(df_group.query("precision>0")))
all = pd.concat(dfs)
curves = [
ensembled_prs, non_ensembled_prs, blast_and_cnn_ensemble_prs, blast_prs
]
dfs = []
def create_f1(single_curve):
return pd.DataFrame(
{
"type": single_curve.type.to_list()[0],
"group": single_curve.group.to_list()[0],
"f1": single_curve.f1.max()
},
index=[0])
for curve_set in curves:
for c2 in curve_set:
for group_name, df_group in c2.groupby("group"):
dfs.append(create_f1(df_group))
f1 = pd.concat(dfs)
def lower_func(x):
return x.quantile(0.025)
def upper_func(x):
return x.quantile(0.975)
f1_data = f1.groupby(['type',
'group']).agg(lower=("f1", lower_func),
upper=("f1", upper_func)).reset_index()
f1_data
f1
def lower_func(x):
return x.quantile(0.025)
def upper_func(x):
return x.quantile(0.975)
for_graph = all.groupby(['type', 'group', 'recall'
]).agg(lower=("precision", lower_func),
upper=("precision", upper_func)).reset_index()
a = get_bootstrapped_pr_curves(predictions_df,
ground_truth_df,
n=1,
sample_with_replacement=False,
method_label="Single CNN")[0]
b = get_bootstrapped_pr_curves(ens_predictions_df,
ground_truth_df,
n=1,
sample_with_replacement=False,
method_label="Ensemble of CNNs")[0]
c = get_bootstrapped_pr_curves(blast_df,
ground_truth_df,
n=1,
sample_with_replacement=False,
method_label="BLAST")[0]
d = get_bootstrapped_pr_curves(blast_and_cnn_ensemble,
ground_truth_df,
n=1,
sample_with_replacement=False,
method_label="Blast/CNN-Ensemble")[0]
all_single = pd.concat([a, b, c, d]).query("precision>0")
"""
Explanation: Interpolate curves
End of explanation
"""
import plotly.graph_objects as go
fig = go.Figure()
def get_color(index, transparent):
colors = {
'Single CNN': [150, 0, 0],
'Ensemble of CNNs': [0, 125, 125],
'Blast/CNN-Ensemble': [0, 200, 0],
'BLAST': [125, 0, 255]
}
transparency = 0.2 if transparent else 1
return f"rgba({colors[index][0]}, {colors[index][1]}, {colors[index][2]}, {transparency})"
for the_type, new in for_graph.groupby('type'):
fig.add_trace(
go.Scatter(
x=new['recall'],
y=new['upper'],
mode='lines',
showlegend=False,
line=dict(width=0.0, color=get_color(the_type, False)),
name="",
hoverinfo='skip',
))
fig.add_trace(
go.Scatter(
x=new['recall'],
y=new['lower'],
name=the_type,
hoverinfo='skip',
showlegend=False,
line=dict(width=0.0, color=get_color(the_type, False)),
fill='tonexty',
fillcolor=get_color(the_type, True),
))
for the_type, new in all_single.groupby('type'):
fig.add_trace(
go.Scatter(x=new['recall'],
y=new['precision'],
name=the_type,
line=dict(width=1, color=get_color(the_type, False))))
fig.update_xaxes(title="Recall", range=[0.90, 1])
fig.update_yaxes(title="Precision", range=[0.90, 1])
fig.update_layout(template="plotly_white")
fig.update_layout(legend_title_text='Method')
fig.update_layout(title="Precision and recall by method", )
fig.show()
fig=ggplot(all_single,
aes(x="recall", y="precision",
color="type")) + geom_line() + coord_cartesian(
xlim=(0.9, 1), ylim=(0.9, 1)) + theme_bw() + labs(
x="Recall", y="Precision", color="Method",fill="Method") +geom_ribbon(aes(ymin="lower",ymax="upper",y="lower",fill="type"),data=for_graph,color=None,alpha=0.25)
fig
ggsave(fig,"random_truncated.pdf")
fig=ggplot(all_single,
aes(x="recall", y="precision",
color="type")) + geom_line() + coord_cartesian(
xlim=(0.0, 1), ylim=(0.0, 1)) + theme_bw() + labs(
x="Recall", y="Precision", color="Method",fill="Method") +geom_ribbon(aes(ymin="lower",ymax="upper",y="lower",fill="type"),data=for_graph,color=None,alpha=0.25)
fig
ggsave(fig,"random_untruncated.pdf")
curves = [
ensembled_prs, non_ensembled_prs, blast_and_cnn_ensemble_prs, blast_prs
]
dfs = []
def create_f1(single_curve):
return pd.DataFrame(
{
"type": single_curve.type.to_list()[0],
"group": single_curve.group.to_list()[0],
"f1": single_curve.f1.max()
},
index=[0])
for curve_set in curves:
for c2 in curve_set:
for group_name, df_group in c2.groupby("group"):
dfs.append(create_f1(df_group.query("precision>0")))
f1 = pd.concat(dfs)
def lower_func(x):
return x.quantile(0.025)
def upper_func(x):
return x.quantile(0.975)
f1_data = f1.groupby(['type',
'group']).agg(lower=("f1", lower_func),
upper=("f1", upper_func)).reset_index()
f1_data
"""
Explanation: Plot bootstrap curves
End of explanation
"""
def resample_with_replacement(df):
indices = np.random.randint(0, df.shape[0], df.shape[0])
return df.iloc[indices, :]
def bootstrap(df, n=100):
resampled_results = []
for x in tqdm.tqdm(range(n), position=0):
resampled = resample_with_replacement(df)
data = colab_evaluation.stats_by_group(resampled.groupby('count_cut'))
resampled_results.append(data)
return pd.concat(resampled_results)
train_counts = train_ground_truth.groupby(
"label", as_index=False).count().rename(columns={"train_seq_id": "count"})
both = colab_evaluation.assign_tp_fp_fn(predictions_df, ground_truth_df,
0.625205)
both = both.merge(train_counts, left_on="label", right_on="label", how="outer")
both = both.fillna(0)
both['count_cut'] = pd.cut(both['count'],
bins=(0, 5, 10, 20, 40, 100, 1000, 500000))
bootstrapped_data = bootstrap(both, n=5)
bootstrapped_data['count_cut_str'] = bootstrapped_data['count_cut'].astype(str)
bootstrapped_data['type'] = "CNN"
both = colab_evaluation.assign_tp_fp_fn(blast_df, ground_truth_df, 60.5)
both = both.merge(train_counts, left_on="label", right_on="label", how="outer")
both = both.fillna(0)
both['count_cut'] = pd.cut(both['count'],
bins=(0, 5, 10, 20, 40, 100, 1000, 500000))
bootstrapped_data_blast = bootstrap(both, n=100)
bootstrapped_data_blast['type'] = "BLAST"
both = colab_evaluation.assign_tp_fp_fn(ens_predictions_df, ground_truth_df,
0.25)
both = both.merge(train_counts, left_on="label", right_on="label", how="outer")
both = both.fillna(0)
both['count_cut'] = pd.cut(both['count'],
bins=(0, 5, 10, 20, 40, 100, 1000, 500000))
bootstrapped_data_ens = bootstrap(both, n=100)
bootstrapped_data_ens['type'] = "Ensembled CNNs"
both = colab_evaluation.assign_tp_fp_fn(blast_and_cnn_ensemble,
ground_truth_df, 0.17)
both = both.merge(train_counts, left_on="label", right_on="label", how="outer")
both = both.fillna(0)
both['count_cut'] = pd.cut(both['count'],
bins=(0, 5, 10, 20, 40, 100, 1000, 500000))
bootstrapped_data_combo = bootstrap(both, n=100)
bootstrapped_data_combo['type'] = "Ensembled CNNs with BLAST"
bootstrapped_merge = pd.concat([
bootstrapped_data_blast, bootstrapped_data, bootstrapped_data_ens,
bootstrapped_data_combo
],
ignore_index=True)
bootstrapped_merge['count_cut_str'] = bootstrapped_merge['count_cut'].astype(
str)
fig = px.box(bootstrapped_merge,
width=700,
color="type",
x="count_cut_str",
y="f1",
labels={
"count_cut_str": "Number of training examples per label",
"f1": "F1"
},
template="simple_white")
fig.show()
"""
Explanation: Examine effect of number of training examples on performance
End of explanation
"""
|
Xero-Hige/Notebooks
|
Algoritmos I/2018-1C/Parcialito_1_Resolucion_Propuesta.ipynb
|
gpl-3.0
|
def mi_otra_funcion(inicio,final):
for i in range(inicio,final+1,2):
for j in range(inicio,final+1,2):
print(i,end=" ")
print()
mi_otra_funcion(3,7)
"""
Explanation: Parcialito 1 (Proposed Solution)
Exercise 1
Statement
1) Given the following function:
``` python
def mi_funcion(p,q):
contador_1 = contador_2 = p
while True:
if contador_2 > q:
contador_1 += 2
contador_2 = p
print()
if contador_1 > q:
break
print(contador_1 , end=" ")
contador_2 += 2
```
A) Show the output of running mi_funcion with p=3 q=7
B) Propose better names for the parameters p and q
C) Rewrite the above function using only for loops
Solution
Item A
``` bash
mi_funcion(3,7)
3 3 3
5 5 5
7 7 7
```
Item B
p = inicio (start) ; q = final (end)
Item C
End of explanation
"""
def es_crucigrama_valido(crucigrama):
'''Recibe un crucigrama y devuelve si esta o no correctamente llenado.
Un crucigrama no está correctamente llenado si:
- Hay al menos una celda BLANCA vacía
- Hay al menos una celda NEGRA llena'''
for fila in crucigrama:
for celda in fila:
color,contenido = celda
if color == BLANCO and not contenido:
return False
if color == NEGRO and contenido:
return False
return True
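# Quick usage example (added; not part of the original exam solution). BLANCO and NEGRO
# are assumed to be the colour constants referenced by es_crucigrama_valido; their exact
# values do not matter as long as they are used consistently.
BLANCO, NEGRO = "blanco", "negro"
crucigrama_lleno = [[(BLANCO, "s"), (NEGRO, "")],
                    [(NEGRO, ""), (BLANCO, "i")]]
crucigrama_incompleto = [[(BLANCO, ""), (NEGRO, "x")]]
print(es_crucigrama_valido(crucigrama_lleno))       # True
print(es_crucigrama_valido(crucigrama_incompleto))  # False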
"""
Explanation: Exercise 2
Statement
2) A crossword is an n x m matrix of cells. Each cell is a two-element tuple of the form (<Color>, <content>). A cell can be either WHITE (BLANCO) or BLACK (NEGRO). The content is a string; if it is empty, the cell is empty. A crossword is not correctly filled in if:
* There is an empty WHITE cell
* There is a filled BLACK cell
Write a function that, given a crossword, returns whether it is correctly filled in.
Solution
End of explanation
"""
from string import ascii_lowercase
def encriptar(cadena):
encriptado = []
for c in cadena:
if not c in ascii_lowercase:
return ""
i = ascii_lowercase.index(c) + 13
i = i % len(ascii_lowercase)
encriptado.append(ascii_lowercase[i])
return "".join(encriptado)
print(encriptar("simonga"))
print(encriptar("fvzbatn"))
print(encriptar("Zambia"))
"""
Explanation: Exercise 3
Statement
3) Write a function that receives a string and returns its rot13 encryption. To encrypt a string with rot13, each character must be replaced by the character 13 positions further along in the alphabet.
If the string contains digits, special characters or uppercase letters, an empty string must be returned.
Hint: use the ascii_lowercase constant from the string module, which contains "abcd...xyz"
E.g.:
``` python
rot13("zambia") -> "mnzovn"
rot13("mnzovn") -> "zambia"
rot13("z4mbi4") -> ""
```
Solution
End of explanation
"""
|
GoogleCloudPlatform/training-data-analyst
|
courses/machine_learning/deepdive/05_review/labs/5_train.ipynb
|
apache-2.0
|
PROJECT = "cloud-training-demos" # Replace with your PROJECT
BUCKET = "cloud-training-bucket" # Replace with your BUCKET
REGION = "us-central1" # Choose an available region for Cloud AI Platform
TFVERSION = "1.14" # TF version for CAIP to use
import os
os.environ["BUCKET"] = BUCKET
os.environ["PROJECT"] = PROJECT
os.environ["REGION"] = REGION
os.environ["TFVERSION"] = TFVERSION
%%bash
gcloud config set project $PROJECT
gcloud config set compute/region $REGION
"""
Explanation: Training on Cloud AI Platform
Learning Objectives
- Use CAIP to run a distributed training job
Introduction
After having tested our training pipeline both locally and in the cloud on a subset of the data, we can submit another (much larger) training job to the cloud. It is also a good idea to run a hyperparameter tuning job to make sure we have optimized the hyperparameters of our model.
This notebook illustrates how to do distributed training and hyperparameter tuning on Cloud AI Platform.
To start, we'll set up our environment variables as before.
End of explanation
"""
%%bash
if ! gsutil ls -r gs://$BUCKET | grep -q gs://$BUCKET/babyweight/preproc; then
gsutil mb -l ${REGION} gs://${BUCKET}
# copy canonical set of preprocessed files if you didn't do previous notebook
gsutil -m cp -R gs://cloud-training-demos/babyweight gs://${BUCKET}
fi
%%bash
gsutil ls gs://${BUCKET}/babyweight/preproc/*-00000*
"""
Explanation: Next, we'll look for the preprocessed data for the babyweight model and copy it over if it's not there.
End of explanation
"""
%%bash
touch babyweight/trainer/__init__.py
"""
Explanation: In the previous labs we developed our TensorFlow model and got it working on a subset of the data. Now we can package the TensorFlow code up as a Python module and train it on Cloud AI Platform.
Train on Cloud AI Platform
Training on Cloud AI Platform requires two things:
- Configuring our code as a Python package
- Using gcloud to submit the training code to Cloud AI Platform
Move code into a Python package
A Python package is simply a collection of one or more .py files along with an __init__.py file to identify the containing directory as a package. The __init__.py sometimes contains initialization code but for our purposes an empty file suffices.
The bash command touch creates an empty file in the specified location, the directory babyweight should already exist.
End of explanation
"""
%%writefile babyweight/trainer/task.py
import argparse
import json
import os
import tensorflow as tf
from . import model
if __name__ == "__main__":
parser = argparse.ArgumentParser()
parser.add_argument(
"--bucket",
help="GCS path to data. We assume that data is in \
gs://BUCKET/babyweight/preproc/",
required=True
)
parser.add_argument(
"--output_dir",
help="GCS location to write checkpoints and export models",
required=True
)
parser.add_argument(
"--batch_size",
help="Number of examples to compute gradient over.",
type=int,
default=512
)
parser.add_argument(
"--job-dir",
help="this model ignores this field, but it is required by gcloud",
default="junk"
)
# TODO: Your code goes here
# TODO: Your code goes here
# TODO: Your code goes here
# TODO: Your code goes here
# TODO: Your code goes here
# Parse arguments
args = parser.parse_args()
arguments = args.__dict__
# Pop unnecessary args needed for gcloud
arguments.pop("job-dir", None)
# Assign the arguments to the model variables
output_dir = arguments.pop("output_dir")
model.BUCKET = arguments.pop("bucket")
model.BATCH_SIZE = arguments.pop("batch_size")
model.TRAIN_STEPS = (
arguments.pop("train_examples") * 1000) / model.BATCH_SIZE
model.EVAL_STEPS = arguments.pop("eval_steps")
print ("Will train for {} steps using batch_size={}".format(
model.TRAIN_STEPS, model.BATCH_SIZE))
model.PATTERN = arguments.pop("pattern")
model.NEMBEDS = arguments.pop("nembeds")
model.NNSIZE = arguments.pop("nnsize")
print ("Will use DNN size of {}".format(model.NNSIZE))
# Append trial_id to path if we are doing hptuning
# This code can be removed if you are not using hyperparameter tuning
output_dir = os.path.join(
output_dir,
json.loads(
os.environ.get("TF_CONFIG", "{}")
).get("task", {}).get("trial", "")
)
# Run the training job
model.train_and_evaluate(output_dir)
"""
Explanation: We then use the %%writefile magic to write the contents of the cell below to a file called task.py in the babyweight/trainer folder.
Exercise 1
The cell below writes the file babyweight/trainer/task.py which sets up our training job. Here is where we determine which parameters of our model to pass as flags during training using the parser module. Look at how batch_size is passed to the model in the code below. Use this as an example to parse arguments for the following variables:
- nnsize which represents the hidden layer sizes to use for DNN feature columns
- nembeds which represents the embedding size of a cross of n key real-valued parameters
- train_examples which represents the number of examples (in thousands) to run the training job
- eval_steps which represents the positive number of steps for which to evaluate model
- pattern which specifies a pattern that has to be in input files. For example '00001-of' would process only one shard. For this variable, set 'of' to be the default.
Be sure to include a default value for the parsed arguments above and specify the type if necessary.
End of explanation
"""
%%writefile babyweight/trainer/model.py
import shutil
import numpy as np
import tensorflow as tf
tf.logging.set_verbosity(tf.logging.INFO)
BUCKET = None # set from task.py
PATTERN = "of" # gets all files
# Determine CSV and label columns
# TODO: Your code goes here
# Set default values for each CSV column
# TODO: Your code goes here
# Define some hyperparameters
TRAIN_STEPS = 10000
EVAL_STEPS = None
BATCH_SIZE = 512
NEMBEDS = 3
NNSIZE = [64, 16, 4]
# Create an input function reading a file using the Dataset API
# Then provide the results to the Estimator API
def read_dataset(prefix, mode, batch_size):
def _input_fn():
def decode_csv(value_column):
# TODO: Your code goes here
# Use prefix to create file path
file_path = "gs://{}/babyweight/preproc/{}*{}*".format(
BUCKET, prefix, PATTERN)
# Create list of files that match pattern
file_list = tf.gfile.Glob(filename=file_path)
# Create dataset from file list
# TODO: Your code goes here
# In training mode, shuffle the dataset and repeat indefinitely
# TODO: Your code goes here
dataset = # TODO: Your code goes here
# This will now return batches of features, label
return dataset
return _input_fn
# Define feature columns
def get_wide_deep():
# TODO: Your code goes here
return wide, deep
# Create serving input function to be able to serve predictions later using provided inputs
def serving_input_fn():
# TODO: Your code goes here
return tf.estimator.export.ServingInputReceiver(
features=features, receiver_tensors=feature_placeholders)
# create metric for hyperparameter tuning
def my_rmse(labels, predictions):
pred_values = predictions["predictions"]
return {"rmse": tf.metrics.root_mean_squared_error(
labels=labels, predictions=pred_values)}
# Create estimator to train and evaluate
def train_and_evaluate(output_dir):
# TODO: Your code goes here
"""
Explanation: In the same way we can write to the file model.py the model that we developed in the previous notebooks.
Exercise 2
Complete the TODOs in the code cell below to create out model.py. We'll use the code we wrote for the Wide & Deep model. Look back at your 3_tensorflow_wide_deep notebook and copy/paste the necessary code from that notebook into its place in the cell below.
End of explanation
"""
%%bash
echo "bucket=${BUCKET}"
rm -rf babyweight_trained
export PYTHONPATH=${PYTHONPATH}:${PWD}/babyweight
python -m trainer.task \
--bucket= # TODO: Your code goes here
--output_dir= # TODO: Your code goes here
--job-dir=./tmp \
--pattern= # TODO: Your code goes here
--train_examples= # TODO: Your code goes here
--eval_steps= # TODO: Your code goes here
"""
Explanation: Train locally
After moving the code to a package, make sure it works as a standalone. Note, we incorporated the --pattern and --train_examples flags so that we don't try to train on the entire dataset while we are developing our pipeline. Once we are sure that everything is working on a subset, we can change the pattern so that we can train on all the data. Even for this subset, this takes about 3 minutes in which you won't see any output ...
Exercise 3
Fill in the missing code in the TODOs below so that we can run a very small training job over a single file (i.e. use the pattern equal to "00000-of-") with 1 train step and 1 eval step
End of explanation
"""
%%writefile inputs.json
{"is_male": "True", "mother_age": 26.0, "plurality": "Single(1)", "gestation_weeks": 39}
{"is_male": "False", "mother_age": 26.0, "plurality": "Single(1)", "gestation_weeks": 39}
"""
Explanation: Making predictions
The JSON below represents an input into your prediction model. Write the inputs.json file with the next cell, then run the prediction locally to assess whether it produces predictions correctly.
End of explanation
"""
%%bash
MODEL_LOCATION=$(ls -d $(pwd)/babyweight_trained/export/exporter/* | tail -1)
echo $MODEL_LOCATION
gcloud ai-platform local predict # TODO: Your code goes here
"""
Explanation: Exercise 4
Finish the code in cell below to run a local prediction job on the inputs.json file we just created. You will need to provide two additional flags
- one for model-dir specifying the location of the model binaries
- one for json-instances specifying the location of the json file on which you want to predict
End of explanation
"""
%%bash
OUTDIR=gs://${BUCKET}/babyweight/trained_model
JOBNAME=babyweight_$(date -u +%y%m%d_%H%M%S)
echo $OUTDIR $REGION $JOBNAME
gsutil -m rm -rf $OUTDIR
gcloud ai-platform jobs submit training $JOBNAME \
--region= # TODO: Your code goes here
--module-name= # TODO: Your code goes here
--package-path= # TODO: Your code goes here
--job-dir= # TODO: Your code goes here
--staging-bucket=gs://$BUCKET \
--scale-tier= #TODO: Your code goes here
--runtime-version= #TODO: Your code goes here
-- \
--bucket=${BUCKET} \
--output_dir=${OUTDIR} \
--train_examples=200000
"""
Explanation: Training on the Cloud with CAIP
Once the code works in standalone mode, you can run it on Cloud AI Platform. Because this is on the entire dataset, it will take a while. The training run took about <b> an hour </b> for me. You can monitor the job from the GCP console in the Cloud AI Platform section.
Exercise 5
Look at the TODOs in the code cell below and fill in the missing information. Some of the required flags are already there for you. You will need to provide the rest.
End of explanation
"""
%%writefile hyperparam.yaml
trainingInput:
scaleTier: STANDARD_1
hyperparameters:
hyperparameterMetricTag: rmse
goal: MINIMIZE
maxTrials: 20
maxParallelTrials: 5
enableTrialEarlyStopping: True
params:
- parameterName: batch_size
type: # TODO: Your code goes here
minValue: # TODO: Your code goes here
maxValue: # TODO: Your code goes here
scaleType: # TODO: Your code goes here
- parameterName: nembeds
type: # TODO: Your code goes here
minValue: # TODO: Your code goes here
maxValue: # TODO: Your code goes here
scaleType: # TODO: Your code goes here
- parameterName: nnsize
type: # TODO: Your code goes here
minValue: # TODO: Your code goes here
maxValue: # TODO: Your code goes here
scaleType: # TODO: Your code goes here
%%bash
OUTDIR=gs://${BUCKET}/babyweight/hyperparam
JOBNAME=babyweight_$(date -u +%y%m%d_%H%M%S)
echo $OUTDIR $REGION $JOBNAME
gsutil -m rm -rf $OUTDIR
gcloud ai-platform jobs submit training $JOBNAME \
--region=$REGION \
--module-name=trainer.task \
--package-path=$(pwd)/babyweight/trainer \
--job-dir=$OUTDIR \
--staging-bucket=gs://$BUCKET \
--scale-tier=STANDARD_1 \
--config=hyperparam.yaml \
--runtime-version=$TFVERSION \
-- \
--bucket=${BUCKET} \
--output_dir=${OUTDIR} \
--eval_steps=10 \
--train_examples=20000
"""
Explanation: When I ran it, I used train_examples=2000000. When training finished, I filtered in the Stackdriver log on the word "dict" and saw that the last line was:
<pre>
Saving dict for global step 5714290: average_loss = 1.06473, global_step = 5714290, loss = 34882.4, rmse = 1.03186
</pre>
The final RMSE was 1.03 pounds.
<h2> Optional: Hyperparameter tuning </h2>
<p>
All of these are command-line parameters to my program. To do hyperparameter tuning, create hyperparam.yaml and pass it as --config.
This step will take <b>1 hour</b> -- you can increase maxParallelTrials or reduce maxTrials to get it done faster. Since maxParallelTrials is the number of initial seeds to start searching from, you don't want it to be too large; otherwise, all you have is a random search.
#### **Exercise 6**
We need to create a .yaml file to pass with our hyperparameter tuning job. Fill in the TODOs below for each of the parameters we want to include in our hyperparameter search.
End of explanation
"""
%%bash
OUTDIR=gs://${BUCKET}/babyweight/trained_model_tuned
JOBNAME=babyweight_$(date -u +%y%m%d_%H%M%S)
echo $OUTDIR $REGION $JOBNAME
gsutil -m rm -rf $OUTDIR
gcloud ai-platform jobs submit training $JOBNAME \
--region=$REGION \
--module-name=trainer.task \
--package-path=$(pwd)/babyweight/trainer \
--job-dir=$OUTDIR \
--staging-bucket=gs://$BUCKET \
--scale-tier=STANDARD_1 \
--runtime-version=$TFVERSION \
-- \
--bucket=${BUCKET} \
--output_dir=${OUTDIR} \
--train_examples=20000 --batch_size=35 --nembeds=16 --nnsize=281
"""
Explanation: <h2> Repeat training </h2>
<p>
This time with tuned parameters (note last line)
End of explanation
"""
|
ES-DOC/esdoc-jupyterhub
|
notebooks/nasa-giss/cmip6/models/sandbox-2/ocean.ipynb
|
gpl-3.0
|
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'nasa-giss', 'sandbox-2', 'ocean')
"""
Explanation: ES-DOC CMIP6 Model Properties - Ocean
MIP Era: CMIP6
Institute: NASA-GISS
Source ID: SANDBOX-2
Topic: Ocean
Sub-Topics: Timestepping Framework, Advection, Lateral Physics, Vertical Physics, Uplow Boundaries, Boundary Forcing.
Properties: 133 (101 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:21
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
"""
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Authors
Set document authors
End of explanation
"""
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Contributors
Specify document contributors
End of explanation
"""
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
"""
Explanation: Document Publication
Specify document publication status
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Seawater Properties
3. Key Properties --> Bathymetry
4. Key Properties --> Nonoceanic Waters
5. Key Properties --> Software Properties
6. Key Properties --> Resolution
7. Key Properties --> Tuning Applied
8. Key Properties --> Conservation
9. Grid
10. Grid --> Discretisation --> Vertical
11. Grid --> Discretisation --> Horizontal
12. Timestepping Framework
13. Timestepping Framework --> Tracers
14. Timestepping Framework --> Baroclinic Dynamics
15. Timestepping Framework --> Barotropic
16. Timestepping Framework --> Vertical Physics
17. Advection
18. Advection --> Momentum
19. Advection --> Lateral Tracers
20. Advection --> Vertical Tracers
21. Lateral Physics
22. Lateral Physics --> Momentum --> Operator
23. Lateral Physics --> Momentum --> Eddy Viscosity Coeff
24. Lateral Physics --> Tracers
25. Lateral Physics --> Tracers --> Operator
26. Lateral Physics --> Tracers --> Eddy Diffusity Coeff
27. Lateral Physics --> Tracers --> Eddy Induced Velocity
28. Vertical Physics
29. Vertical Physics --> Boundary Layer Mixing --> Details
30. Vertical Physics --> Boundary Layer Mixing --> Tracers
31. Vertical Physics --> Boundary Layer Mixing --> Momentum
32. Vertical Physics --> Interior Mixing --> Details
33. Vertical Physics --> Interior Mixing --> Tracers
34. Vertical Physics --> Interior Mixing --> Momentum
35. Uplow Boundaries --> Free Surface
36. Uplow Boundaries --> Bottom Boundary Layer
37. Boundary Forcing
38. Boundary Forcing --> Momentum --> Bottom Friction
39. Boundary Forcing --> Momentum --> Lateral Friction
40. Boundary Forcing --> Tracers --> Sunlight Penetration
41. Boundary Forcing --> Tracers --> Fresh Water Forcing
1. Key Properties
Ocean key properties
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of ocean model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of ocean model code (NEMO 3.6, MOM 5.0,...)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_family')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OGCM"
# "slab ocean"
# "mixed layer ocean"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.3. Model Family
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of ocean model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.basic_approximations')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Primitive equations"
# "Non-hydrostatic"
# "Boussinesq"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.4. Basic Approximations
Is Required: TRUE Type: ENUM Cardinality: 1.N
Basic approximations made in the ocean.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Potential temperature"
# "Conservative temperature"
# "Salinity"
# "U-velocity"
# "V-velocity"
# "W-velocity"
# "SSH"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.5. Prognostic Variables
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of prognostic variables in the ocean component.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Linear"
# "Wright, 1997"
# "Mc Dougall et al."
# "Jackett et al. 2006"
# "TEOS 2010"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 2. Key Properties --> Seawater Properties
Physical properties of seawater in ocean
2.1. Eos Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of EOS for sea water
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_temp')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Potential temperature"
# "Conservative temperature"
# TODO - please enter value(s)
"""
Explanation: 2.2. Eos Functional Temp
Is Required: TRUE Type: ENUM Cardinality: 1.1
Temperature used in EOS for sea water
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_salt')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Practical salinity Sp"
# "Absolute salinity Sa"
# TODO - please enter value(s)
"""
Explanation: 2.3. Eos Functional Salt
Is Required: TRUE Type: ENUM Cardinality: 1.1
Salinity used in EOS for sea water
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pressure (dbars)"
# "Depth (meters)"
# TODO - please enter value(s)
"""
Explanation: 2.4. Eos Functional Depth
Is Required: TRUE Type: ENUM Cardinality: 1.1
Depth or pressure used in EOS for sea water ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_freezing_point')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TEOS 2010"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 2.5. Ocean Freezing Point
Is Required: TRUE Type: ENUM Cardinality: 1.1
Equation used to compute the freezing point (in deg C) of seawater, as a function of salinity and pressure
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_specific_heat')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 2.6. Ocean Specific Heat
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Specific heat in ocean (cpocean) in J/(kg K)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_reference_density')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 2.7. Ocean Reference Density
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Boussinesq reference density (rhozero) in kg / m3
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.reference_dates')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Present day"
# "21000 years BP"
# "6000 years BP"
# "LGM"
# "Pliocene"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 3. Key Properties --> Bathymetry
Properties of bathymetry in ocean
3.1. Reference Dates
Is Required: TRUE Type: ENUM Cardinality: 1.1
Reference date of bathymetry
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 3.2. Type
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the bathymetry fixed in time in the ocean ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.ocean_smoothing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 3.3. Ocean Smoothing
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe any smoothing or hand editing of bathymetry in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.source')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 3.4. Source
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe source of bathymetry in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.nonoceanic_waters.isolated_seas')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4. Key Properties --> Nonoceanic Waters
Non oceanic waters treatement in ocean
4.1. Isolated Seas
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how isolated seas is performed
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.nonoceanic_waters.river_mouth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4.2. River Mouth
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how river mouth mixing or estuaries specific treatment is performed
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5. Key Properties --> Software Properties
Software properties of ocean code
5.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6. Key Properties --> Resolution
Resolution in the ocean grid
6.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.2. Canonical Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Expression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.range_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.3. Range Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Range of horizontal resolution with spatial details, eg. 50(Equator)-100km or 0.1-0.5 degrees etc.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 6.4. Number Of Horizontal Gridpoints
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Total number of horizontal (XY) points (or degrees of freedom) on computational grid.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 6.5. Number Of Vertical Levels
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of vertical levels resolved on computational grid.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 6.6. Is Adaptive Grid
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Default is False. Set true if grid resolution changes during execution.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.thickness_level_1')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 6.7. Thickness Level 1
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Thickness of first surface ocean level (in meters)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7. Key Properties --> Tuning Applied
Tuning methodology for ocean component
7.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process oriented metrics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7.2. Global Mean Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List set of metrics of the global mean state used in tuning model/component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7.3. Regional Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List of regional metrics of mean state (e.g THC, AABW, regional means etc) used in tuning model/component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7.4. Trend Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List observed trend metrics used in tuning model/component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8. Key Properties --> Conservation
Conservation in the ocean component
8.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Brief description of conservation methodology
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.scheme')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Energy"
# "Enstrophy"
# "Salt"
# "Volume of ocean"
# "Momentum"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 8.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.N
Properties conserved in the ocean by the numerical schemes
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.consistency_properties')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.3. Consistency Properties
Is Required: FALSE Type: STRING Cardinality: 0.1
Any additional consistency properties (energy conversion, pressure gradient discretisation, ...)?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.corrected_conserved_prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.4. Corrected Conserved Prognostic Variables
Is Required: FALSE Type: STRING Cardinality: 0.1
Set of variables which are conserved by more than the numerical scheme alone.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.was_flux_correction_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 8.5. Was Flux Correction Used
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Does conservation involve flux correction ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9. Grid
Ocean grid
9.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of grid in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.vertical.coordinates')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Z-coordinate"
# "Z*-coordinate"
# "S-coordinate"
# "Isopycnic - sigma 0"
# "Isopycnic - sigma 2"
# "Isopycnic - sigma 4"
# "Isopycnic - other"
# "Hybrid / Z+S"
# "Hybrid / Z+isopycnic"
# "Hybrid / other"
# "Pressure referenced (P)"
# "P*"
# "Z**"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 10. Grid --> Discretisation --> Vertical
Properties of vertical discretisation in ocean
10.1. Coordinates
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of vertical coordinates in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.vertical.partial_steps')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 10.2. Partial Steps
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Using partial steps with Z or Z* vertical coordinate in ocean ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Lat-lon"
# "Rotated north pole"
# "Two north poles (ORCA-style)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 11. Grid --> Discretisation --> Horizontal
Type of horizontal discretisation scheme in ocean
11.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal grid type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.staggering')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Arakawa B-grid"
# "Arakawa C-grid"
# "Arakawa E-grid"
# "N/a"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 11.2. Staggering
Is Required: FALSE Type: ENUM Cardinality: 0.1
Horizontal grid staggering type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Finite difference"
# "Finite volumes"
# "Finite elements"
# "Unstructured grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 11.3. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal discretisation scheme in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 12. Timestepping Framework
Ocean Timestepping Framework
12.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of time stepping in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.diurnal_cycle')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Via coupling"
# "Specific treatment"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 12.2. Diurnal Cycle
Is Required: TRUE Type: ENUM Cardinality: 1.1
Diurnal cycle type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.tracers.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Leap-frog + Asselin filter"
# "Leap-frog + Periodic Euler"
# "Predictor-corrector"
# "Runge-Kutta 2"
# "AM3-LF"
# "Forward-backward"
# "Forward operator"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13. Timestepping Framework --> Tracers
Properties of tracers time stepping in ocean
13.1. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Tracers time stepping scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.tracers.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
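# Illustrative sketch only: INTEGER properties take an unquoted number, e.g. a
# hypothetical 3600 second tracer time step (uncomment and replace with the
# model's actual value).
# DOC.set_value(3600)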
"""
Explanation: 13.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Tracers time step (in seconds)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Preconditioned conjugate gradient"
# "Sub cyling"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 14. Timestepping Framework --> Baroclinic Dynamics
Baroclinic dynamics in ocean
14.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Baroclinic dynamics type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Leap-frog + Asselin filter"
# "Leap-frog + Periodic Euler"
# "Predictor-corrector"
# "Runge-Kutta 2"
# "AM3-LF"
# "Forward-backward"
# "Forward operator"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 14.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Baroclinic dynamics scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 14.3. Time Step
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Baroclinic time step (in seconds)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.barotropic.splitting')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "split explicit"
# "implicit"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15. Timestepping Framework --> Barotropic
Barotropic time stepping in ocean
15.1. Splitting
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time splitting method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.barotropic.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 15.2. Time Step
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Barotropic time step (in seconds)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.vertical_physics.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 16. Timestepping Framework --> Vertical Physics
Vertical physics time stepping in ocean
16.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Details of vertical time stepping in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17. Advection
Ocean advection
17.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of advection in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.momentum.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Flux form"
# "Vector form"
# TODO - please enter value(s)
"""
Explanation: 18. Advection --> Momentum
Properties of lateral momentum advection scheme in ocean
18.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of lateral momentum advection scheme in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.momentum.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 18.2. Scheme Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of ocean momentum advection scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.momentum.ALE')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 18.3. ALE
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Using ALE for vertical advection ? (if vertical coordinates are sigma)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 19. Advection --> Lateral Tracers
Properties of lateral tracer advection scheme in ocean
19.1. Order
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Order of lateral tracer advection scheme in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.flux_limiter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 19.2. Flux Limiter
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Monotonic flux limiter for lateral tracer advection scheme in ocean ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.effective_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 19.3. Effective Order
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Effective order of limited lateral tracer advection scheme in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 19.4. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Descriptive text for lateral tracer advection scheme in ocean (e.g. MUSCL, PPM-H5, PRATHER,...)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.passive_tracers')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ideal age"
# "CFC 11"
# "CFC 12"
# "SF6"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 19.5. Passive Tracers
Is Required: FALSE Type: ENUM Cardinality: 0.N
Passive tracers advected
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.passive_tracers_advection')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 19.6. Passive Tracers Advection
Is Required: FALSE Type: STRING Cardinality: 0.1
Is advection of passive tracers different from that of active tracers ? If so, describe.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.vertical_tracers.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 20. Advection --> Vertical Tracers
Properties of vertical tracer advection scheme in ocean
20.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Descriptive text for vertical tracer advection scheme in ocean (e.g. MUSCL, PPM-H5, PRATHER,...)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.vertical_tracers.flux_limiter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 20.2. Flux Limiter
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Monotonic flux limiter for vertical tracer advection scheme in ocean ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 21. Lateral Physics
Ocean lateral physics
21.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of lateral physics in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Eddy active"
# "Eddy admitting"
# TODO - please enter value(s)
"""
Explanation: 21.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of transient eddy representation in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Horizontal"
# "Isopycnal"
# "Isoneutral"
# "Geopotential"
# "Iso-level"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 22. Lateral Physics --> Momentum --> Operator
Properties of lateral physics operator for momentum in ocean
22.1. Direction
Is Required: TRUE Type: ENUM Cardinality: 1.1
Direction of lateral physics momentum scheme in the ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Harmonic"
# "Bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 22.2. Order
Is Required: TRUE Type: ENUM Cardinality: 1.1
Order of lateral physics momentum scheme in the ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Second order"
# "Higher order"
# "Flux limiter"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 22.3. Discretisation
Is Required: TRUE Type: ENUM Cardinality: 1.1
Discretisation of lateral physics momentum scheme in the ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Space varying"
# "Time + space varying (Smagorinsky)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 23. Lateral Physics --> Momentum --> Eddy Viscosity Coeff
Properties of eddy viscosity coeff in lateral physics momentum scheme in the ocean
23.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Lateral physics momentum eddy viscosity coeff type in the ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.constant_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 23.2. Constant Coefficient
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant, value of eddy viscosity coeff in lateral physics momentum scheme (in m2/s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.variable_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 23.3. Variable Coefficient
Is Required: FALSE Type: STRING Cardinality: 0.1
If space-varying, describe variations of eddy viscosity coeff in lateral physics momentum scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.coeff_background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 23.4. Coeff Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe background eddy viscosity coeff in lateral physics momentum scheme (give values in m2/s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.coeff_backscatter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 23.5. Coeff Backscatter
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there backscatter in eddy viscosity coeff in lateral physics momentum scheme ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.mesoscale_closure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 24. Lateral Physics --> Tracers
Properties of lateral physics for tracers in ocean
24.1. Mesoscale Closure
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there a mesoscale closure in the lateral physics tracers scheme ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.submesoscale_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 24.2. Submesoscale Mixing
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there a submesoscale mixing parameterisation (i.e Fox-Kemper) in the lateral physics tracers scheme ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Horizontal"
# "Isopycnal"
# "Isoneutral"
# "Geopotential"
# "Iso-level"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 25. Lateral Physics --> Tracers --> Operator
Properties of lateral physics operator for tracers in ocean
25.1. Direction
Is Required: TRUE Type: ENUM Cardinality: 1.1
Direction of lateral physics tracers scheme in the ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Harmonic"
# "Bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 25.2. Order
Is Required: TRUE Type: ENUM Cardinality: 1.1
Order of lateral physics tracers scheme in the ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Second order"
# "Higher order"
# "Flux limiter"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 25.3. Discretisation
Is Required: TRUE Type: ENUM Cardinality: 1.1
Discretisation of lateral physics tracers scheme in the ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Space varying"
# "Time + space varying (Smagorinsky)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 26. Lateral Physics --> Tracers --> Eddy Diffusity Coeff
Properties of eddy diffusity coeff in lateral physics tracers scheme in the ocean
26.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Lateral physics tracers eddy diffusity coeff type in the ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.constant_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 26.2. Constant Coefficient
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant, value of eddy diffusity coeff in lateral physics tracers scheme (in m2/s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.variable_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 26.3. Variable Coefficient
Is Required: FALSE Type: STRING Cardinality: 0.1
If space-varying, describe variations of eddy diffusity coeff in lateral physics tracers scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.coeff_background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 26.4. Coeff Background
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Describe background eddy diffusity coeff in lateral physics tracers scheme (give values in m2/s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.coeff_backscatter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 26.5. Coeff Backscatter
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there backscatter in eddy diffusity coeff in lateral physics tracers scheme ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "GM"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 27. Lateral Physics --> Tracers --> Eddy Induced Velocity
Properties of eddy induced velocity (EIV) in lateral physics tracers scheme in the ocean
27.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of EIV in lateral physics tracers in the ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.constant_val')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 27.2. Constant Val
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If EIV scheme for tracers is constant, specify coefficient value (M2/s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.flux_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 27.3. Flux Type
Is Required: TRUE Type: STRING Cardinality: 1.1
Type of EIV flux (advective or skew)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.added_diffusivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 27.4. Added Diffusivity
Is Required: TRUE Type: STRING Cardinality: 1.1
Type of EIV added diffusivity (constant, flow dependent or none)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 28. Vertical Physics
Ocean Vertical Physics
28.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of vertical physics in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.details.langmuir_cells_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 29. Vertical Physics --> Boundary Layer Mixing --> Details
Properties of vertical physics in ocean
29.1. Langmuir Cells Mixing
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there Langmuir cells mixing in upper ocean ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure - TKE"
# "Turbulent closure - KPP"
# "Turbulent closure - Mellor-Yamada"
# "Turbulent closure - Bulk Mixed Layer"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 30. Vertical Physics --> Boundary Layer Mixing --> Tracers
*Properties of boundary layer (BL) mixing on tracers in the ocean*
30.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of boundary layer mixing for tracers in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 30.2. Closure Order
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If turbulent BL mixing of tracers, specify the order of closure (0, 1, 2.5, 3)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 30.3. Constant
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant BL mixing of tracers, specify the coefficient (m2/s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 30.4. Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Background BL mixing of tracers coefficient (schema and value in m2/s - may be none)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure - TKE"
# "Turbulent closure - KPP"
# "Turbulent closure - Mellor-Yamada"
# "Turbulent closure - Bulk Mixed Layer"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 31. Vertical Physics --> Boundary Layer Mixing --> Momentum
*Properties of boundary layer (BL) mixing on momentum in the ocean*
31.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of boundary layer mixing for momentum in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 31.2. Closure Order
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If turbulent BL mixing of momentum, specify the order of closure (0, 1, 2.5, 3)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 31.3. Constant
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant BL mixing of momentum, specify the coefficient (m2/s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 31.4. Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Background BL mixing of momentum coefficient (schema and value in m2/s - may be none)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.convection_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Non-penetrative convective adjustment"
# "Enhanced vertical diffusion"
# "Included in turbulence closure"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 32. Vertical Physics --> Interior Mixing --> Details
*Properties of interior mixing in the ocean*
32.1. Convection Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of vertical convection in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.tide_induced_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 32.2. Tide Induced Mixing
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how tide induced mixing is modelled (barotropic, baroclinic, none)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.double_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 32.3. Double Diffusion
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there double diffusion
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.shear_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 32.4. Shear Mixing
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there interior shear mixing
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure / TKE"
# "Turbulent closure - Mellor-Yamada"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 33. Vertical Physics --> Interior Mixing --> Tracers
*Properties of interior mixing on tracers in the ocean*
33.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of interior mixing for tracers in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 33.2. Constant
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant interior mixing of tracers, specify the coefficient (m2/s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.profile')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 33.3. Profile
Is Required: TRUE Type: STRING Cardinality: 1.1
Is the background interior mixing using a vertical profile for tracers (i.e is NOT constant) ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 33.4. Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Background interior mixing of tracers coefficient (schema and value in m2/s - may be none)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure / TKE"
# "Turbulent closure - Mellor-Yamada"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 34. Vertical Physics --> Interior Mixing --> Momentum
*Properties of interior mixing on momentum in the ocean*
34.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of interior mixing for momentum in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 34.2. Constant
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant interior mixing of momentum, specify the coefficient (m2/s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.profile')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 34.3. Profile
Is Required: TRUE Type: STRING Cardinality: 1.1
Is the background interior mixing using a vertical profile for momentum (i.e is NOT constant) ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 34.4. Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Background interior mixing of momentum coefficient (schema and value in m2/s - may be none)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 35. Uplow Boundaries --> Free Surface
Properties of free surface in ocean
35.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of free surface in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Linear implicit"
# "Linear filtered"
# "Linear semi-explicit"
# "Non-linear implicit"
# "Non-linear filtered"
# "Non-linear semi-explicit"
# "Fully explicit"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 35.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Free surface scheme in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.embeded_seaice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 35.3. Embeded Seaice
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the sea-ice embedded in the ocean model (instead of levitating) ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 36. Uplow Boundaries --> Bottom Boundary Layer
Properties of bottom boundary layer in ocean
36.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of bottom boundary layer in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.type_of_bbl')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Diffusive"
# "Acvective"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 36.2. Type Of Bbl
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of bottom boundary layer in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.lateral_mixing_coef')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 36.3. Lateral Mixing Coef
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If bottom BL is diffusive, specify value of lateral mixing coefficient (in m2/s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.sill_overflow')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 36.4. Sill Overflow
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe any specific treatment of sill overflows
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 37. Boundary Forcing
Ocean boundary forcing
37.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of boundary forcing in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.surface_pressure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 37.2. Surface Pressure
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how surface pressure is transmitted to ocean (via sea-ice, nothing specific,...)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.momentum_flux_correction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 37.3. Momentum Flux Correction
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe any type of ocean surface momentum flux correction and, if applicable, how it is applied and where.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers_flux_correction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 37.4. Tracers Flux Correction
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe any type of ocean surface tracers flux correction and, if applicable, how it is applied and where.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.wave_effects')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 37.5. Wave Effects
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how wave effects are modelled at ocean surface.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.river_runoff_budget')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 37.6. River Runoff Budget
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how river runoff from land surface is routed to ocean and any global adjustment done.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.geothermal_heating')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 37.7. Geothermal Heating
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how geothermal heating is present at ocean bottom.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.momentum.bottom_friction.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Linear"
# "Non-linear"
# "Non-linear (drag function of speed of tides)"
# "Constant drag coefficient"
# "None"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 38. Boundary Forcing --> Momentum --> Bottom Friction
Properties of momentum bottom friction in ocean
38.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of momentum bottom friction in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.momentum.lateral_friction.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Free-slip"
# "No-slip"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 39. Boundary Forcing --> Momentum --> Lateral Friction
Properties of momentum lateral friction in ocean
39.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of momentum lateral friction in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "1 extinction depth"
# "2 extinction depth"
# "3 extinction depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 40. Boundary Forcing --> Tracers --> Sunlight Penetration
Properties of sunlight penetration scheme in ocean
40.1. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of sunlight penetration scheme in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.ocean_colour')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 40.2. Ocean Colour
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the ocean sunlight penetration scheme ocean colour dependent ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.extinction_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 40.3. Extinction Depth
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe and list extinctions depths for sunlight penetration scheme (if applicable).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.from_atmopshere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Freshwater flux"
# "Virtual salt flux"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 41. Boundary Forcing --> Tracers --> Fresh Water Forcing
Properties of surface fresh water forcing in ocean
41.1. From Atmopshere
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of surface fresh water forcing from atmos in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.from_sea_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Freshwater flux"
# "Virtual salt flux"
# "Real salt flux"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 41.2. From Sea Ice
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of surface fresh water forcing from sea-ice in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.forced_mode_restoring')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 41.3. Forced Mode Restoring
Is Required: TRUE Type: STRING Cardinality: 1.1
Type of surface salinity restoring in forced mode (OMIP)
End of explanation
"""
|
bioe-ml-w18/bioe-ml-winter2018
|
homeworks/SVMs-example.ipynb
|
mit
|
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats
# use seaborn plotting defaults
import seaborn as sns; sns.set()
"""
Explanation: This notebook contains an excerpt from the Python Data Science Handbook by Jake VanderPlas; the content is available on GitHub. The text is released under the CC-BY-NC-ND license, and code is released under the MIT license.
In-Depth: Support Vector Machines
End of explanation
"""
from sklearn.datasets import make_blobs  # samples_generator submodule was removed in newer scikit-learn releases
X, y = make_blobs(n_samples=50, centers=2,
random_state=0, cluster_std=0.60)
plt.scatter(X[:, 0], X[:, 1], c=y, s=50, cmap='autumn');
xfit = np.linspace(-1, 3.5)
plt.scatter(X[:, 0], X[:, 1], c=y, s=50, cmap='autumn')
plt.plot([0.6], [2.1], 'x', color='red', markeredgewidth=2, markersize=10)
for m, b in [(1, 0.65), (0.5, 1.6), (-0.2, 2.9)]:
plt.plot(xfit, m * xfit + b, '-k')
plt.xlim(-1, 3.5);
"""
Explanation: Motivating Support Vector Machines
End of explanation
"""
xfit = np.linspace(-1, 3.5)
plt.scatter(X[:, 0], X[:, 1], c=y, s=50, cmap='autumn')
for m, b, d in [(1, 0.65, 0.33), (0.5, 1.6, 0.55), (-0.2, 2.9, 0.2)]:
yfit = m * xfit + b
plt.plot(xfit, yfit, '-k')
plt.fill_between(xfit, yfit - d, yfit + d, edgecolor='none',
color='#AAAAAA', alpha=0.4)
plt.xlim(-1, 3.5);
"""
Explanation: Support Vector Machines: Maximizing the Margin
Support vector machines offer one way to improve on this.
The intuition is this: rather than simply drawing a zero-width line between the classes, we can draw around each line a margin of some width, up to the nearest point.
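Formally, for two classes labelled $y_i \in \{-1, +1\}$, maximizing the margin amounts to solving
$$\min_{w,\,b}\ \tfrac{1}{2}\lVert w \rVert^2 \quad \text{subject to} \quad y_i\,(w \cdot x_i + b) \ge 1 \ \text{ for all } i,$$
since the resulting margin width is $2/\lVert w \rVert$.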
Here is an example of how this might look:
End of explanation
"""
from sklearn.svm import SVC # "Support vector classifier"
model = SVC(kernel='linear', C=1E10)
model.fit(X, y)
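# Quick illustrative check (arbitrary test coordinates, not special values):
# the fitted classifier can now assign a class to previously unseen points.
print(model.predict([[0.6, 2.1], [2.5, 0.5]]))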
"""
Explanation: Fitting a support vector machine
Let's see the result of an actual fit to this data: we will use Scikit-Learn's support vector classifier to train an SVM model on this data.
For the time being, we will use a linear kernel and set the C parameter to a very large number (we'll discuss the meaning of these in more depth momentarily).
End of explanation
"""
def plot_svc_decision_function(model, ax=None, plot_support=True):
"""Plot the decision function for a 2D SVC"""
if ax is None:
ax = plt.gca()
xlim = ax.get_xlim()
ylim = ax.get_ylim()
# create grid to evaluate model
x = np.linspace(xlim[0], xlim[1], 30)
y = np.linspace(ylim[0], ylim[1], 30)
Y, X = np.meshgrid(y, x)
xy = np.vstack([X.ravel(), Y.ravel()]).T
P = model.decision_function(xy).reshape(X.shape)
# plot decision boundary and margins
ax.contour(X, Y, P, colors='k',
levels=[-1, 0, 1], alpha=0.5,
linestyles=['--', '-', '--'])
# plot support vectors
if plot_support:
ax.scatter(model.support_vectors_[:, 0],
model.support_vectors_[:, 1],
s=300, linewidth=1, facecolors='none');
ax.set_xlim(xlim)
ax.set_ylim(ylim)
plt.scatter(X[:, 0], X[:, 1], c=y, s=50, cmap='autumn')
plot_svc_decision_function(model);
model.support_vectors_
def plot_svm(N=10, ax=None):
X, y = make_blobs(n_samples=200, centers=2,
random_state=0, cluster_std=0.60)
X = X[:N]
y = y[:N]
model = SVC(kernel='linear', C=1E10)
model.fit(X, y)
ax = ax or plt.gca()
ax.scatter(X[:, 0], X[:, 1], c=y, s=50, cmap='autumn')
ax.set_xlim(-1, 4)
ax.set_ylim(-1, 6)
plot_svc_decision_function(model, ax)
fig, ax = plt.subplots(1, 2, figsize=(16, 6))
fig.subplots_adjust(left=0.0625, right=0.95, wspace=0.1)
for axi, N in zip(ax, [60, 120]):
plot_svm(N, axi)
axi.set_title('N = {0}'.format(N))
from ipywidgets import interact, fixed
interact(plot_svm, N=[10, 200], ax=fixed(None));
"""
Explanation: To better visualize what's happening here, let's create a quick convenience function that will plot SVM decision boundaries for us:
End of explanation
"""
from sklearn.datasets import make_circles  # import path compatible with newer scikit-learn
X, y = make_circles(100, factor=.1, noise=.1)
clf = SVC(kernel='linear').fit(X, y)
plt.scatter(X[:, 0], X[:, 1], c=y, s=50, cmap='autumn')
plot_svc_decision_function(clf, plot_support=False);
r = np.exp(-(X ** 2).sum(1))
"""
Explanation: Beyond linear boundaries: Kernel SVM
Where SVM becomes extremely powerful is when it is combined with kernels.
We have seen a version of kernels before, in the basis function regressions of In Depth: Linear Regression.
There we projected our data into higher-dimensional space defined by polynomials and Gaussian basis functions, and thereby were able to fit for nonlinear relationships with a linear classifier.
In SVM models, we can use a version of the same idea.
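One common choice, used with kernel='rbf' later in this notebook, is the radial basis function kernel $K(x, x') = \exp(-\gamma\,\lVert x - x' \rVert^2)$, which implicitly maps points into a very high-dimensional feature space without ever computing those coordinates explicitly.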
To motivate the need for kernels, let's look at some data that is not linearly separable:
End of explanation
"""
from mpl_toolkits import mplot3d
def plot_3D(elev=30, azim=30, X=X, y=y):
ax = plt.subplot(projection='3d')
ax.scatter3D(X[:, 0], X[:, 1], r, c=y, s=50, cmap='autumn')
ax.view_init(elev=elev, azim=azim)
ax.set_xlabel('x')
ax.set_ylabel('y')
ax.set_zlabel('r')
interact(plot_3D, elev=[-90, 90], azim=(-180, 180),
X=fixed(X), y=fixed(y));
clf = SVC(kernel='rbf', C=1E6)
clf.fit(X, y)
plt.scatter(X[:, 0], X[:, 1], c=y, s=50, cmap='autumn')
plot_svc_decision_function(clf)
plt.scatter(clf.support_vectors_[:, 0], clf.support_vectors_[:, 1],
s=300, lw=1, facecolors='none');
"""
Explanation: We can visualize this extra data dimension using a three-dimensional plot—if you are running this notebook live, you will be able to use the sliders to rotate the plot:
End of explanation
"""
X, y = make_blobs(n_samples=100, centers=2,
random_state=0, cluster_std=1.2)
plt.scatter(X[:, 0], X[:, 1], c=y, s=50, cmap='autumn');
X, y = make_blobs(n_samples=100, centers=2,
random_state=0, cluster_std=0.8)
fig, ax = plt.subplots(1, 2, figsize=(16, 6))
fig.subplots_adjust(left=0.0625, right=0.95, wspace=0.1)
for axi, C in zip(ax, [10.0, 0.1]):
model = SVC(kernel='linear', C=C).fit(X, y)
axi.scatter(X[:, 0], X[:, 1], c=y, s=50, cmap='autumn')
plot_svc_decision_function(model, axi)
axi.scatter(model.support_vectors_[:, 0],
model.support_vectors_[:, 1],
s=300, lw=1, facecolors='none');
axi.set_title('C = {0:.1f}'.format(C), size=14)
"""
Explanation: Tuning the SVM: Softening Margins
Our discussion thus far has centered around very clean datasets, in which a perfect decision boundary exists.
But what if your data has some amount of overlap?
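When it does, the optimization is relaxed with slack variables $\xi_i \ge 0$, minimizing $\tfrac{1}{2}\lVert w \rVert^2 + C \sum_i \xi_i$ subject to $y_i\,(w \cdot x_i + b) \ge 1 - \xi_i$; the hyperparameter $C$ sets how strongly margin violations are penalized, and it is exactly the knob compared in the plots below.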
For example, you may have data like this:
End of explanation
"""
|
prashantas/MyDataScience
|
PandasPractice.ipynb
|
bsd-2-clause
|
import numpy as np
import pandas as pd
labels = ['a','b','c']
my_data = [10,20,30]
arr = np.array(my_data)
d = {'a':10,'b':20,'c':30}
print ("Labels:", labels)
print("My data:", my_data)
print("Dictionary:", d)
"""
Explanation: Pandas Practice
Series
Loading packages and initializations
End of explanation
"""
pd.Series(data=my_data) # Output looks very similar to a NumPy array
pd.Series(data=my_data, index=labels) # Note the extra information about index
# Inputs are in order of the expected parameters (not explicitly named), NumPy array is used for data
pd.Series(arr, labels)
pd.Series(d) # Using a pre-defined Dictionary object
"""
Explanation: Creating a Series (Pandas class)
From numerical data only
From numerical data and corresponding index (row labels)
From NumPy array as the source of numerical data
Just using a pre-defined dictionary
End of explanation
"""
print ("\nHolding numerical data\n",'-'*25, sep='')
print(pd.Series(arr))
print ("\nHolding text labels\n",'-'*20, sep='')
print(pd.Series(labels))
print ("\nHolding functions\n",'-'*20, sep='')
print(pd.Series(data=[sum,print,len]))
print ("\nHolding objects from a dictionary\n",'-'*40, sep='')
print(pd.Series(data=[d.keys, d.items, d.values]))
"""
Explanation: What type of values can a Pandas Series hold?
End of explanation
"""
ser1 = pd.Series([1,2,3,4],['CA', 'OR', 'CO', 'AZ'])
ser2 = pd.Series([1,2,5,4],['CA', 'OR', 'NV', 'AZ'])
print ("\nIndexing by name of the item/object (string identifier)\n",'-'*56, sep='')
print("Value for CA in ser1:", ser1['CA'])
print("Value for AZ in ser1:", ser1['AZ'])
print("Value for NV in ser2:", ser2['NV'])
print ("\nIndexing by number (positional value in the series)\n",'-'*52, sep='')
print("Value for CA in ser1:", ser1[0])
print("Value for AZ in ser1:", ser1[3])
print("Value for NV in ser2:", ser2[2])
print ("\nIndexing by a range\n",'-'*25, sep='')
print ("Value for OR, CO, and AZ in ser1:\n", ser1[1:4], sep='')
"""
Explanation: Indexing and slicing
End of explanation
"""
ser1 = pd.Series([1,2,3,4],['CA', 'OR', 'CO', 'AZ'])
ser2 = pd.Series([1,2,5,4],['CA', 'OR', 'NV', 'AZ'])
ser3 = ser1+ser2
print ("\nAfter adding the two series, the result looks like this...\n",'-'*59, sep='')
print(ser3)
print("\nPython tries to add values where it finds common index name, and puts NaN where indices are missing\n")
print ("\nThe idea works even for multiplication...\n",'-'*43, sep='')
print (ser1*ser2)
print ("\nOr even for combination of mathematical operations!\n",'-'*53, sep='')
print (np.exp(ser1)+np.log10(ser2))
"""
Explanation: Adding/Merging two series with common indices
End of explanation
"""
from numpy.random import randn as rn
"""
Explanation: DataFrame (the Real Meat!)
End of explanation
"""
np.random.seed(101)
matrix_data = rn(5,4)
row_labels = ['A','B','C','D','E']
column_headings = ['W','X','Y','Z']
df = pd.DataFrame(data=matrix_data, index=row_labels, columns=column_headings)
print("\nThe data frame looks like\n",'-'*45, sep='')
print(df)
"""
Explanation: Creating and accessing DataFrame
Indexing
Adding and deleting rows and columns
Subsetting DataFrame
End of explanation
"""
print("\nThe 'X' column\n",'-'*25, sep='')
print(df['X'])
print("\nType of the column: ", type(df['X']), sep='')
print("\nThe 'X' and 'Z' columns indexed by passing a list\n",'-'*55, sep='')
print(df[['X','Z']])
print("\nType of the pair of columns: ", type(df[['X','Z']]), sep='')
print ("\nSo, for more than one column, the object turns into a DataFrame")
print("\nThe 'X' column accessed by DOT method (NOT recommended)\n",'-'*55, sep='')
print(df.X)
"""
Explanation: Indexing and slicing (columns)
By bracket method
By DOT method (NOT recommended)
End of explanation
"""
print("\nA column is created by assigning it in relation to an existing column\n",'-'*75, sep='')
df['New'] = df['X']+df['Z']
df['New (Sum of X and Z)'] = df['X']+df['Z']
print(df)
print("\nA column is dropped by using df.drop() method\n",'-'*55, sep='')
df = df.drop('New', axis=1) # Notice the axis=1 option, axis = 0 is default, so one has to change it to 1
print(df)
df1=df.drop('A')
print("\nA row (index) is dropped by using df.drop() method and axis=0\n",'-'*65, sep='')
print(df1)
print("\nAn in-place change can be done by making inplace=True in the drop method\n",'-'*75, sep='')
df.drop('New (Sum of X and Z)', axis=1, inplace=True)
print(df)
"""
Explanation: Creating and deleting a (new) column (or row)
End of explanation
"""
print("\nLabel-based 'loc' method can be used for selecting row(s)\n",'-'*60, sep='')
print("\nSingle row\n")
print(df.loc['C'])
print("\nMultiple rows\n")
print(df.loc[['B','C']])
print("\nIndex position based 'iloc' method can be used for selecting row(s)\n",'-'*70, sep='')
print("\nSingle row\n")
print(df.iloc[2])
print("\nMultiple rows\n")
print(df.iloc[[1,2]])
"""
Explanation: Selecting/indexing Rows
Label-based 'loc' method
Index (numeric) 'iloc' method
End of explanation
"""
np.random.seed(101)
matrix_data = rn(5,4)
row_labels = ['A','B','C','D','E']
column_headings = ['W','X','Y','Z']
df = pd.DataFrame(data=matrix_data, index=row_labels, columns=column_headings)
print("\nThe DatFrame\n",'-'*45, sep='')
print(df)
print("\nElement at row 'B' and column 'Y' is\n")
print(df.loc['B','Y'])
print("\nSubset comprising of rows B and D, and columns W and Y, is\n")
df.loc[['B','D'],['W','Y']]
"""
Explanation: Subsetting DataFrame
End of explanation
"""
print("\nThe DataFrame\n",'-'*45, sep='')
print(df)
print("\nBoolean DataFrame(s) where we are checking if the values are greater than 0\n",'-'*75, sep='')
print(df>0)
print("\n")
print(df.loc[['A','B','C']]>0)
booldf = df>0
print("\nDataFrame indexed by boolean dataframe\n",'-'*45, sep='')
print(df[booldf])
"""
Explanation: Conditional selection, index (re)setting, multi-index
Basic idea of conditional check and Boolean DataFrame
End of explanation
"""
matrix_data = np.matrix('22,66,140;42,70,148;30,62,125;35,68,160;25,62,152')
row_labels = ['A','B','C','D','E']
column_headings = ['Age', 'Height', 'Weight']
df = pd.DataFrame(data=matrix_data, index=row_labels, columns=column_headings)
print("\nA new DataFrame\n",'-'*25, sep='')
print(df)
print("\nRows with Height > 65 inch\n",'-'*35, sep='')
print(df[df['Height']>65])
booldf1 = df['Height']>65
booldf2 = df['Weight']>145
print("\nRows with Height > 65 inch and Weight >145 lbs\n",'-'*55, sep='')
print(df[(booldf1) & (booldf2)])
print("\nDataFrame with only Age and Weight columns whose Height > 65 inch\n",'-'*68, sep='')
print(df[booldf1][['Age','Weight']])
"""
Explanation: Passing Boolean series to conditionally subset the DataFrame
End of explanation
"""
matrix_data = np.matrix('22,66,140;42,70,148;30,62,125;35,68,160;25,62,152')
row_labels = ['A','B','C','D','E']
column_headings = ['Age', 'Height', 'Weight']
df = pd.DataFrame(data=matrix_data, index=row_labels, columns=column_headings)
print("\nThe DataFrame\n",'-'*25, sep='')
print(df)
print("\nAfter resetting index\n",'-'*35, sep='')
print(df.reset_index())
print("\nAfter resetting index with 'drop' option TRUE\n",'-'*45, sep='')
print(df.reset_index(drop=True))
print("\nAdding a new column 'Profession'\n",'-'*45, sep='')
df['Profession'] = "Student Teacher Engineer Doctor Nurse".split()
print(df)
print("\nSetting 'Profession' column as index\n",'-'*45, sep='')
print (df.set_index('Profession'))
"""
Explanation: Re-setting and Setting Index
End of explanation
"""
# Index Levels
outside = ['G1','G1','G1','G2','G2','G2']
inside = [1,2,3,1,2,3]
hier_index = list(zip(outside,inside))
print("\nTuple pairs after the zip and list command\n",'-'*45, sep='')
print(hier_index)
hier_index = pd.MultiIndex.from_tuples(hier_index)
print("\nIndex hierarchy\n",'-'*25, sep='')
print(hier_index)
print("\nIndex hierarchy type\n",'-'*25, sep='')
print(type(hier_index))
print("\nCreating DataFrame with multi-index\n",'-'*37, sep='')
np.random.seed(101)
df1 = pd.DataFrame(data=np.round(rn(6,3),2), index= hier_index, columns= ['A','B','C'])
print(df1)
print("\nSubsetting multi-index DataFrame using two 'loc' methods\n",'-'*60, sep='')
print(df1.loc['G2'].loc[[1,3]][['B','C']])
print("\nNaming the indices by 'index.names' method\n",'-'*45, sep='')
df1.index.names=['Outer', 'Inner']
print(df1)
"""
Explanation: Multi-indexing
End of explanation
"""
print("\nGrabbing a cross-section from outer level\n",'-'*45, sep='')
print(df1.xs('G1'))
print("\nGrabbing a cross-section from inner level (for all outer levels)\n",'-'*65, sep='')
print(df1.xs(2,level='Inner'))
"""
Explanation: Cross-section ('XS') command
End of explanation
"""
df = pd.DataFrame({'A':[1,2,np.nan],'B':[5,np.nan,np.nan],'C':[1,2,3]})
df['States']="CA NV AZ".split()
df.set_index('States',inplace=True)
print(df)
"""
Explanation: Missing Values
End of explanation
"""
print("\nDropping any rows with a NaN value\n",'-'*35, sep='')
print(df.dropna(axis=0))
print("\nDropping any column with a NaN value\n",'-'*35, sep='')
print(df.dropna(axis=1))
print("\nDropping a row with a minimum 2 NaN value using 'thresh' parameter\n",'-'*68, sep='')
print(df.dropna(axis=0, thresh=2))
"""
Explanation: Pandas 'dropna' method
End of explanation
"""
print("\nFilling values with a default value\n",'-'*35, sep='')
print(df.fillna(value='FILL VALUE'))
print("\nFilling values with a computed value (mean of column A here)\n",'-'*60, sep='')
print(df.fillna(value=df['A'].mean()))
"""
Explanation: Pandas 'fillna' method
End of explanation
"""
# Create dataframe
data = {'Company':['GOOG','GOOG','MSFT','MSFT','FB','FB'],
'Person':['Sam','Charlie','Amy','Vanessa','Carl','Sarah'],
'Sales':[200,120,340,124,243,350]}
df = pd.DataFrame(data)
df
byComp = df.groupby('Company')
print("\nGrouping by 'Company' column and listing mean sales\n",'-'*55, sep='')
print(byComp.mean())
print("\nGrouping by 'Company' column and listing sum of sales\n",'-'*55, sep='')
print(byComp.sum())
# Note dataframe conversion of the series and transpose
print("\nAll in one line of command (Stats for 'FB')\n",'-'*65, sep='')
print(pd.DataFrame(df.groupby('Company').describe().loc['FB']).transpose())
print("\nSame type of extraction with little different command\n",'-'*68, sep='')
print(df.groupby('Company').describe().loc[['GOOG', 'MSFT']])
"""
Explanation: GroupBy method
End of explanation
"""
# Creating data frames
df1 = pd.DataFrame({'A': ['A0', 'A1', 'A2', 'A3'],
'B': ['B0', 'B1', 'B2', 'B3'],
'C': ['C0', 'C1', 'C2', 'C3'],
'D': ['D0', 'D1', 'D2', 'D3']},
index=[0, 1, 2, 3])
df2 = pd.DataFrame({'A': ['A4', 'A5', 'A6', 'A7'],
'B': ['B4', 'B5', 'B6', 'B7'],
'C': ['C4', 'C5', 'C6', 'C7'],
'D': ['D4', 'D5', 'D6', 'D7']},
index=[4, 5, 6, 7])
df3 = pd.DataFrame({'A': ['A8', 'A9', 'A10', 'A11'],
'B': ['B8', 'B9', 'B10', 'B11'],
'C': ['C8', 'C9', 'C10', 'C11'],
'D': ['D8', 'D9', 'D10', 'D11']},
index=[8,9,10,11])
print("\nThe DataFrame number 1\n",'-'*30, sep='')
print(df1)
print("\nThe DataFrame number 2\n",'-'*30, sep='')
print(df2)
print("\nThe DataFrame number 3\n",'-'*30, sep='')
print(df3)
df_cat1 = pd.concat([df1,df2,df3], axis=0)
print("\nAfter concatenation along row\n",'-'*30, sep='')
print(df_cat1)
df_cat2 = pd.concat([df1,df2,df3], axis=1)
print("\nAfter concatenation along column\n",'-'*60, sep='')
print(df_cat2)
df_cat2.fillna(value=0, inplace=True)
print("\nAfter filling missing values with zero\n",'-'*60, sep='')
print(df_cat2)
"""
Explanation: Merging, Joining, Concatenating
Concatenation
End of explanation
"""
left = pd.DataFrame({'key': ['K0', 'K1', 'K2', 'K3'],
'A': ['A0', 'A1', 'A2', 'A3'],
'B': ['B0', 'B1', 'B2', 'B3']})
right = pd.DataFrame({'key': ['K0', 'K1', 'K2', 'K3'],
'C': ['C0', 'C1', 'C2', 'C3'],
'D': ['D0', 'D1', 'D2', 'D3']})
print("\nThe DataFrame 'left'\n",'-'*30, sep='')
print(left)
print("\nThe DataFrame 'right'\n",'-'*30, sep='')
print(right)
merge1= pd.merge(left,right,how='inner',on='key')
print("\nAfter simple merging with 'inner' method\n",'-'*50, sep='')
print(merge1)
"""
Explanation: Merging by a common 'key'
The merge function allows you to merge DataFrames together using logic similar to merging SQL tables.
End of explanation
"""
left = pd.DataFrame({'key1': ['K0', 'K0', 'K1', 'K2'],
'key2': ['K0', 'K1', 'K0', 'K1'],
'A': ['A0', 'A1', 'A2', 'A3'],
'B': ['B0', 'B1', 'B2', 'B3']})
right = pd.DataFrame({'key1': ['K0', 'K1', 'K1', 'K2'],
'key2': ['K0', 'K0', 'K0', 'K0'],
'C': ['C0', 'C1', 'C2', 'C3'],
'D': ['D0', 'D1', 'D2', 'D3']})
left
right
pd.merge(left, right, on=['key1', 'key2'])
pd.merge(left, right, how='outer',on=['key1', 'key2'])
pd.merge(left, right, how='left',on=['key1', 'key2'])
pd.merge(left, right, how='right',on=['key1', 'key2'])
"""
Explanation: Merging on a set of keys
End of explanation
"""
left = pd.DataFrame({'A': ['A0', 'A1', 'A2'],
'B': ['B0', 'B1', 'B2']},
index=['K0', 'K1', 'K2'])
right = pd.DataFrame({'C': ['C0', 'C2', 'C3'],
'D': ['D0', 'D2', 'D3']},
index=['K0', 'K2', 'K3'])
left
right
left.join(right)
left.join(right, how='outer')
"""
Explanation: Joining
Joining is a convenient method for combining the columns of two potentially differently-indexed DataFrames into a single DataFrame based on 'index keys'.
End of explanation
"""
import pandas as pd
df = pd.DataFrame({'col1':[1,2,3,4,5,6,7,8,9,10],
'col2':[444,555,666,444,333,222,666,777,666,555],
'col3':'aaa bb c dd eeee fff gg h iii j'.split()})
df
print("\nMethod head() is for showing first few entries\n",'-'*50, sep='')
df.head()
print("\nFinding unique values in 'col2'\n",'-'*40, sep='') # Note 'unique' method applies to pd.series only
print(df['col2'].unique())
print("\nFinding number of unique values in 'col2'\n",'-'*45, sep='')
print(df['col2'].nunique())
print("\nTable of unique values in 'col2'\n",'-'*40, sep='')
t1=df['col2'].value_counts()
print(t1)
"""
Explanation: Useful operations
head() and unique values
head()
unique()
nunique()
value_counts()
End of explanation
"""
# Define a function
def testfunc(x):
if (x> 500):
return (10*np.log10(x))
else:
return (x/10)
df['FuncApplied'] = df['col2'].apply(testfunc)
print(df)
"""
Explanation: Applying functions
Pandas provides the 'apply' method, which accepts any user-defined function
End of explanation
"""
df['col3length']= df['col3'].apply(len)
print(df)
"""
Explanation: Apply works with built-in functions too!
End of explanation
"""
df['FuncApplied'].apply(lambda x: np.sqrt(x))
"""
Explanation: Combine 'apply' with lambda expressions for in-line calculations
End of explanation
"""
print("\nSum of the column 'FuncApplied' is: ",df['FuncApplied'].sum())
print("Mean of the column 'FuncApplied' is: ",df['FuncApplied'].mean())
print("Std dev of the column 'FuncApplied' is: ",df['FuncApplied'].std())
print("Min and max of the column 'FuncApplied' are: ",df['FuncApplied'].min(),"and",df['FuncApplied'].max())
"""
Explanation: Standard statistical functions directly apply to columns
End of explanation
"""
print("\nName of columns\n",'-'*20, sep='')
print(df.columns)
l = list(df.columns)
print("\nColumn names in a list of strings for later manipulation:",l)
"""
Explanation: Deletion, sorting, list of column and row names
Getting the names of the columns
End of explanation
"""
print("\nDeleting last column by 'del' command\n",'-'*50, sep='')
del df['col3length']
print(df)
df['col3length']= df['col3'].apply(len)
"""
Explanation: Deletion by the 'del' command. Note that this affects the DataFrame immediately, unlike the drop method (see the sketch below).
End of explanation
"""
df.sort_values(by='col2') #inplace=False by default
df.sort_values(by='FuncApplied',ascending=False) #inplace=False by default
"""
Explanation: Sorting and Ordering a DataFrame
End of explanation
"""
df = pd.DataFrame({'col1':[1,2,3,np.nan],
'col2':[np.nan,555,666,444],
'col3':['abc','def','ghi','xyz']})
df.head()
df.isnull()
df.fillna('FILL')
"""
Explanation: Find Null Values or Check for Null Values
End of explanation
"""
data = {'A':['foo','foo','foo','bar','bar','bar'],
'B':['one','one','two','two','one','one'],
'C':['x','y','x','y','x','y'],
'D':[1,3,2,5,4,1]}
df = pd.DataFrame(data)
df
# Index out of 'A' and 'B', columns from 'C', actual numerical values from 'D'
df.pivot_table(values='D',index=['A', 'B'],columns=['C'])
# Index out of 'A' and 'B', columns from 'C', actual numerical values from 'D'
df.pivot_table(values='D',index=['A', 'B'],columns=['C'], fill_value='FILLED')
"""
Explanation: Pivot Table
End of explanation
"""
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
"""
Explanation: Pandas built-in Visualization
Import packages
End of explanation
"""
df1=pd.read_csv('df1.csv', index_col=0)
df1.head()
df2=pd.read_csv('df2')
df2.head()
"""
Explanation: Read in the CSV data file
End of explanation
"""
df1['A'].hist()
"""
Explanation: Histogram of a single column
End of explanation
"""
df1.hist(column=['B','C'],bins=20,figsize=(10,4))
"""
Explanation: Histogram with a different set of arguments (list of columns, bins, figure size, etc)
End of explanation
"""
df1.plot(kind='hist', bins=30, grid=True, figsize=(12,7))
"""
Explanation: Histogram with generic plot method of Pandas
End of explanation
"""
import seaborn as sns #Plot style will change to Seaborn package style from now on
df2.plot.area(alpha=0.4)
"""
Explanation: Area plot
End of explanation
"""
df2.plot.bar()
df2.plot.bar(stacked=True)
"""
Explanation: Bar plot (with and without stacking)
End of explanation
"""
df1.plot.line(x=df1.index,y=['B','C'],figsize=(12,4),lw=1) # Note matplotlib arguments like 'lw' and 'figsize'
"""
Explanation: Lineplot
End of explanation
"""
df1.plot.scatter(x='A',y='B',figsize=(12,8))
df1.plot.scatter(x='A',y='B',c='C',cmap='coolwarm',figsize=(12,8)) # Color of the scatter dots set based on column C
df1.plot.scatter(x='A',y='B',s=10*np.exp(df1['C']),c='C',figsize=(12,8)) # Size of the dots set based on column C
"""
Explanation: Scatterplot
End of explanation
"""
df2.plot.box()
"""
Explanation: Boxplot
End of explanation
"""
df=pd.DataFrame(data=np.random.randn(1000,2),columns=['A','B'])
df.head()
df.plot.hexbin(x='A',y='B',gridsize=30,cmap='coolwarm')
"""
Explanation: Hexagonal bin plot for bivariate data
End of explanation
"""
df2.plot.density(lw=3)
"""
Explanation: Kernel density estimation
End of explanation
"""
|
AllenDowney/ThinkBayes2
|
examples/skeet2.ipynb
|
mit
|
# Configure Jupyter so figures appear in the notebook
%matplotlib inline
# Configure Jupyter to display the assigned value after an assignment
%config InteractiveShell.ast_node_interactivity='last_expr_or_assign'
# import classes from thinkbayes2
from thinkbayes2 import Hist, Pmf, Suite, Beta
import thinkplot
import numpy as np
"""
Explanation: Think Bayes
This notebook presents example code and exercise solutions for Think Bayes.
Copyright 2018 Allen B. Downey
MIT License: https://opensource.org/licenses/MIT
End of explanation
"""
rhode = Beta(1, 1, label='Rhode')
rhode.Update((22, 11))
wei = Beta(1, 1, label='Wei')
wei.Update((21, 12))
"""
Explanation: Comparing distributions
Let's get back to the Kim Rhode problem from Chapter 4:
At the 2016 Summer Olympics in the Women's Skeet event, Kim Rhode faced Wei Meng in the bronze medal match. They each hit 15 of 25 targets, sending the match into sudden death. In the first round, both hit 1 of 2 targets. In the next two rounds, they each hit 2 targets. Finally, in the fourth round, Rhode hit 2 and Wei hit 1, so Rhode won the bronze medal, making her the first Summer Olympian to win an individual medal at six consecutive summer games.
But after all that shooting, what is the probability that Rhode is actually a better shooter than Wei? If the same match were held again, what is the probability that Rhode would win?
I'll start with a uniform distribution for x, the probability of hitting a target, but we should check whether the results are sensitive to that choice.
First I create a Beta distribution for each of the competitors, and update it with the results.
End of explanation
"""
thinkplot.Pdf(rhode.MakePmf())
thinkplot.Pdf(wei.MakePmf())
thinkplot.Config(xlabel='x', ylabel='Probability')
"""
Explanation: Based on the data, the distribution for Rhode is slightly farther right than the distribution for Wei, but there is a lot of overlap.
End of explanation
"""
iters = 1000
count = 0
for _ in range(iters):
x1 = rhode.Random()
x2 = wei.Random()
if x1 > x2:
count += 1
count / iters
"""
Explanation: To compute the probability that Rhode actually has a higher value of x, there are two options:
Sampling: we could draw random samples from the posterior distributions and compare them.
Enumeration: we could enumerate all possible pairs of values and add up the "probability of superiority".
I'll start with sampling. The Beta object provides a method that draws a random value from a Beta distribution:
End of explanation
"""
rhode_sample = rhode.Sample(iters)
wei_sample = wei.Sample(iters)
np.mean(rhode_sample > wei_sample)
"""
Explanation: Beta also provides Sample, which returns a NumPy array, so we can perform the comparisons using array operations:
End of explanation
"""
def ProbGreater(pmf1, pmf2):
total = 0
for x1, prob1 in pmf1.Items():
for x2, prob2 in pmf2.Items():
if x1 > x2:
total += prob1 * prob2
return total
pmf1 = rhode.MakePmf(1001)
pmf2 = wei.MakePmf(1001)
ProbGreater(pmf1, pmf2)
pmf1.ProbGreater(pmf2)
pmf1.ProbLess(pmf2)
"""
Explanation: The other option is to make Pmf objects that approximate the Beta distributions, and enumerate pairs of values:
End of explanation
"""
import random
def flip(p):
return random.random() < p
"""
Explanation: Exercise: Run this analysis again with a different prior and see how much effect it has on the results.
Simulation
To make predictions about a rematch, we have two options again:
Sampling. For each simulated match, we draw a random value of x for each contestant, then simulate 25 shots and count hits.
Computing a mixture. If we knew x exactly, the distribution of hits, k, would be binomial. Since we don't know x, the distribution of k is a mixture of binomials with different values of x.
I'll do it by sampling first.
End of explanation
"""
iters = 1000
wins = 0
losses = 0
for _ in range(iters):
x1 = rhode.Random()
x2 = wei.Random()
count1 = count2 = 0
for _ in range(25):
if flip(x1):
count1 += 1
if flip(x2):
count2 += 1
if count1 > count2:
wins += 1
if count1 < count2:
losses += 1
wins/iters, losses/iters
"""
Explanation: flip returns True with probability p and False with probability 1-p
Now we can simulate 1000 rematches and count wins and losses.
End of explanation
"""
rhode_rematch = np.random.binomial(25, rhode_sample)
thinkplot.Hist(Pmf(rhode_rematch))
wei_rematch = np.random.binomial(25, wei_sample)
np.mean(rhode_rematch > wei_rematch)
np.mean(rhode_rematch < wei_rematch)
"""
Explanation: Or, realizing that the distribution of k is binomial, we can simplify the code using NumPy:
End of explanation
"""
from thinkbayes2 import MakeBinomialPmf
def MakeBinomialMix(pmf, label=''):
mix = Pmf(label=label)
for x, prob in pmf.Items():
binom = MakeBinomialPmf(n=25, p=x)
for k, p in binom.Items():
mix[k] += prob * p
return mix
rhode_rematch = MakeBinomialMix(rhode.MakePmf(), label='Rhode')
wei_rematch = MakeBinomialMix(wei.MakePmf(), label='Wei')
thinkplot.Pdf(rhode_rematch)
thinkplot.Pdf(wei_rematch)
thinkplot.Config(xlabel='hits')
rhode_rematch.ProbGreater(wei_rematch), rhode_rematch.ProbLess(wei_rematch)
"""
Explanation: Alternatively, we can make a mixture that represents the distribution of k, taking into account our uncertainty about x:
End of explanation
"""
from thinkbayes2 import MakeMixture
def MakeBinomialMix2(pmf):
binomials = Pmf()
for x, prob in pmf.Items():
binom = MakeBinomialPmf(n=25, p=x)
binomials[binom] = prob
return MakeMixture(binomials)
"""
Explanation: Alternatively, we could use MakeMixture:
End of explanation
"""
rhode_rematch = MakeBinomialMix2(rhode.MakePmf())
wei_rematch = MakeBinomialMix2(wei.MakePmf())
rhode_rematch.ProbGreater(wei_rematch), rhode_rematch.ProbLess(wei_rematch)
"""
Explanation: Here's how we use it.
End of explanation
"""
iters = 1000
pmf = Pmf()
for _ in range(iters):
k = rhode_rematch.Random() + wei_rematch.Random()
pmf[k] += 1
pmf.Normalize()
thinkplot.Hist(pmf)
"""
Explanation: Exercise: Run this analysis again with a different prior and see how much effect it has on the results.
Distributions of sums and differences
Suppose we want to know the total number of targets the two contestants will hit in a rematch. There are two ways we might compute the distribution of this sum:
Sampling: We can draw samples from the distributions and add them up.
Enumeration: We can enumerate all possible pairs of values.
I'll start with sampling:
End of explanation
"""
ks = rhode_rematch.Sample(iters) + wei_rematch.Sample(iters)
pmf = Pmf(ks)
thinkplot.Hist(pmf)
"""
Explanation: Or we could use Sample and NumPy:
End of explanation
"""
def AddPmfs(pmf1, pmf2):
pmf = Pmf()
for v1, p1 in pmf1.Items():
for v2, p2 in pmf2.Items():
pmf[v1 + v2] += p1 * p2
return pmf
"""
Explanation: Alternatively, we could compute the distribution of the sum by enumeration:
End of explanation
"""
pmf = AddPmfs(rhode_rematch, wei_rematch)
thinkplot.Pdf(pmf)
"""
Explanation: Here's how it's used:
End of explanation
"""
pmf = rhode_rematch + wei_rematch
thinkplot.Pdf(pmf)
"""
Explanation: The Pmf class provides a + operator that does the same thing.
End of explanation
"""
# Solution goes here
# Solution goes here
# Solution goes here
"""
Explanation: Exercise: The Pmf class also provides the - operator, which computes the distribution of the difference in values from two distributions. Use the distributions from the previous section to compute the distribution of the differential between Rhode and Wei in a rematch. On average, how many clays should we expect Rhode to win by? What is the probability that Rhode wins by 10 or more?
End of explanation
"""
iters = 1000
pmf = Pmf()
for _ in range(iters):
ks = rhode_rematch.Sample(6)
pmf[max(ks)] += 1
pmf.Normalize()
thinkplot.Hist(pmf)
"""
Explanation: Distribution of maximum
Suppose Kim Rhode continues to compete in six more Olympics. What should we expect her best result to be?
Once again, there are two ways we can compute the distribution of the maximum:
Sampling.
Analysis of the CDF.
Here's a simple version by sampling:
End of explanation
"""
iters = 1000
ks = rhode_rematch.Sample((6, iters))
ks
"""
Explanation: And here's a version using NumPy. I'll generate an array with 6 rows and 1000 columns:
End of explanation
"""
maxes = np.max(ks, axis=0)
maxes[:10]
"""
Explanation: Compute the maximum in each column:
End of explanation
"""
pmf = Pmf(maxes)
thinkplot.Hist(pmf)
"""
Explanation: And then plot the distribution of maximums:
End of explanation
"""
pmf = rhode_rematch.Max(6).MakePmf()
thinkplot.Hist(pmf)
"""
Explanation: Or we can figure it out analytically. If the maximum is less-than-or-equal-to some value k, all 6 random selections must be less-than-or-equal-to k, so:
$ CDF_{max}(x) = CDF(x)^6 $
Pmf provides a method that computes and returns this Cdf, so we can compute the distribution of the maximum like this:
End of explanation
"""
def Min(pmf, k):
cdf = pmf.MakeCdf()
cdf.ps = 1 - (1-cdf.ps)**k
return cdf
pmf = Min(rhode_rematch, 6).MakePmf()
thinkplot.Hist(pmf)
"""
Explanation: Exercise: Here's how Pmf.Max works:
def Max(self, k):
"""Computes the CDF of the maximum of k selections from this dist.
k: int
returns: new Cdf
"""
cdf = self.MakeCdf()
cdf.ps **= k
return cdf
Write a function that takes a Pmf and an integer n and returns a Pmf that represents the distribution of the minimum of k values drawn from the given Pmf. Use your function to compute the distribution of the minimum score Kim Rhode would be expected to shoot in six competitions.
End of explanation
"""
|
jamesjia94/BIDMach
|
tutorials/NVIDIA/.ipynb_checkpoints/CreateNets-checkpoint.ipynb
|
bsd-3-clause
|
import BIDMat.{CMat,CSMat,DMat,Dict,IDict,Image,FMat,FND,GDMat,GMat,GIMat,GSDMat,GSMat,HMat,IMat,Mat,SMat,SBMat,SDMat}
import BIDMat.MatFunctions._
import BIDMat.SciFunctions._
import BIDMat.Solvers._
import BIDMat.JPlotting._
import BIDMach.Learner
import BIDMach.models.{FM,GLM,KMeans,KMeansw,ICA,LDA,LDAgibbs,Model,NMF,RandomForest,SFA,SVD}
import BIDMach.networks.{Net}
import BIDMach.datasources.{DataSource,MatSource,FileSource,SFileSource}
import BIDMach.mixins.{CosineSim,Perplexity,Top,L1Regularizer,L2Regularizer}
import BIDMach.updaters.{ADAGrad,Batch,BatchNorm,IncMult,IncNorm,Telescoping}
import BIDMach.causal.{IPTW}
Mat.checkMKL
Mat.checkCUDA
Mat.setInline
if (Mat.hasCUDA > 0) GPUmem
"""
Explanation: Creating Deep Networks
In this last section, we'll construct our networks from scratch.
The target problem is again classification of Higgs Boson data.
Let's load BIDMat/BIDMach
End of explanation
"""
val dir = "/code/BIDMach/data/uci/Higgs/parts/"
"""
Explanation: And define the root directory for this dataset.
End of explanation
"""
val (mm, opts) = Net.learner(dir+"data%03d.fmat.lz4", dir+"label%03d.fmat.lz4")
"""
Explanation: Constructing a deep network Learner
The "Net" class is the parent class for Deep networks. By defining a learner, we also configure a datasource, an optimization method, and possibly a regularizer.
End of explanation
"""
opts.hasBias = true; // Include additive bias in linear layers
opts.links = iones(1,1); // The link functions specify output loss, 1= logistic
opts.nweight = 1e-4f // weight for normalization layers
"""
Explanation: The next step is to define the network to run. First we set some options:
End of explanation
"""
import BIDMach.networks.layers.Node._
import BIDMach.networks.layers.NodeSet
"""
Explanation: Now we define the network itself. We'll import a couple of classes that define convenience functions to generate the nodes in the network.
End of explanation
"""
val in = input; // An input node
val lin1 = linear(in)(outdim = 1000, hasBias = opts.hasBias); // A linear layer
val sig1 = σ(lin1) // A sigmoid layer
val norm1 = norm(sig1)(weight = opts.nweight) // A normalization layer
val lin2 = linear(norm1)(outdim = 1, hasBias = opts.hasBias); // A linear layer
val out = glm(lin2)(irow(1)) // Output GLM layer
"""
Explanation: Now we'll define the network itself. Each layer is represented by a function of (one or more) input layers.
Layers have optional arguments, specified in curried form (second group of parentheses). The layer types include:
* input layer - mandatory as the first layer.
* linear layer - takes an input, and an optional output dimension and bias
* sigmoid layer - σ or "sigmoid" takes a single input
* tanh layer - "tanh" with a single input
* rectifying layer - "rect" with a single input (output = max(0,input))
* softplus layer - "softplus" with a single input
* normalization layer - takes an input and a weight parameter
* output GLM layer - expects a "links" option with integer values which specify the type of link function, 1=logistic
End of explanation
"""
val mynodes = Array(in, lin1, sig1, norm1, lin2, out);
opts.nodeset = new NodeSet(mynodes.length, mynodes)
"""
Explanation: Finally we assemble the net by placing the elements in an array and passing a NodeSet built from them to the Learner.
End of explanation
"""
opts.nend = 10 // The last file number in the datasource
opts.npasses = 5 // How many passes to make over the data
opts.batchSize = 200 // The minibatch size
opts.evalStep = 511 // Count of minibatch between eval steps
opts.lrate = 0.01f; // Learning rate
opts.texp = 0.4f; // Time exponent for ADAGRAD
"""
Explanation: Tuning Options
Here follow some tuning options
End of explanation
"""
mm.train
"""
Explanation: You invoke the learner the same way as before.
End of explanation
"""
val model = mm.model.asInstanceOf[Net]
val ta = loadFMat(dir + "data%03d.fmat.lz4" format 10);
val tc = loadFMat(dir + "label%03d.fmat.lz4" format 10);
val (nn,nopts) = Net.predictor(model, ta);
nopts.batchSize=10000
"""
Explanation: Now let's extract the model and use it to predict labels on a held-out sample of data.
End of explanation
"""
nn.predict
"""
Explanation: Let's run the predictor
End of explanation
"""
val pc = FMat(nn.preds(0))
val rc = roc(pc, tc, 1-tc, 1000);
mean(rc)
plot(rc)
"""
Explanation: To evaluate, we extract the predictions as a floating-point matrix, and then compute a ROC curve with them. The mean of this curve is the AUC (Area Under the Curve).
End of explanation
"""
|
linhbngo/cpsc-4770_6770
|
05-intro-to-mpi.ipynb
|
gpl-3.0
|
%%writefile codes/openmpi/first.c
#include <stdio.h>
#include <sys/utsname.h>
#include <mpi.h>
int main(int argc, char *argv[]){
MPI_Init(&argc, &argv);
struct utsname uts;
uname (&uts);
printf("My process is on node %s.\n", uts.nodename);
MPI_Finalize();
return 0;
}
!mpicc codes/openmpi/first.c -o ~/first
!mpirun -np 2 ~/first
"""
Explanation: <center>Introduction to Message Passing Interface (MPI)</center>
<center> Linh B. Ngo </center>
<center>Message Passing
Processes communicate via messages
Messages can be
Raw data to be used in actual calculations
Signals and acknowledgements for the receiving processes regarding the workflow
<center>History of MPI
Early 80s:
- Various message passing environments were developed
- Many similar fundamental concepts
- N-cube (Caltech), P4 (Argonne), PICL and PVM (Oak Ridge), LAM (Ohio SC)
1992:
- More than 80 researchers from different institutions in the US and Europe agreed to develop and implement a common standard for message passing
- First meeting colocated with Supercomputing 1992
After finalization:
- MPI becomes the de facto standard for distributed memory parallel programming
- Available on every popular operating system and architecture
- Interconnect manufacturers commonly provide MPI implementations optimized for their hardware
- MPI standard defines interfaces for C, C++, and Fortran
- Language bindings available for many popular languages (quality varies)
1994: MPI-1
- Communicators
- Information about the runtime environments
- Creation of customized topologies
- Point-to-point communication
- Send and receive messages
- Blocking and non-blocking variations
- Collectives
- Broadcast and reduce
- Gather and scatter
1998: MPI-2
- One-sided communication (non-blocking)
- Get & Put (remote memory access)
- Dynamic process management
- Spawn
- Parallel I/O
- Multiple readers and writers for a single file
- Requires file-system level support (LustreFS, PVFS)
2012: MPI-3
- Revised remote-memory access semantic
- Fault tolerance model
- Non-blocking collective communication
- Access to internal variables, states, and counters for performance evaluation purposes
<center> Set up MPI on Palmetto for C/C++
Open a terminal and run the following commands. For ssh-keygen, hit Enter for everything and don't put in a password when asked.
$ cd ~
$ sudo yum -y group install "Development Tools"
$ wget https://download.open-mpi.org/release/open-mpi/v3.1/openmpi-3.1.2.tar.gz
$ tar xzf openmpi-3.1.2.tar.gz
$ cd openmpi-3.1.2
$ ./configure --prefix=/opt/openmpi/3.1.2
$ sudo make
$ sudo make all install
$ echo "export PATH='$PATH:/opt/openmpi/3.1.2/bin'" >> ~/.bashrc
$ echo "export LD_LIBRARY_PATH='$LD_LIBRARY_PATH:/opt/openmpi/3.1.2/lib/'" >> ~/.bashrc
$ source ~/.bashrc
$ cd ~
$ ssh-keygen
$ ssh-copy-id localhost
After the above steps are completed successfully, you will need to return to the VM, run the command source ~/.bashrc and restart the Jupyter notebook server.
The next cells can be run from inside the Jupyter notebook.
End of explanation
"""
%%writefile codes/openmpi/hello.c
#include <mpi.h>
#include <stdio.h>
int main(int argc, char** argv) {
int size;
int my_rank;
char proc_name[MPI_MAX_PROCESSOR_NAME];
int proc_name_len;
/* Initialize the MPI environment */
MPI_Init(&argc, &argv);
/* Get the number of processes */
MPI_Comm_size(MPI_COMM_WORLD, &size);
/* Get the rank of the process */
MPI_Comm_rank(MPI_COMM_WORLD, &my_rank);
/* Get the name of the processor */
MPI_Get_processor_name(proc_name, &proc_name_len);
/* Print off a hello world message */
printf("Hello world from processor %s, rank %d out of %d processes\n",
proc_name, my_rank, size);
/* Finalize the MPI environment. */
MPI_Finalize();
}
!mpicc codes/openmpi/hello.c -o ~/hello
!mpirun -np 2 ~/hello
"""
Explanation: <center> The working of MPI in a nutshell
All processes are launched at the beginning of the program execution
The number of processes is user-specified
Typically, this number is matched to the total number of cores available across the entire cluster
All processes have their own memory space and have access to the same source codes
Basic parameters available to individual processes:
MPI_COMM_WORLD
MPI_Comm_rank()
MPI_Comm_size()
MPI_Get_processor_name()
MPI defines communicator groups for point-to-point and collective communications
Unique IDs (rank) are defined for individual processes within a communicator group
Communications are performed based on these IDs
Default global communication (MPI_COMM_WORLD) contains all processes
For $N$ processes, ranks go from $0$ to $N-1$
End of explanation
"""
!mpirun -np 8 ~/hello
!mpirun -np 8 --map-by core:OVERSUBSCRIBE ~/hello
"""
Explanation: Important
On many VMs, you might not have enough physical cores allocated to the VM in VirtualBox. To enable the simulation of multiple processes, you need to add --map-by core:OVERSUBSCRIBE to your mpirun commands
End of explanation
"""
%%writefile codes/openmpi/evenodd.c
#include <mpi.h>
#include <stdio.h>
int main(int argc, char** argv) {
int my_rank;
/* Initialize the MPI environment */
MPI_Init(&argc, &argv);
/* Get the rank of the process */
MPI_Comm_rank(MPI_COMM_WORLD, &my_rank);
if (my_rank % 2 == 0) {
printf ("Process %d is even \n", my_rank);
} else {
printf ("Process %d is odd \n", my_rank);
}
MPI_Finalize();
}
!mpicc codes/openmpi/evenodd.c -o ~/evenodd
!mpirun -np 4 --map-by core:OVERSUBSCRIBE ~/evenodd
"""
Explanation: Ranks are used to enforce execution/exclusion of code segments within the original source code
End of explanation
"""
%%writefile codes/openmpi/rank_size.c
#include <mpi.h>
#include <stdio.h>
int main(int argc, char** argv) {
int size;
int my_rank;
int A[16] = {2,13,4,3,5,1,0,12,10,8,7,9,11,6,15,14};
int i;
MPI_Init(&argc, &argv);
MPI_Comm_size(MPI_COMM_WORLD, &size);
MPI_Comm_rank(MPI_COMM_WORLD, &my_rank);
for (i = 0; i < 16; i++){
if (i % size == my_rank){
printf ("Process %d has elements %d at index %d \n",
my_rank, A[i], i);
}
}
/* Finalize the MPI environment. */
MPI_Finalize();
}
!mpicc codes/openmpi/rank_size.c -o ~/rank_size
!mpirun -np 4 ~/rank_size
"""
Explanation: Ranks and size are used as a means to calculate and distribute workload (data) among the processes
End of explanation
"""
We want to write an MPI program that exchanges the values (ranks) of two processes, 0 and 1.
%%writefile codes/openmpi/send_recv.c
#include <mpi.h>
#include <stdio.h>
#include <string.h>
int main(int argc, char** argv)
{
int my_rank;
int size;
int tag=0;
int buf,i;
int des1,des2;
MPI_Status status;
MPI_Init(&argc, &argv);
MPI_Comm_rank(MPI_COMM_WORLD, &my_rank);
MPI_Comm_size(MPI_COMM_WORLD, &size);
/* set up data */
buf = my_rank;
printf("Process %2d has original value %2d \n",my_rank,buf);
if (my_rank == 0){
MPI_Send(&buf,1,MPI_INT,1,tag,MPI_COMM_WORLD);
MPI_Recv(&buf,1,MPI_INT,1,tag,MPI_COMM_WORLD,&status);
}
if (my_rank == 1){
MPI_Recv(&buf,1,MPI_INT,0,tag,MPI_COMM_WORLD,&status);
MPI_Send(&buf,1,MPI_INT,0,tag,MPI_COMM_WORLD);
}
printf("Process %2d now has value %2d\n",my_rank,buf);
MPI_Finalize();
} /* end main */
!mpicc codes/openmpi/send_recv.c -o ~/send_recv
!mpirun -np 2 ~/send_recv
"""
Explanation: Individual processes rely on communication (message passing) to enforce workflow
Point-to-point Communication
Collective Communication
<center> Point-to-Point: Send and Receive
Original MPI C Syntax: MPI_Send
int MPI_Send(void *buf,
int count,
MPI_Datatype datatype,
int dest,
int tag,
MPI_Comm comm)
MPI_Datatype may be MPI_BYTE, MPI_PACKED, MPI_CHAR, MPI_SHORT, MPI_INT, MPI_LONG, MPI_FLOAT, MPI_DOUBLE, MPI_LONG_DOUBLE, MPI_UNSIGNED_CHAR
dest is the rank of the process the message is sent to
tag is an integer identifying the message. The programmer is responsible for managing tags
Original MPI C Syntax: MPI_Recv
int MPI_Recv(
void *buf,
int count,
MPI_Datatype datatype,
int source,
int tag,
MPI_Comm comm,
MPI_Status *status)
MPI_Datatype may be MPI_BYTE, MPI_PACKED, MPI_CHAR, MPI_SHORT, MPI_INT, MPI_LONG, MPI_FLOAT, MPI_DOUBLE, MPI_LONG_DOUBLE, MPI_UNSIGNED_CHAR
source is the rank of the process from which the message was sent.
tag is an integer identifying the message. MPI_Recv will only place data in the buffer if the tag from MPI_Send matches. The constant MPI_ANY_TAG may be used when the source tag is unknown or not important.
End of explanation
"""
%%writefile ~/send_recv_fixed.c
#include <mpi.h>
#include <stdio.h>
#include <string.h>
int main(int argc, char** argv)
{
int my_rank;
int size;
int tag=0;
int buf,i;
int des1,des2;
MPI_Status status;
MPI_Init(&argc, &argv);
MPI_Comm_rank(MPI_COMM_WORLD, &my_rank);
MPI_Comm_size(MPI_COMM_WORLD, &size);
/* set up data */
buf = my_rank;
printf("Process %2d has original value %2d \n",my_rank,buf);
if (my_rank == 0){
MPI_Send(&buf,1,MPI_INT,1,tag,MPI_COMM_WORLD);
MPI_Recv(&buf,1,MPI_INT,1,tag,MPI_COMM_WORLD,&status);
}
if (my_rank == 1){
MPI_Recv(&buf,1,MPI_INT,0,tag,MPI_COMM_WORLD,&status);
MPI_Send(&buf,1,MPI_INT,0,tag,MPI_COMM_WORLD);
}
printf("Process %2d now has value %2d\n",my_rank,buf);
MPI_Finalize();
} /* end main */
!mpicc ~/send_recv_fixed.c -o ~/send_recv_fixed
!mpirun -np 2 ~/send_recv_fixed
"""
Explanation: What went wrong? Process 1 calls MPI_Recv before MPI_Send, so it overwrites its own value with the one received from process 0; both processes then end up with the same value instead of exchanging them.
End of explanation
"""
%%writefile codes/openmpi/multi_send_recv.c
#include <mpi.h>
#include <stdio.h>
#include <string.h>
int main(int argc, char** argv)
{
int my_rank;
int size;
int tag=0;
int buf,i;
int des1,des2;
MPI_Status status;
MPI_Init(&argc, &argv);
MPI_Comm_rank(MPI_COMM_WORLD, &my_rank);
MPI_Comm_size(MPI_COMM_WORLD, &size);
/* set up data */
buf = my_rank;
//printf("Process %2d has original value %2d \n",my_rank,buf);
/* set up source and destination */
des1 = (my_rank + 1) % size;
des2 = (my_rank + size - 1) % size;
//printf("Process %2d has des1 %2d and des2 %2d\n",my_rank,des1,des2);
/* shift the data n/2 steps */
for (i = 0; i < size/2; i++){
MPI_Send(&buf,1,MPI_INT,des1,tag,MPI_COMM_WORLD);
MPI_Recv(&buf,1,MPI_INT,MPI_ANY_SOURCE,tag,MPI_COMM_WORLD,&status);
MPI_Barrier(MPI_COMM_WORLD);
}
MPI_Send(&buf,1,MPI_INT,des2,tag,MPI_COMM_WORLD);
MPI_Recv(&buf,1,MPI_INT,MPI_ANY_SOURCE,tag,MPI_COMM_WORLD,&status);
MPI_Barrier(MPI_COMM_WORLD);
printf("Process %2d now has value %2d\n",my_rank,buf);
/* Shut down MPI */
MPI_Finalize();
} /* end main */
!mpicc codes/openmpi/multi_send_recv.c -o ~/multi_send_recv
!mpirun -np 4 --map-by core:OVERSUBSCRIBE ~/multi_send_recv
"""
Explanation: How do we do point-to-point communication at scale?
- Rely on rank and size
End of explanation
"""
%%writefile codes/openmpi/deadlock_send_recv.c
#include <mpi.h>
#include <stdio.h>
#include <string.h>
int main(int argc, char* argv[])
{
int my_rank; /* rank of process */
int size; /* number of processes */
int source; /* rank of sender */
int dest; /* rank of receiver */
int tag=0; /* tag for messages */
char message[100]; /* storage for message */
MPI_Status status; /* return status for receive */
MPI_Init(&argc, &argv);
MPI_Comm_rank(MPI_COMM_WORLD, &my_rank);
MPI_Comm_size(MPI_COMM_WORLD, &size);
fprintf(stderr,"I am here! ID = %d\n", my_rank);
sprintf(message, "Greetings from process %d!", my_rank);
if (my_rank == 0) {
dest = 1;
MPI_Recv(message, 100, MPI_CHAR, dest, tag, MPI_COMM_WORLD, &status);
MPI_Send(message, strlen(message)+1, MPI_CHAR, dest, tag, MPI_COMM_WORLD);
printf("Process 0 printing: %s\n", message);
}
else {
/* my rank == 1 */
dest = 0;
MPI_Recv(message, 100, MPI_CHAR, dest, tag, MPI_COMM_WORLD, &status);
MPI_Send(message, strlen(message)+1, MPI_CHAR, dest, tag, MPI_COMM_WORLD);
printf("Process 1 printing: %s\n", message);
}
MPI_Finalize();
} /* end main */
!mpicc codes/openmpi/deadlock_send_recv.c -o ~/deadlock_send_recv
!mpirun -np 2 ~/deadlock_send_recv
# The [*] is indicative of a running notebook shell, and if it does not turn into a number,
# it means the cell is hung (deadlocked by MPI).
# To escape a hung cell, click the Square (Stop) button in the tool bar
"""
Explanation: Blocking risks
- Send data larger than available network buffer (Blocking send)
- Lost data (or missing sender) leading to receiver hanging indefinitely (Blocking receive)
End of explanation
"""
%%writefile codes/openmpi/bcast.c
#include <stdio.h>
#include <mpi.h>
int main(int argc, char* argv[])
{
int my_rank;
int size;
int value;
MPI_Init(&argc, &argv);
MPI_Comm_rank(MPI_COMM_WORLD, &my_rank);
value = my_rank;
printf("process %d: Before MPI_Bcast, value is %d\n", my_rank, value);
MPI_Bcast(&value, 1, MPI_INT, 0, MPI_COMM_WORLD);
printf("process %d: After MPI_Bcast, value is %d\n", my_rank, value);
MPI_Finalize();
return 0;
}
!mpicc codes/openmpi/bcast.c -o ~/bcast
!mpirun -np 4 --map-by core:OVERSUBSCRIBE ~/bcast
"""
Explanation: To correct the above error, we need to change the order of the MPI_Recv and MPI_Send calls in one of the communication code blocks, so that at least one process sends before it receives.
<center> Collective Communication
Must involve ALL processes within the scope of a communicator
Unexpected behavior, including program failure, if even one process does not participate
Types of collective communications:
Synchronization: barrier
Data movement: broadcast, scatter/gather
Collective computation (aggregate data to perform computation): Reduce
<center> <img src="pictures/05/mpi-collective.png" width="700"/>
<sub> https://computing.llnl.gov/tutorials/mpi/ </sub>
</center>
int MPI_Bcast(
void *buf,
int count,
MPI_Datatype datatype,
int root,
MPI_Comm comm);
- Don’t need to specify a TAG or DESTINATION
- Must specify the SENDER (root)
- Blocking call for all processes
End of explanation
"""
%%writefile codes/openmpi/scatter.c
#include <mpi.h>
#include <stdio.h>
int main(int argc, char** argv) {
int size;
int my_rank;
int sendbuf[16] = {2,13,4,3,5,1,0,12,10,8,7,9,11,6,15,14};
int recvbuf[5];
int i;
MPI_Init(&argc, &argv);
MPI_Comm_size(MPI_COMM_WORLD, &size);
MPI_Comm_rank(MPI_COMM_WORLD, &my_rank);
MPI_Scatter(&sendbuf, 5, MPI_INT, &recvbuf, 5, MPI_INT, 0, MPI_COMM_WORLD);
for (i = 0; i < 5; i++){
printf ("Process %d has element %d at index %d in its recvbuf \n",
my_rank, recvbuf[i], i);
}
/* Finalize the MPI environment. */
MPI_Finalize();
}
!mpicc codes/openmpi/scatter.c -o ~/scatter
!mpirun -np 4 --map-by core:OVERSUBSCRIBE ~/scatter
"""
Explanation: Original MPI C Syntax: MPI_Scatter
int MPI_Scatter(
void *sendbuf,
int sendcount,
MPI_Datatype sendtype,
void *recvbuf,
int recvcnt,
MPI_Datatype recvtype,
int root,
MPI_Comm comm);
End of explanation
"""
%%writefile codes/openmpi/gather.c
#include <mpi.h>
#include <stdio.h>
int main(int argc, char** argv) {
int size;
int my_rank;
int sendbuf[2];
int recvbuf[8] = {-1,-1,-1,-1,-1,-1,-1,-1};
int i;
MPI_Init(&argc, &argv);
MPI_Comm_size(MPI_COMM_WORLD, &size);
MPI_Comm_rank(MPI_COMM_WORLD, &my_rank);
for (i = 0; i < 2; i++){
sendbuf[i] = my_rank;
}
MPI_Gather(&sendbuf, 2, MPI_INT, &recvbuf, 2, MPI_INT, 0, MPI_COMM_WORLD);
for (i = 0; i < 8; i++){
printf ("Process %d has element %d at index %d in its recvbuf \n",
my_rank, recvbuf[i], i);
}
/* Finalize the MPI environment. */
MPI_Finalize();
}
!mpicc codes/openmpi/gather.c -o ~/gather
!mpirun -np 4 --map-by core:OVERSUBSCRIBE ~/gather
"""
Explanation: Original MPI C Syntax: MPI_Gather
int MPI_Gather(
void *sendbuff,
int sendcount,
MPI_Datatype sendtype,
void *recvbuff,
int recvcnt,
MPI_Datatype recvtype,
int root,
MPI_Comm comm);
End of explanation
"""
%%writefile codes/openmpi/reduce.c
#include <mpi.h>
#include <stdio.h>
int main(int argc, char** argv) {
int size;
int my_rank;
int rank_sum;
int i;
MPI_Init(&argc, &argv);
MPI_Comm_size(MPI_COMM_WORLD, &size);
MPI_Comm_rank(MPI_COMM_WORLD, &my_rank);
rank_sum = my_rank;
MPI_Reduce(&my_rank, &rank_sum, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);
printf ("The total sum of all ranks at process %d is %d \n", my_rank, rank_sum);
/* Finalize the MPI environment. */
MPI_Finalize();
}
!mpicc codes/openmpi/reduce.c -o ~/reduce
!mpirun -np 4 --map-by core:OVERSUBSCRIBE ~/reduce
!mpicc codes/openmpi/reduce.c -o ~/reduce
!mpirun -np 8 --map-by core:OVERSUBSCRIBE ~/reduce
"""
Explanation: Original MPI C Syntax: MPI_Reduce
int MPI_Reduce(
void *sendbuf,
void *recvbuff,
int count,
MPI_Datatype datatype,
MPI_OP op,
int root,
MPI_Comm comm);
- MPI_Op may be MPI_MIN, MPI_MAX, MPI_SUM, MPI_PROD (twelve total)
- Programmer may add operations, must be commutative and associative
- If count > 1, then operation is performed element-wise
End of explanation
"""
|
muku42/bokeh
|
examples/interactions/interactive_bubble/gapminder.ipynb
|
bsd-3-clause
|
fertility_df, life_expectancy_df, population_df_size, regions_df, years, regions = process_data()
sources = {}
region_color = regions_df['region_color']
region_color.name = 'region_color'
for year in years:
fertility = fertility_df[year]
fertility.name = 'fertility'
life = life_expectancy_df[year]
life.name = 'life'
population = population_df_size[year]
population.name = 'population'
new_df = pd.concat([fertility, life, population, region_color], axis=1)
sources['_' + str(year)] = ColumnDataSource(new_df)
"""
Explanation: Setting up the data
The plot animates with the slider showing the data over time from 1964 to 2013. We can think of each year as a separate static plot, and when the slider moves, we use the Callback to change the data source that is driving the plot.
We could use bokeh-server to drive this change, but as the data is not too big we can also pass all the datasets to the javascript at once and switch between them on the client side.
This means that we need to build one data source for each year that we have data for and are going to switch between using the slider. We build them and add them to a dictionary sources that holds them under a key that is the name of the year prefixed with a _.
End of explanation
"""
dictionary_of_sources = dict(zip([x for x in years], ['_%s' % x for x in years]))
js_source_array = str(dictionary_of_sources).replace("'", "")
"""
Explanation: sources looks like this
```
{'_1964': <bokeh.models.sources.ColumnDataSource at 0x7f7e7d165cc0>,
'_1965': <bokeh.models.sources.ColumnDataSource at 0x7f7e7d165b00>,
'_1966': <bokeh.models.sources.ColumnDataSource at 0x7f7e7d1656a0>,
'_1967': <bokeh.models.sources.ColumnDataSource at 0x7f7e7d165ef0>,
'_1968': <bokeh.models.sources.ColumnDataSource at 0x7f7e7e9dac18>,
'_1969': <bokeh.models.sources.ColumnDataSource at 0x7f7e7e9da9b0>,
'_1970': <bokeh.models.sources.ColumnDataSource at 0x7f7e7e9da668>,
'_1971': <bokeh.models.sources.ColumnDataSource at 0x7f7e7e9da0f0>...
```
We will pass this dictionary to the Callback. In doing so, we will find that in our javascript we have an object called, for example _1964, that refers to our ColumnDataSource. Note that we needed the prefixing _ as JS identifiers cannot begin with a number.
Finally we construct a string that we can insert into our javascript code to define an object.
The string looks like this: {1962: _1962, 1963: _1963, ....}
Note the keys of this object are integers and the values are the references to our ColumnDataSources from above. So now, in our JS code, we have an object storing all of our ColumnDataSources, and we can look them up.
End of explanation
"""
# Set up the plot
xdr = Range1d(1, 9)
ydr = Range1d(20, 100)
plot = Plot(
x_range=xdr,
y_range=ydr,
title="",
plot_width=800,
plot_height=400,
outline_line_color=None,
toolbar_location=None,
)
AXIS_FORMATS = dict(
minor_tick_in=None,
minor_tick_out=None,
major_tick_in=None,
major_label_text_font_size="10pt",
major_label_text_font_style="normal",
axis_label_text_font_size="10pt",
axis_line_color='#AAAAAA',
major_tick_line_color='#AAAAAA',
major_label_text_color='#666666',
major_tick_line_cap="round",
axis_line_cap="round",
axis_line_width=1,
major_tick_line_width=1,
)
xaxis = LinearAxis(SingleIntervalTicker(interval=1), axis_label="Children per woman (total fertility)", **AXIS_FORMATS)
yaxis = LinearAxis(SingleIntervalTicker(interval=20), axis_label="Life expectancy at birth (years)", **AXIS_FORMATS)
plot.add_layout(xaxis, 'below')
plot.add_layout(yaxis, 'left')
"""
Explanation: Build the plot
End of explanation
"""
# Add the year in background (add before circle)
text_source = ColumnDataSource({'year': ['%s' % years[0]]})
text = Text(x=2, y=35, text='year', text_font_size='150pt', text_color='#EEEEEE')
plot.add_glyph(text_source, text)
"""
Explanation: Add the background year text
We add this first so it is below all the other glyphs
End of explanation
"""
# Add the circle
renderer_source = sources['_%s' % years[0]]
circle_glyph = Circle(
x='fertility', y='life', size='population',
fill_color='region_color', fill_alpha=0.8,
line_color='#7c7e71', line_width=0.5, line_alpha=0.5)
circle_renderer = plot.add_glyph(renderer_source, circle_glyph)
# Add the hover (only against the circle and not other plot elements)
tooltips = "@index"
plot.add_tools(HoverTool(tooltips=tooltips, renderers=[circle_renderer]))
"""
Explanation: Add the bubbles and hover
We add the bubbles using the Circle glyph. We start from the first year of data and that is our source that drives the circles (the other sources will be used later).
plot.add_glyph returns the renderer, and we pass this to the HoverTool so that hover only happens for the bubbles on the page and not other glyph elements.
End of explanation
"""
text_x = 7
text_y = 95
for i, region in enumerate(regions):
plot.add_glyph(Text(x=text_x, y=text_y, text=[region], text_font_size='10pt', text_color='#666666'))
plot.add_glyph(Circle(x=text_x - 0.1, y=text_y + 2, fill_color=Spectral6[i], size=10, line_color=None, fill_alpha=0.8))
text_y = text_y - 5
"""
Explanation: Add the legend
Finally we manually build the legend by adding circles and texts to the upper-right portion of the plot.
End of explanation
"""
# Add the slider
code = """
var year = slider.get('value'),
sources = %s,
new_source_data = sources[year].get('data');
renderer_source.set('data', new_source_data);
text_source.set('data', {'year': [String(year)]});
""" % js_source_array
callback = CustomJS(args=sources, code=code)
slider = Slider(start=years[0], end=years[-1], value=1, step=1, title="Year", callback=callback, name='testy')
callback.args["renderer_source"] = renderer_source
callback.args["slider"] = slider
callback.args["text_source"] = text_source
"""
Explanation: Add the slider and callback
Last, but not least, we add the slider widget and the JS callback code which changes the data of the renderer_source (powering the bubbles / circles) and the data of the text_source (powering background text). After we've set() the data we need to trigger() a change. slider, renderer_source, text_source are all available because we add them as args to Callback.
It is the combination of sources = %s % (js_source_array) in the JS and Callback(args=sources...) that provides the ability to look-up, by year, the JS version of our python-made ColumnDataSource.
End of explanation
"""
# Stick the plot and the slider together
layout = vplot(plot, slider)
# Open our custom template
with open('gapminder_template.jinja', 'r') as f:
template = Template(f.read())
# Use inline resources
resources = Resources(mode='inline')
template_variables = {
'bokeh_min_js': resources.js_raw[0]
}
html = file_html(layout, resources, "Bokeh - Gapminder Bubble Plot", template=template, template_variables=template_variables)
display(HTML(html))
"""
Explanation: Embed in a template and render
Last but not least, we use vplot to stick together the chart and the slider. We then embed that in a custom Jinja template, render it with file_html, and display the result in IPython.
We display it in IPython and save it as an html file.
End of explanation
"""
|
musketeer191/job_analytics
|
.ipynb_checkpoints/feat_extract-checkpoint.ipynb
|
gpl-3.0
|
doc_skill = buildDocSkillMat(jd_docs, skill_df, folder=SKILL_DIR)
with(open(SKILL_DIR + 'doc_skill.mtx', 'w')) as f:
mmwrite(f, doc_skill)
"""
Explanation: Build feature matrix
The matrix is a JD-Skill matrix where each entry $e(d, s)$ is the number of times skill $s$ occurs in job description $d$.
End of explanation
"""
extracted_skill_df = getSkills4Docs(docs=doc_index['doc'], doc_term=doc_skill, skills=skills)
df = pd.merge(doc_index, extracted_skill_df, left_index=True, right_index=True)
print(df.shape)
df.head()
# sanity check
# df.head(3).to_csv(LDA_DIR + 'tmp/skills_3_sample_docs.csv', index=False)
df.to_csv(SKILL_DIR + 'doc_index.csv') # later no need to extract skill again
"""
Explanation: Get skills in each JD
Using the matrix, we can retrieve skills in each JD.
End of explanation
"""
|
sdpython/ensae_teaching_cs
|
_doc/notebooks/exams/td_note_2017_2.ipynb
|
mit
|
from jyquickhelper import add_notebook_menu
add_notebook_menu()
"""
Explanation: 1A.e - TD noté (graded exercise session), 21 February 2017
Solution to the graded exercise session; it presents an algorithm for computing the coefficients of a quantile regression and, by extension, of a median in a multi-dimensional space.
End of explanation
"""
import random
def ensemble_aleatoire(n):
res = [random.randint(0, 100) for i in range(n)]
res[0] = 1000
return res
ens = ensemble_aleatoire(10)
ens
"""
Explanation: Note: throughout the exercise, the transpose of a matrix is written $X' = X^{T}$. Most of the time $X$ and $Y$ denote column vectors, $\beta$ denotes a row vector, and $W$ a diagonal matrix.
Exercise 1
Q1
Using the random module, generate a set of random points.
End of explanation
"""
def mediane(ensemble):
tri = list(sorted(ensemble))
return tri[len(tri)//2]
mediane(ens)
"""
Explanation: Q2
The median of a set of points $\left\{X_1, ..., X_n\right\}$ is a value $X_M$ such that:
$$\sum_i \mathbb{1}_{X_i < X_m} = \sum_i \mathbb{1}_{X_i > X_m}$$
In other words, there are as many values below $X_M$ as above it. This value is obtained by sorting the elements in increasing order and taking the middle one.
End of explanation
"""
def mediane(ensemble):
tri = list(sorted(ensemble))
if len(tri) % 2 == 0:
m = len(tri)//2
return (tri[m] + tri[m-1]) / 2
else:
return tri[len(tri)//2]
mediane(ens)
"""
Explanation: Q3
When the number of points is even, the median can be any value within an interval. Modify the previous function so that it returns the midpoint of that interval.
End of explanation
"""
from numpy.linalg import inv
def regression_lineaire(X, Y):
t = X.T
return inv(t @ X) @ t @ Y
import numpy
X = numpy.array(ens).reshape((len(ens), 1))
regression_lineaire(X, X+1) # a quick test to check that the result is not absurd
"""
Explanation: Q4
For a set of points $E=\left\{X_1, ..., X_n\right\}$, consider the following function:
$$f(x) = \sum_{i=1}^n \left | x - X_i\right |$$
Assume that the median $X_M$ of the set $E$ does not belong to $E$: $X_M \notin E$. What is the value of $f'(X_M)$?
We will accept the fact that the median is the only point for which this holds.
$$f'(X_m) = - \sum_{i=1}^n \mathbb{1}_{X_i < X_m} + \sum_{i=1}^n \mathbb{1}_{X_i > X_m}$$
By definition of the median, $f'(X_M)=0$. By sorting the elements, one shows that $f'(x) = 0 \Longleftrightarrow x=X_m$.
Q5
Assume we have a set of observations $\left(X_i, Y_i\right)$ with $X_i, Y_i \in \mathbb{R}$.
Linear regression consists in finding a linear relation $Y_i = a X_i + b + \epsilon_i$
which minimizes the variance of the noise. We set:
$$E(a, b) = \sum_i \left(Y_i - (a X_i + b)\right)^2$$
We look for $a, b$ such that:
$$a^*, b^* = \arg \min E(a, b) = \arg \min \sum_i \left(Y_i - (a X_i + b)\right)^2$$
The function is differentiable and we find:
$$\frac{\partial E(a,b)}{\partial a} = - 2 \sum_i X_i ( Y_i - (a X_i + b)) \text{ and } \frac{\partial E(a,b)}{\partial b} = - 2 \sum_i ( Y_i - (a X_i + b))$$
It then suffices to set the derivatives to zero. We solve a system of linear equations. We write:
$$\begin{array}{l} \mathbb{E} X = \frac{1}{n}\sum_{i=1}^n X_i \text{ and } \mathbb{E} Y = \frac{1}{n}\sum_{i=1}^n Y_i \\ \mathbb{E}{X^2} = \frac{1}{n}\sum_{i=1}^n X_i^2 \text{ and } \mathbb{E} {XY} = \frac{1}{n}\sum_{i=1}^n X_i Y_i \end{array}$$
Finally:
$$\begin{array}{l} a^* = \frac{ \mathbb{E} {XY} - \mathbb{E} X \mathbb{E} Y}{\mathbb{E}{X^2} - (\mathbb{E} X)^2} \text{ and } b^* = \mathbb{E} Y - a^* \mathbb{E} X \end{array}$$
With several dimensions for $X$, the optimization problem becomes: find the coefficients $\beta^*$ which minimize
$$E(\beta)=\sum_{i=1}^n \left(y_i - X_i \beta\right)^2 = \left \Vert Y - X\beta \right \Vert ^2$$
The solution is: $\beta^* = (X'X)^{-1}X'Y$.
Write a function that computes this optimal vector.
End of explanation
"""
def matrice_diagonale(W):
return numpy.diag(W)
matrice_diagonale([1, 2, 3])
"""
Explanation: Q6
Write a function that turns a vector into a diagonal matrix.
End of explanation
"""
def regression_lineaire_ponderee(X, Y, W):
if len(W.shape) == 1 or W.shape[0] != W.shape[1]:
# c'est un vecteur
W = matrice_diagonale(W.ravel())
wx = W @ X
xt = X.T
return inv(xt @ wx) @ xt @ W @ Y
X = numpy.array(sorted(ens)).reshape((len(ens), 1))
Y = X.copy()
Y[0] = max(X)
W = numpy.ones(len(ens))
W[0] = 0
regression_lineaire_ponderee(X, Y, W), regression_lineaire(X, Y)
"""
Explanation: Q7
We now consider that each observation is weighted by a weight $w_i$. We want to find the vector $\beta$ which minimizes:
$$E(\beta)=\sum_{i=1}^n w_i \left( y_i - X_i \beta \right)^2 = \left \Vert W^{\frac{1}{2}}(Y - X\beta)\right \Vert^2$$
where $W=diag(w_1, ..., w_n)$ is the diagonal matrix. The solution is:
$$\beta_* = (X'WX)^{-1}X'WY$$
Write a function that computes the solution of this weighted regression. The function ravel is useful.
End of explanation
"""
def calcule_z(X, beta, Y, W, delta=0.0001):
epsilon = numpy.abs(Y - X @ beta)
return numpy.reciprocal(numpy.maximum(epsilon, numpy.ones(epsilon.shape) * delta))
calcule_z(X * 1.0, numpy.array([[1.01]]), Y, W)
"""
Explanation: Q8
Write a function that computes the following quantities (the maximum and reciprocal functions are useful).
$$z_i = \frac{1}{\max\left( \delta, \left|y_i - X_i \beta\right|\right)}$$
End of explanation
"""
def algorithm(X, Y, delta=0.0001):
W = numpy.ones(X.shape[0])
for i in range(0, 10):
beta = regression_lineaire_ponderee(X, Y, W)
W = calcule_z(X, beta, Y, W, delta=delta)
E = numpy.abs(Y - X @ beta).sum()
print(i, E, beta)
return beta
X = numpy.random.rand(10, 1)
Y = X*2 + numpy.random.rand()
Y[0] = Y[0] + 100
algorithm(X, Y)
regression_lineaire(X, Y)
"""
Explanation: Q9
We want to implement the following algorithm:
$w_i^{(1)} = 1$
$\beta_{(t)} = (X'W^{(t)}X)^{-1}X'W^{(t)}Y$
$w_i^{(t+1)} = \frac{1}{\max\left( \delta, \left|y_i - X_i \beta^{(t)}\right|\right)}$
$t = t+1$
Go back to step 2.
End of explanation
"""
ens = ensemble_aleatoire(10)
Y = numpy.empty((len(ens), 1))
Y[:,0] = ens
X = numpy.ones((len(ens), 1))
mediane(ens)
Y.mean(axis=0)
regression_lineaire(X, Y)
algorithm(X,Y)
mediane(ens)
list(sorted(ens))
"""
Explanation: Q10
End of explanation
"""
import numpy
y = numpy.array([1, 2, 3])
M = numpy.array([[3, 4], [6, 7], [3, 3]])
M.shape, y.shape
try:
M @ y
except Exception as e:
print(e)
"""
Explanation: The linear regression equals the mean; the algorithm gets close to the median.
A few explanations and proofs
This exercise is inspired by Iteratively reweighted least squares. In particular, this algorithm makes it possible to extend the notion of median to vector spaces with several dimensions. One can determine a point $X_M$ which minimizes the quantity:
$$\sum_{i=1}^n \left| X_i - X_M \right |$$
We take up the algorithm described above:
$w_i^{(1)} = 1$
$\beta_{(t)} = (X'W^{(t)}X)^{-1}X'W^{(t)}Y$
$w_i^{(t+1)} = \frac{1}{\max\left( \delta, \left|y_i - X_i \beta^{(t)}\right|\right)}$
$t = t+1$
Go back to step 2.
The weighted quadratic error is:
$$E_2(\beta, W) = \sum_{i=1}^n w_i \left\Vert Y_i - X_i \beta \right\Vert^2$$
If $w_i = \frac{1}{\left|y_i - X_i \beta\right|}$, we notice that:
$$E_2(\beta, W) = \sum_{i=1}^n \frac{\left\Vert Y_i - X_i \beta \right\Vert^2}{\left|y_i - X_i \beta\right|} = \sum_{i=1}^n \left|y_i - X_i \beta\right| = E_1(\beta)$$
We recover the absolute-value error optimized by quantile regression. Since step 2 consists in finding the coefficients $\beta$ which minimize $E_2(\beta, W^{(t)})$, it follows by construction that:
$$E_1(\beta^{(t+1)}) = E_2(\beta^{(t+1)}, W^{(t)}) \leqslant E_2(\beta^{(t)}, W^{(t)}) = E_1(\beta^{(t)})$$
The sequence $t \rightarrow E_1(\beta^{(t)})$ is decreasing and bounded below by 0, so it converges. Moreover, the function $\beta \rightarrow E_1(\beta)$ is convex, so it admits a single minimum value (but not necessarily a single point reaching that minimum). The algorithm therefore converges to the median. The parameter $\delta$ is there to avoid division-by-zero errors and the rounding approximations made by the computer.
A few comments on the code
The @ symbol was introduced in Python 3.5 and is equivalent to the numpy.dot function. Matrix dimensions often cause a few problems.
End of explanation
"""
y @ M
"""
Explanation: By default, numpy treats a vector of shape (3,) as a row vector (i.e. shape (1, 3)) in a matrix product, so the following expression works:
End of explanation
"""
M.T @ y
"""
Explanation: Or:
End of explanation
"""
|
NEONScience/NEON-Data-Skills
|
tutorials/Python/Hyperspectral/uncertainty-and-validation/hyperspectral_variation_py/hyperspectral_variation_py.ipynb
|
agpl-3.0
|
import h5py
import csv
import numpy as np
import os
import gdal
import matplotlib.pyplot as plt
import sys
from math import floor
import time
import warnings
warnings.filterwarnings('ignore')
def h5refl2array(h5_filename):
hdf5_file = h5py.File(h5_filename,'r')
#Get the site name
file_attrs_string = str(list(hdf5_file.items()))
file_attrs_string_split = file_attrs_string.split("'")
sitename = file_attrs_string_split[1]
refl = hdf5_file[sitename]['Reflectance']
reflArray = refl['Reflectance_Data']
refl_shape = reflArray.shape
wavelengths = refl['Metadata']['Spectral_Data']['Wavelength']
#Create dictionary containing relevant metadata information
metadata = {}
metadata['shape'] = reflArray.shape
metadata['mapInfo'] = refl['Metadata']['Coordinate_System']['Map_Info']
#Extract no data value & set no data value to NaN\n",
metadata['scaleFactor'] = float(reflArray.attrs['Scale_Factor'])
metadata['noDataVal'] = float(reflArray.attrs['Data_Ignore_Value'])
metadata['bad_band_window1'] = (refl.attrs['Band_Window_1_Nanometers'])
metadata['bad_band_window2'] = (refl.attrs['Band_Window_2_Nanometers'])
metadata['projection'] = refl['Metadata']['Coordinate_System']['Proj4'].value
metadata['EPSG'] = int(refl['Metadata']['Coordinate_System']['EPSG Code'].value)
mapInfo = refl['Metadata']['Coordinate_System']['Map_Info'].value
mapInfo_string = str(mapInfo); #print('Map Info:',mapInfo_string)\n",
mapInfo_split = mapInfo_string.split(",")
#Extract the resolution & convert to floating decimal number
metadata['res'] = {}
metadata['res']['pixelWidth'] = mapInfo_split[5]
metadata['res']['pixelHeight'] = mapInfo_split[6]
#Extract the upper left-hand corner coordinates from mapInfo\n",
xMin = float(mapInfo_split[3]) #convert from string to floating point number\n",
yMax = float(mapInfo_split[4])
#Calculate the xMax and yMin values from the dimensions\n",
xMax = xMin + (refl_shape[1]*float(metadata['res']['pixelWidth'])) #xMax = left edge + (# of columns * resolution)\n",
yMin = yMax - (refl_shape[0]*float(metadata['res']['pixelHeight'])) #yMin = top edge - (# of rows * resolution)\n",
    metadata['extent'] = (xMin,xMax,yMin,yMax)
metadata['ext_dict'] = {}
metadata['ext_dict']['xMin'] = xMin
metadata['ext_dict']['xMax'] = xMax
metadata['ext_dict']['yMin'] = yMin
metadata['ext_dict']['yMax'] = yMax
hdf5_file.close
return reflArray, metadata, wavelengths
print('Starting BRDF Analysis')
"""
Explanation: syncID: bb90898de165446f9a0e92e1399f4697
title: "Hyperspectral Variation Uncertainty Analysis in Python"
description: "Learn to analyze the difference between rasters taken a few days apart to assess the uncertainty between days."
dateCreated: 2017-06-21
authors: Tristan Goulden
contributors: Donal O'Leary
estimatedTime: 0.5 hour
packagesLibraries: numpy, gdal, matplotlib
topics: hyperspectral-remote-sensing, remote-sensing
languagesTool: python
dataProduct:
code1: Python/remote-sensing/uncertainty/hyperspectral_variation_py.ipynb
tutorialSeries: rs-uncertainty-py-series
urlTitle: hyperspectral-variation-py
This tutorial teaches how to open a NEON AOP HDF5 file with a function,
batch process several HDF5 files, make relative comparisons between several
NIS observations of the same target from different view angles, and error check the data.
<div id="ds-objectives" markdown="1">
### Objectives
After completing this tutorial, you will be able to:
* Open NEON AOP HDF5 files using a function
* Batch process several HDF5 files
* Complete relative comparisons between several imaging spectrometer observations of the same target from different view angles
* Error check the data.
### Install Python Packages
* **numpy**
* **csv**
* **gdal**
* **matplotlib.pyplot**
* **h5py**
* **time**
### Download Data
To complete this tutorial, you will use data available from the NEON 2017 Data
Institute teaching dataset available for download.
This tutorial will use the files contained in the 'F07A' Directory in <a href="https://neondata.sharefile.com/share/view/cdc8242e24ad4517/fo0c2f24-c7d2-4c77-b297-015366afa9f4" target="_blank">this ShareFile Directory</a>. You will want to download the entire directory as a single ZIP file, then extract that file into a location where you store your data.
<a href="https://neondata.sharefile.com/share/view/cdc8242e24ad4517/fo0c2f24-c7d2-4c77-b297-015366afa9f4" class="link--button link--arrow">
Download Dataset</a>
Caution: This dataset includes all the data for the 2017 Data Institute,
including hyperspectral and lidar datasets and is therefore a large file (12 GB).
Ensure that you have sufficient space on your
hard drive before you begin the download. If not, download to an external
hard drive and make sure to correct for the change in file path when working
through the tutorial.
The LiDAR and imagery data used to create this raster teaching data subset
were collected over the
<a href="http://www.neonscience.org/" target="_blank"> National Ecological Observatory Network's</a>
<a href="http://www.neonscience.org/science-design/field-sites/" target="_blank" >field sites</a>
and processed at NEON headquarters.
The entire dataset can be accessed on the
<a href="http://data.neonscience.org" target="_blank"> NEON data portal</a>.
These data are a part of the NEON 2017 Remote Sensing Data Institute. The complete archive may be found here -<a href="https://neondata.sharefile.com/d-s11d5c8b9c53426db"> NEON Teaching Data Subset: Data Institute 2017 Data Set</a>
### Recommended Prerequisites
We recommend you complete the following tutorials prior to this tutorial to have
the necessary background.
1. <a href="https://www.neonscience.org/neon-aop-hdf5-py"> *NEON AOP Hyperspectral Data in HDF5 format with Python*</a>
1. <a href="https://www.neonscience.org/neon-hsi-aop-functions-python"> *Band Stacking, RGB & False Color Images, and Interactive Widgets in Python*</a>
1. <a href="https://www.neonscience.org/plot-spec-sig-python/"> *Plot a Spectral Signature in Python*</a>
</div>
The NEON AOP has flown several special flight plans called BRDF
(bi-directional reflectance distribution function) flights. These flights were
designed to quantify the effect of observing targets from a variety of
different look-angles, and with varying surface roughness. This allows an
assessment of the sensitivity of the NEON imaging spectrometer (NIS) results to these parameters. The BRDF
flight plan takes the form of a star pattern with repeating overlapping flight
lines in each direction. In the center of the pattern is an area where nearly
all the flight lines overlap. This area allows us to retrieve a reflectance
curve of the same target from the many different flight lines to visualize
how they change for each acquisition. The following figure displays a BRDF
flight plan as well as the number of flightlines (samples) which are
overlapping.
<figure>
<a href="https://raw.githubusercontent.com/NEONScience/NEON-Data-Skills/main/graphics/neon-aop/ORNL_BRDF_flightlines.jpg">
<img src="https://raw.githubusercontent.com/NEONScience/NEON-Data-Skills/main/graphics/neon-aop/ORNL_BRDF_flightlines.jpg"></a>
</figure>
<figure>
<a href="https://raw.githubusercontent.com/NEONScience/NEON-Data-Skills/main/graphics/neon-aop/ORNL_NumberSamples.png">
<img src="https://raw.githubusercontent.com/NEONScience/NEON-Data-Skills/main/graphics/neon-aop/ORNL_NumberSamples.png"></a>
<figcaption> Top: Flight lines from a bi-directional reflectance distribution
function flight at ORNL. Bottom: A graphical representation of the number of
samples in each area of the sampling.
Source: National Ecological Observatory Network (NEON)
</figcaption>
</figure>
To date (June 2017), the NEON AOP has flown a BRDF flight at SJER and SOAP (D17) and
ORNL (D07). We will work with the ORNL BRDF flight and retrieve reflectance
curves from up to 18 lines and compare them to visualize the differences in the
resulting curves. To reduce the file size, each of the BRDF flight lines has
been reduced to a rectangular area covering where all the lines overlap;
additionally, several of the ancillary rasters normally included have been
removed in order to reduce file size.
We'll start off by again adding necessary libraries and our NEON AOP HDF5 reader
function.
End of explanation
"""
BRDF_rectangle = np.array([[740315,3982265],[740928,3981839]],np.float)
"""
Explanation: First we will define the extents of the rectangular array containing the section from each BRDF flightline.
End of explanation
"""
x_coord = 740600
y_coord = 3982000
"""
Explanation: Next we will define the coordinates of the target of interest. These can be set to any coordinate pair that falls within the rectangle above; the coordinates must therefore be in UTM Zone 16 N.
End of explanation
"""
if BRDF_rectangle[0,0] <= x_coord <= BRDF_rectangle[1,0] and BRDF_rectangle[1,1] <= y_coord <= BRDF_rectangle[0,1]:
print('Point in bounding area')
y_index = floor(x_coord - BRDF_rectangle[0,0])
x_index = floor(BRDF_rectangle[0,1] - y_coord)
else:
print('Point not in bounding area, exiting')
raise Exception('exit')
"""
Explanation: To prevent the script from failing, we will first check that the coordinates are within the rectangular bounding box. If they are not, we print a message and raise an exception to exit the script.
End of explanation
"""
## You will need to update this filepath for your local data directory
h5_directory = "/Users/olearyd/Git/data/F07A/"
"""
Explanation: Now we will define the location of all the subset NEON AOP h5 files from the BRDF flight.
End of explanation
"""
files = os.listdir(h5_directory)
h5_files = [i for i in files if i.endswith('.h5')]
"""
Explanation: Now we will grab all files/folders within the defined directory and then cycle through them, retaining only the h5 files.
End of explanation
"""
print(h5_files)
fig=plt.figure()
ax = plt.subplot(111)
"""
Explanation: Now we will print the h5 files to make sure they have been included and set up a figure for plotting all of the reflectance curves
End of explanation
"""
for file in h5_files:
print('Working on ' + file)
[reflArray,metadata,wavelengths] = h5refl2array(h5_directory+file)
bad_band_window1 = (metadata['bad_band_window1'])
bad_band_window2 = (metadata['bad_band_window2'])
index_bad_window1 = [i for i, x in enumerate(wavelengths) if x > bad_band_window1[0] and x < bad_band_window1[1]]
index_bad_window2 = [i for i, x in enumerate(wavelengths) if x > bad_band_window2[0] and x < bad_band_window2[1]]
index_bad_windows = index_bad_window1+index_bad_window2
reflectance_curve = np.asarray(reflArray[y_index,x_index,:], dtype=np.float32)
if reflectance_curve[0] == metadata['noDataVal']:
continue
reflectance_curve[index_bad_windows] = np.nan
filename_split = (file).split("_")
ax.plot(wavelengths,reflectance_curve/metadata['scaleFactor'],label = filename_split[5]+' Reflectance')
"""
Explanation: Now we will begin cycling through all of the h5 files, retrieving the information we need and printing the name of the file that is currently being processed.
Inside the for loop we will
1) Read in the reflectance data and the associated metadata, constructing the file name from the generated file list
2) Determine the indexes of the water vapor bands (bad band windows) in order to mask out all of the bad bands
3) Read in the reflectance dataset using the NEON AOP H5 reader function
4) Check the first value of the reflectance curve (actually any value would do). If it is equal to the NO DATA value, then the coordinate chosen did not intersect a pixel for that flight line, and we simply continue to the next line.
5) Apply NaN values to the areas containing the bad bands
6) Split the contents of the file name so we can get the line number for labelling in the plot.
7) Plot the curve
End of explanation
"""
box = ax.get_position()
ax.set_position([box.x0, box.y0, box.width * 0.8, box.height])
ax.legend(loc='center left',bbox_to_anchor=(1,0.5))
plt.title('BRDF Reflectance Curves at ' + str(x_coord) +' '+ str(y_coord))
plt.xlabel('Wavelength (nm)'); plt.ylabel('Reflectance (%)')
fig.savefig('BRDF_uncertainty_at_' + str(x_coord) +'_'+ str(y_coord)+'.png',dpi=500,orientation='landscape',bbox_inches='tight',pad_inches=0.1)
plt.show()
"""
Explanation: This plots the reflectance curves from all lines onto the same plot. Now we will add the appropriate legend and plot labels, then display and save the plot with the coordinates in the file name so we can reproduce the position of the target.
End of explanation
"""
|
msultan/msmbuilder
|
examples/Fs-Peptide-command-line.ipynb
|
lgpl-2.1
|
# Work in a temporary directory
import tempfile
import os
os.chdir(tempfile.mkdtemp())
# Since this is running from an IPython notebook,
# we prefix all our commands with "!"
# When running on the command line, omit the leading "!"
! msmb -h
"""
Explanation: Modeling dynamics of FS Peptide
This example shows a typical, basic usage of the MSMBuilder command line to model dynamics of a protein system.
End of explanation
"""
! msmb FsPeptide --data_home ./
! tree
"""
Explanation: Get example data
End of explanation
"""
# Remember '\' is the line-continuation marker
# You can enter this command on one line
! msmb DihedralFeaturizer \
--out featurizer.pkl \
--transformed diheds \
--top fs_peptide/fs-peptide.pdb \
--trjs "fs_peptide/*.xtc" \
--stride 10
"""
Explanation: Featurization
The raw (x, y, z) coordinates from the simulation do not respect the translational and rotational symmetry of our problem. A Featurizer transforms cartesian coordinates into other representations. Here we use the DihedralFeaturizer to turn our data into phi and psi dihedral angles. Observe that the 264*3-dimensional space is reduced to 84 dimensions.
End of explanation
"""
! msmb RobustScaler \
-i diheds \
--transformed scaled_diheds.h5
"""
Explanation: Preprocessing
Since the range of values in our raw data can vary widely from feature to feature, we can scale values to reduce bias. Here we use the RobustScaler to center and scale our dihedral angles by their respective interquartile ranges.
End of explanation
"""
! msmb tICA -i scaled_diheds.h5 \
--out tica_model.pkl \
--transformed tica_trajs.h5 \
--n_components 4 \
--lag_time 2
"""
Explanation: Intermediate kinetic model: tICA
tICA is similar to principal component analysis (see "tICA vs. PCA" example). Note that the 84-dimensional space is reduced to 4 dimensions.
End of explanation
"""
from msmbuilder.dataset import dataset
ds = dataset('tica_trajs.h5')
%matplotlib inline
import msmexplorer as msme
import numpy as np
txx = np.concatenate(ds)
_ = msme.plot_histogram(txx)
"""
Explanation: tICA Histogram
We can histogram our data projecting along the two slowest degrees of freedom (as found by tICA). You have to do this in a python script.
End of explanation
"""
! msmb MiniBatchKMeans -i tica_trajs.h5 \
--transformed labeled_trajs.h5 \
--out clusterer.pkl \
--n_clusters 100 \
--random_state 42
"""
Explanation: Clustering
Conformations need to be clustered into states (sometimes written as microstates). We cluster based on the tICA projections to group conformations that interconvert rapidly. Note that we transform our trajectories from the 4-dimensional tICA space into a 1-dimensional cluster index.
End of explanation
"""
! msmb MarkovStateModel -i labeled_trajs.h5 \
--out msm.pkl \
--lag_time 2
"""
Explanation: MSM
We can construct an MSM from the labeled trajectories
End of explanation
"""
from msmbuilder.utils import load
msm = load('msm.pkl')
clusterer = load('clusterer.pkl')
assignments = clusterer.partial_transform(txx)
assignments = msm.partial_transform(assignments)
from matplotlib import pyplot as plt
msme.plot_free_energy(txx, obs=(0, 1), n_samples=10000,
pi=msm.populations_[assignments],
xlabel='tIC 1', ylabel='tIC 2')
plt.scatter(clusterer.cluster_centers_[msm.state_labels_, 0],
clusterer.cluster_centers_[msm.state_labels_, 1],
s=1e4 * msm.populations_, # size by population
c=msm.left_eigenvectors_[:, 1], # color by eigenvector
cmap="coolwarm",
zorder=3
)
plt.colorbar(label='First dynamical eigenvector')
plt.tight_layout()
"""
Explanation: Plot Free Energy Landscape
Subsequent plotting and analysis should be done from Python
End of explanation
"""
|
etendue/deep-learning
|
language-translation/dlnd_language_translation.ipynb
|
mit
|
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import helper
import problem_unittests as tests
source_path = 'data/small_vocab_en'
target_path = 'data/small_vocab_fr'
source_text = helper.load_data(source_path)
target_text = helper.load_data(target_path)
"""
Explanation: Language Translation
In this project, you’re going to take a peek into the realm of neural network machine translation. You’ll be training a sequence to sequence model on a dataset of English and French sentences that can translate new sentences from English to French.
Get the Data
Since translating the whole language of English to French will take lots of time to train, we have provided you with a small portion of the English corpus.
End of explanation
"""
view_sentence_range = (0, 10)
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in source_text.split()})))
sentences = source_text.split('\n')
word_counts = [len(sentence.split()) for sentence in sentences]
print('Number of sentences: {}'.format(len(sentences)))
print('Average number of words in a sentence: {}'.format(np.average(word_counts)))
print()
print('English sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(source_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
print()
print('French sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(target_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
"""
Explanation: Explore the Data
Play around with view_sentence_range to view different parts of the data.
End of explanation
"""
def text_to_ids(source_text, target_text, source_vocab_to_int, target_vocab_to_int):
"""
Convert source and target text to proper word ids
:param source_text: String that contains all the source text.
:param target_text: String that contains all the target text.
:param source_vocab_to_int: Dictionary to go from the source words to an id
:param target_vocab_to_int: Dictionary to go from the target words to an id
:return: A tuple of lists (source_id_text, target_id_text)
"""
# implementation
source_sentences = source_text.split('\n')
source_id_text = [[source_vocab_to_int[word] for word in sentence.split()] for sentence in source_sentences]
target_sentences = target_text.split('\n')
target_id_text = [[target_vocab_to_int[word] for word in sentence.split()] for sentence in target_sentences]
#append the '<EOS>' at the end of sentence
int_EOS = target_vocab_to_int['<EOS>']
target_id_text = [int_sentence + [int_EOS] for int_sentence in target_id_text]
return source_id_text, target_id_text
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_text_to_ids(text_to_ids)
"""
Explanation: Implement Preprocessing Function
Text to Word Ids
As you did with other RNNs, you must turn the text into numbers so the computer can understand it. In the function text_to_ids(), you'll turn source_text and target_text from words to ids. However, you need to add the <EOS> word id at the end of each sentence from target_text. This will help the neural network predict when the sentence should end.
You can get the <EOS> word id by doing:
python
target_vocab_to_int['<EOS>']
You can get other word ids using source_vocab_to_int and target_vocab_to_int.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
helper.preprocess_and_save_data(source_path, target_path, text_to_ids)
"""
Explanation: Preprocess all the data and save it
Running the code cell below will preprocess all the data and save it to file.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import numpy as np
import helper
(source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess()
"""
Explanation: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
from distutils.version import LooseVersion
import warnings
import tensorflow as tf
# Check TensorFlow Version
assert LooseVersion(tf.__version__) in [LooseVersion('1.0.0'), LooseVersion('1.0.1')], 'This project requires TensorFlow version 1.0 You are using {}'.format(tf.__version__)
print('TensorFlow Version: {}'.format(tf.__version__))
# Check for a GPU
if not tf.test.gpu_device_name():
warnings.warn('No GPU found. Please use a GPU to train your neural network.')
else:
print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))
"""
Explanation: Check the Version of TensorFlow and Access to GPU
This will check to make sure you have the correct version of TensorFlow and access to a GPU
End of explanation
"""
def model_inputs():
"""
Create TF Placeholders for input, targets, and learning rate.
:return: Tuple (input, targets, learning rate, keep probability)
"""
#
inputs = tf.placeholder(tf.int32,shape=(None,None),name="input")
targets = tf.placeholder(tf.int32,shape=(None,None),name="targets")
learning_rate = tf.placeholder(tf.float32,name="learning_rate")
keep_prob = tf.placeholder(tf.float32,name="keep_prob")
return inputs, targets, learning_rate, keep_prob
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_model_inputs(model_inputs)
"""
Explanation: Build the Neural Network
You'll build the components necessary to build a Sequence-to-Sequence model by implementing the following functions below:
- model_inputs
- process_decoding_input
- encoding_layer
- decoding_layer_train
- decoding_layer_infer
- decoding_layer
- seq2seq_model
Input
Implement the model_inputs() function to create TF Placeholders for the Neural Network. It should create the following placeholders:
Input text placeholder named "input" using the TF Placeholder name parameter with rank 2.
Targets placeholder with rank 2.
Learning rate placeholder with rank 0.
Keep probability placeholder named "keep_prob" using the TF Placeholder name parameter with rank 0.
Return the placeholders in the following tuple: (Input, Targets, Learning Rate, Keep Probability)
End of explanation
"""
def process_decoding_input(target_data, target_vocab_to_int, batch_size):
"""
Preprocess target data for decoding
:param target_data: Target Placeholder
:param target_vocab_to_int: Dictionary to go from the target words to an id
:param batch_size: Batch Size
:return: Preprocessed target data
"""
#
int_GO = target_vocab_to_int['<GO>']
target_data =tf.reshape(target_data,[batch_size,-1])
#get data removing last column.
target_data_no_ending = tf.strided_slice(target_data,[0,0],[batch_size,-1],[1,1])
    #create first column filled with the GO ID
target_data_head = tf.fill([batch_size, 1], int_GO)
#concatenate two parts
decoding_input = tf.concat([target_data_head, target_data_no_ending], 1)
return decoding_input
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_process_decoding_input(process_decoding_input)
"""
Explanation: Process Decoding Input
Implement process_decoding_input using TensorFlow to remove the last word id from each batch in target_data and concat the GO ID to the beginning of each batch.
End of explanation
"""
def encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob):
"""
Create encoding layer
:param rnn_inputs: Inputs for the RNN
:param rnn_size: RNN Size
:param num_layers: Number of layers
:param keep_prob: Dropout keep probability
:return: RNN state
"""
#
single_cell = tf.contrib.rnn.LSTMCell(rnn_size)
single_cell = tf.contrib.rnn.DropoutWrapper(single_cell, output_keep_prob=keep_prob)
cell = tf.contrib.rnn.MultiRNNCell([single_cell] * num_layers)
outputs,final_state = tf.nn.dynamic_rnn(cell,rnn_inputs,dtype = tf.float32)
return final_state
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_encoding_layer(encoding_layer)
"""
Explanation: Encoding
Implement encoding_layer() to create a Encoder RNN layer using tf.nn.dynamic_rnn().
End of explanation
"""
def decoding_layer_train(encoder_state, dec_cell, dec_embed_input, sequence_length, decoding_scope,
output_fn, keep_prob):
"""
Create a decoding layer for training
:param encoder_state: Encoder State
:param dec_cell: Decoder RNN Cell
:param dec_embed_input: Decoder embedded input
:param sequence_length: Sequence Length
    :param decoding_scope: TensorFlow Variable Scope for decoding
:param output_fn: Function to apply the output layer
:param keep_prob: Dropout keep probability
:return: Train Logits
"""
#with tf.variable_scope(decoding_scope,reuse=True):
# Training Decoder
train_decoder_fn = tf.contrib.seq2seq.simple_decoder_fn_train(encoder_state)
train_pred, _, _ = tf.contrib.seq2seq.dynamic_rnn_decoder(
dec_cell, train_decoder_fn, dec_embed_input, sequence_length, scope=decoding_scope)
# Apply output function
train_logits = output_fn(train_pred)
return train_logits
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_decoding_layer_train(decoding_layer_train)
"""
Explanation: Decoding - Training
Create training logits using tf.contrib.seq2seq.simple_decoder_fn_train() and tf.contrib.seq2seq.dynamic_rnn_decoder(). Apply the output_fn to the tf.contrib.seq2seq.dynamic_rnn_decoder() outputs.
End of explanation
"""
def decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id,
maximum_length, vocab_size, decoding_scope, output_fn, keep_prob):
"""
Create a decoding layer for inference
:param encoder_state: Encoder state
:param dec_cell: Decoder RNN Cell
:param dec_embeddings: Decoder embeddings
:param start_of_sequence_id: GO ID
:param end_of_sequence_id: EOS Id
:param maximum_length: The maximum allowed time steps to decode
:param vocab_size: Size of vocabulary
:param decoding_scope: TensorFlow Variable Scope for decoding
:param output_fn: Function to apply the output layer
:param keep_prob: Dropout keep probability
:return: Inference Logits
"""
#
#with tf.variable_scope(decoding_scope,reuse=True):
# Inference Decoder
decoder_fn = tf.contrib.seq2seq.simple_decoder_fn_inference(output_fn, encoder_state,dec_embeddings,
start_of_sequence_id, end_of_sequence_id, maximum_length, vocab_size)
inference_logits, _, _ = tf.contrib.seq2seq.dynamic_rnn_decoder(dec_cell, decoder_fn, scope=decoding_scope)
return inference_logits
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_decoding_layer_infer(decoding_layer_infer)
"""
Explanation: Decoding - Inference
Create inference logits using tf.contrib.seq2seq.simple_decoder_fn_inference() and tf.contrib.seq2seq.dynamic_rnn_decoder().
End of explanation
"""
def decoding_layer(dec_embed_input, dec_embeddings, encoder_state, vocab_size, sequence_length, rnn_size,
num_layers, target_vocab_to_int, keep_prob):
"""
Create decoding layer
:param dec_embed_input: Decoder embedded input
:param dec_embeddings: Decoder embeddings
:param encoder_state: The encoded state
:param vocab_size: Size of vocabulary
:param sequence_length: Sequence Length
:param rnn_size: RNN Size
:param num_layers: Number of layers
:param target_vocab_to_int: Dictionary to go from the target words to an id
:param keep_prob: Dropout keep probability
:return: Tuple of (Training Logits, Inference Logits)
"""
#TODO: maximum length for decoder inference??
with tf.variable_scope("decoding") as decoding_scope:
        # decoder cell; should it be put inside the scope?
single_cell = tf.contrib.rnn.LSTMCell(rnn_size)
# add dropout here
single_cell = tf.contrib.rnn.DropoutWrapper(single_cell, output_keep_prob=keep_prob)
dec_cell = tf.contrib.rnn.MultiRNNCell([single_cell] * num_layers)
# Output Layer
output_fn = lambda x: tf.contrib.layers.fully_connected(x, vocab_size, None, scope=decoding_scope)
# get decoding train logits
train_logits = decoding_layer_train(encoder_state,dec_cell,dec_embed_input, sequence_length, decoding_scope,
output_fn, keep_prob)
# share variables
decoding_scope.reuse_variables()
infer_logits = decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, target_vocab_to_int['<GO>'],
target_vocab_to_int['<EOS>'],sequence_length, vocab_size, decoding_scope, output_fn, keep_prob)
return train_logits, infer_logits
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_decoding_layer(decoding_layer)
"""
Explanation: Build the Decoding Layer
Implement decoding_layer() to create a Decoder RNN layer.
Create RNN cell for decoding using rnn_size and num_layers.
Create the output function using a lambda to transform its input, logits, to class logits.
Use the your decoding_layer_train(encoder_state, dec_cell, dec_embed_input, sequence_length, decoding_scope, output_fn, keep_prob) function to get the training logits.
Use your decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id, maximum_length, vocab_size, decoding_scope, output_fn, keep_prob) function to get the inference logits.
Note: You'll need to use tf.variable_scope to share variables between training and inference.
End of explanation
"""
def seq2seq_model(input_data, target_data, keep_prob, batch_size, sequence_length, source_vocab_size, target_vocab_size,
enc_embedding_size, dec_embedding_size, rnn_size, num_layers, target_vocab_to_int):
"""
Build the Sequence-to-Sequence part of the neural network
:param input_data: Input placeholder
:param target_data: Target placeholder
:param keep_prob: Dropout keep probability placeholder
:param batch_size: Batch Size
:param sequence_length: Sequence Length
:param source_vocab_size: Source vocabulary size
:param target_vocab_size: Target vocabulary size
    :param enc_embedding_size: Encoder embedding size
    :param dec_embedding_size: Decoder embedding size
:param rnn_size: RNN Size
:param num_layers: Number of layers
:param target_vocab_to_int: Dictionary to go from the target words to an id
:return: Tuple of (Training Logits, Inference Logits)
"""
#
# Apply embedding to the input data for the encoder
enc_embed_input = tf.contrib.layers.embed_sequence(input_data, source_vocab_size, enc_embedding_size)
# Encode the input using your encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob).
enc_state = encoding_layer(enc_embed_input, rnn_size, num_layers, keep_prob)
# Process target data using your process_decoding_input(target_data, target_vocab_to_int, batch_size)
target_input = process_decoding_input(target_data, target_vocab_to_int, batch_size)
# Apply embedding to the target data for the decoder.
# Decoder Embedding: different with encode embedding, the dec_embeddings is required
dec_embeddings = tf.Variable(tf.random_uniform([target_vocab_size, dec_embedding_size], minval=-1, maxval=1))
dec_embed_input = tf.nn.embedding_lookup(dec_embeddings, target_input)
# Decode the encoded input using your decoding_layer
train_logits, infer_logits = decoding_layer(dec_embed_input, dec_embeddings, enc_state, target_vocab_size,
sequence_length, rnn_size, num_layers, target_vocab_to_int, keep_prob)
return train_logits, infer_logits
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_seq2seq_model(seq2seq_model)
"""
Explanation: Build the Neural Network
Apply the functions you implemented above to:
Apply embedding to the input data for the encoder.
Encode the input using your encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob).
Process target data using your process_decoding_input(target_data, target_vocab_to_int, batch_size) function.
Apply embedding to the target data for the decoder.
Decode the encoded input using your decoding_layer(dec_embed_input, dec_embeddings, encoder_state, vocab_size, sequence_length, rnn_size, num_layers, target_vocab_to_int, keep_prob).
End of explanation
"""
# Number of Epochs
epochs = 5
# Batch Size
batch_size = 128
# RNN Size
rnn_size = 128
# Number of Layers
num_layers = 2
# Embedding Size
encoding_embedding_size = 256
decoding_embedding_size = 256
# Learning Rate
learning_rate = 0.002
# Dropout Keep Probability
keep_probability = 0.5
"""
Explanation: Neural Network Training
Hyperparameters
Tune the following parameters:
Set epochs to the number of epochs.
Set batch_size to the batch size.
Set rnn_size to the size of the RNNs.
Set num_layers to the number of layers.
Set encoding_embedding_size to the size of the embedding for the encoder.
Set decoding_embedding_size to the size of the embedding for the decoder.
Set learning_rate to the learning rate.
Set keep_probability to the Dropout keep probability
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
save_path = 'checkpoints/dev'
(source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess()
max_source_sentence_length = max([len(sentence) for sentence in source_int_text])
train_graph = tf.Graph()
with train_graph.as_default():
input_data, targets, lr, keep_prob = model_inputs()
sequence_length = tf.placeholder_with_default(max_source_sentence_length, None, name='sequence_length')
input_shape = tf.shape(input_data)
train_logits, inference_logits = seq2seq_model(
tf.reverse(input_data, [-1]), targets, keep_prob, batch_size, sequence_length, len(source_vocab_to_int), len(target_vocab_to_int),
encoding_embedding_size, decoding_embedding_size, rnn_size, num_layers, target_vocab_to_int)
tf.identity(inference_logits, 'logits')
with tf.name_scope("optimization"):
# Loss function
cost = tf.contrib.seq2seq.sequence_loss(
train_logits,
targets,
tf.ones([input_shape[0], sequence_length]))
# Optimizer
optimizer = tf.train.AdamOptimizer(lr)
# Gradient Clipping
gradients = optimizer.compute_gradients(cost)
capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None]
train_op = optimizer.apply_gradients(capped_gradients)
"""
Explanation: Build the Graph
Build the graph using the neural network you implemented.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import time
def get_accuracy(target, logits):
"""
Calculate accuracy
"""
max_seq = max(target.shape[1], logits.shape[1])
if max_seq - target.shape[1]:
target = np.pad(
target,
[(0,0),(0,max_seq - target.shape[1])],
'constant')
if max_seq - logits.shape[1]:
logits = np.pad(
logits,
[(0,0),(0,max_seq - logits.shape[1]), (0,0)],
'constant')
return np.mean(np.equal(target, np.argmax(logits, 2)))
train_source = source_int_text[batch_size:]
train_target = target_int_text[batch_size:]
valid_source = helper.pad_sentence_batch(source_int_text[:batch_size])
valid_target = helper.pad_sentence_batch(target_int_text[:batch_size])
with tf.Session(graph=train_graph) as sess:
sess.run(tf.global_variables_initializer())
for epoch_i in range(epochs):
for batch_i, (source_batch, target_batch) in enumerate(
helper.batch_data(train_source, train_target, batch_size)):
start_time = time.time()
_, loss = sess.run(
[train_op, cost],
{input_data: source_batch,
targets: target_batch,
lr: learning_rate,
sequence_length: target_batch.shape[1],
keep_prob: keep_probability})
batch_train_logits = sess.run(
inference_logits,
{input_data: source_batch, keep_prob: 1.0})
batch_valid_logits = sess.run(
inference_logits,
{input_data: valid_source, keep_prob: 1.0})
train_acc = get_accuracy(target_batch, batch_train_logits)
valid_acc = get_accuracy(np.array(valid_target), batch_valid_logits)
end_time = time.time()
print('Epoch {:>3} Batch {:>4}/{} - Train Accuracy: {:>6.3f}, Validation Accuracy: {:>6.3f}, Loss: {:>6.3f}'
.format(epoch_i, batch_i, len(source_int_text) // batch_size, train_acc, valid_acc, loss))
# Save Model
saver = tf.train.Saver()
saver.save(sess, save_path)
print('Model Trained and Saved')
"""
Explanation: Train
Train the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forums to see if anyone is having the same problem.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# Save parameters for checkpoint
helper.save_params(save_path)
"""
Explanation: Save Parameters
Save the batch_size and save_path parameters for inference.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import tensorflow as tf
import numpy as np
import helper
import problem_unittests as tests
_, (source_vocab_to_int, target_vocab_to_int), (source_int_to_vocab, target_int_to_vocab) = helper.load_preprocess()
load_path = helper.load_params()
"""
Explanation: Checkpoint
End of explanation
"""
def sentence_to_seq(sentence, vocab_to_int):
"""
Convert a sentence to a sequence of ids
:param sentence: String
:param vocab_to_int: Dictionary to go from the words to an id
:return: List of word ids
"""
#convert letters to lower-case
sentence_lower = sentence.lower()
sentence_int = [vocab_to_int.get(word,vocab_to_int['<UNK>']) for word in sentence_lower.split()]
return sentence_int
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_sentence_to_seq(sentence_to_seq)
"""
Explanation: Sentence to Sequence
To feed a sentence into the model for translation, you first need to preprocess it. Implement the function sentence_to_seq() to preprocess new sentences.
Convert the sentence to lowercase
Convert words into ids using vocab_to_int
Convert words not in the vocabulary, to the <UNK> word id.
End of explanation
"""
translate_sentence = 'he saw a old yellow truck .'
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
translate_sentence = sentence_to_seq(translate_sentence, source_vocab_to_int)
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load saved model
loader = tf.train.import_meta_graph(load_path + '.meta')
loader.restore(sess, load_path)
input_data = loaded_graph.get_tensor_by_name('input:0')
logits = loaded_graph.get_tensor_by_name('logits:0')
keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0')
translate_logits = sess.run(logits, {input_data: [translate_sentence], keep_prob: 1.0})[0]
print('Input')
print(' Word Ids: {}'.format([i for i in translate_sentence]))
print(' English Words: {}'.format([source_int_to_vocab[i] for i in translate_sentence]))
print('\nPrediction')
print(' Word Ids: {}'.format([i for i in np.argmax(translate_logits, 1)]))
print(' French Words: {}'.format([target_int_to_vocab[i] for i in np.argmax(translate_logits, 1)]))
"""
Explanation: Translate
This will translate translate_sentence from English to French.
End of explanation
"""
|
sheikhomar/ml
|
scikit-learn.ipynb
|
mit
|
import numpy as np
"""
Explanation: Scikit-Learn
scikit-learn is a Python library that provides many machine learning algorithms via a consistent API known as the estimator.
End of explanation
"""
from sklearn.model_selection import train_test_split
# Let X be our input data consisting of
# 5 samples and 2 features
X = np.arange(10).reshape(5, 2)
# Let y be the target feature
y = [0, 1, 2, 3, 4]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3)
"""
Explanation: Validation Data
Using validation data, we avoid model overfitting. In general, we split our data set into two partitions:
training: used to construct the model.
test: represents the future data.
The test data is only used to make a final estimate of the generalisation error. It is never used for fine-tuning the model. We can use the train_test_split() method to randomly split our data into training and test sets.
End of explanation
"""
from sklearn.linear_model import LinearRegression
lr = LinearRegression(normalize=True)
print(lr) # outputs the name of the estimator and its hyperparameters
"""
Explanation: Estimators
An Estimator can be seen as a base class for any algorithm that learns from data. It can be a classification, regression or clustering algorithm or a transformer that extracts useful features from raw data.
Hyperparameters and Parameters
In the documentation, there is a distinction between hyperparameters and parameters. Hyperparameters refer to algorithm settings that are used to tune the algorithm itself. They are usually set when an estimator is initialised. Parameters, on the other hand, refer to the coefficients found by the learning algorithm.
End of explanation
"""
|
jhjungCode/pytorch-tutorial
|
01_Variables.ipynb
|
mit
|
import torch
a = torch.Tensor([1])
b = torch.Tensor([2])
print(a+b)
"""
Explanation: "1 더하기 2는?" 부터 시작하기
사실 누구나 다 아는 "hello world"부터 시작하고 싶었지만, 기본(주로사용하는) 변수가 정수나 실수라고 생각하시면 됩니다.
그래서 "1 + 2"를 계산하는 코드를 만들도록 하겠습니다.
End of explanation
"""
import torch
a = torch.Tensor([1, 1, 1])
b = torch.Tensor([2, 3, 4])
print(a+b)
"""
Explanation: Really simple.
Now let's compute 1+2, 1+3 and 1+4 using a Tensor.
We just add the vector [1, 1, 1] to the vector [2, 3, 4]. The answer should be [3, 4, 5].
End of explanation
"""
import torch
from torch.autograd import Variable
x = Variable(torch.Tensor([1]), requires_grad = True)
y = x*x
y.backward()
print('x = ', x)
print('dy/dx', x.grad)
"""
Explanation: Really simple.
Now let's compute the derivative of y = x*x.
The derivative is dy/dx = 2x. Since x is 1, the derivative value is 2x, i.e. 2.
(For ease of notation, d here stands for the partial derivative, not the total derivative.)
End of explanation
"""
import torch
from torch.autograd import Variable
x1 = Variable(torch.Tensor([1]), requires_grad = True)
x2 = Variable(torch.Tensor([2]), requires_grad = True)
y = x1*x1 + x2*x2
y.backward()
print('dy/dx1', x1.grad)
print('dy/dx2', x2.grad)
"""
Explanation: Really simple.
What happens when the output y depends on two variables, x1 and x2?
Let's compute the derivatives of y = (x1^2) + (x2^2).
dy/dx1 = 2(x1) and dy/dx2 = 2(x2).
If x1 and x2 are 1 and 2, the derivatives are 2 and 4 respectively.
End of explanation
"""
import torch
from torch.autograd import Variable
x = Variable(torch.Tensor([1, 2]), requires_grad = True)
y = (x*x).sum()
y.backward()
print('dy/dx', x.grad)
"""
Explanation: Really simple.
In the example above, let's express x1 and x2 as a single Tensor variable x.
End of explanation
"""
import torch
from torch.autograd import Variable
x = Variable(torch.Tensor([1, 2]), requires_grad = True)
y = (x*x)
y.backward(torch.ones(2))
print('dy/dx', x.grad)
"""
Explanation: The thing to be careful about here is that the value z you call backward on must be a scalar. If you call backward on the vector y = x^2 directly, the y.backward() call raises an error.
If you really do want to call backward on y in tensor form, there is a way to use the gradient of the sum with respect to y, as follows.
Since sum(y) = y1 + y2, dz/dy = [d(sum)/dy1, d(sum)/dy2] = [1, 1].
End of explanation
"""
import torch
from torch.autograd import Variable
x = Variable(torch.Tensor([1, 2]), requires_grad = True)
y = (x*x).sum()
y1 = (x*x).sum()
y.backward()
y1.backward()
print(x.grad)
import torch
from torch.autograd import Variable
x = Variable(torch.Tensor([1, 2]), requires_grad = True)
y = (x*x).sum()
y1 = (x*x).sum()
z = y + y1
z.backward()
print(x.grad)
"""
Explanation: There is no session here, so what happens if you keep adding to the graph?
Repeatedly adding y, y1, y2 and calling backward on each variable
is the same as building z = y + y1 + y2 and calling backward on z alone.
End of explanation
"""
import torch
from torch.autograd import Variable
x = Variable(torch.Tensor([1, 2]), requires_grad = True)
y = (x*x).sum()
y1 = (x*x).sum()
y.backward(torch.Tensor([1]))
y1.backward(torch.Tensor([2]))
print(x.grad)
import torch
from torch.autograd import Variable
x = Variable(torch.Tensor([1, 2]), requires_grad = True)
y = (x*x).sum()
y1 = (x*x).sum()
z = y + 2*y1
z.backward()
print(x.grad)
import torch
from torch.autograd import Variable
x = Variable(torch.Tensor([1, 2]), requires_grad = True)
y = (x*x).sum()
y1 = (x*x).sum()
z = torch.cat((y, y1))
z.backward(torch.Tensor([1, 2]))
print(x.grad)
"""
Explanation: This time, let's modify the expressions above slightly. You just need to understand the difference between calling backward on each term versus how the gradients are accumulated, as well as the difference in code structure.
End of explanation
"""
|
jtwhite79/pyemu
|
verification/Freyberg/.ipynb_checkpoints/verify_null_space_proj-checkpoint.ipynb
|
bsd-3-clause
|
%matplotlib inline
import os
import shutil
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
import pyemu
"""
Explanation: verify pyEMU null space projection with the freyberg problem
End of explanation
"""
mc = pyemu.MonteCarlo(jco="freyberg.jcb",verbose=False,forecasts=[])
mc.drop_prior_information()
jco_ord = mc.jco.get(mc.pst.obs_names,mc.pst.par_names)
ord_base = "freyberg_ord"
jco_ord.to_binary(ord_base + ".jco")
mc.pst.control_data.parsaverun = ' '
mc.pst.write(ord_base+".pst")
"""
Explanation: Instantiate the pyemu object and drop the prior information. Then reorder the jacobian and save it as binary. This is needed because the PEST utilities require a strict ordering between the control file and the jacobian.
End of explanation
"""
# setup the dirs to hold all this stuff
par_dir = "prior_par_draws"
proj_dir = "proj_par_draws"
parfile_base = os.path.join(par_dir,"draw_")
projparfile_base = os.path.join(proj_dir,"draw_")
if os.path.exists(par_dir):
shutil.rmtree(par_dir)
os.mkdir(par_dir)
if os.path.exists(proj_dir):
shutil.rmtree(proj_dir)
os.mkdir(proj_dir)
mc = pyemu.MonteCarlo(jco=ord_base+".jco")
# make some draws
mc.draw(10)
#for i in range(10):
# mc.parensemble.iloc[i,:] = i+1
#write them to files
mc.parensemble.index = [str(i+1) for i in range(mc.parensemble.shape[0])]
mc.parensemble.to_parfiles(parfile_base)
mc.parensemble.shape
"""
Explanation: Draw some vectors from the prior and write the vectors to par files
End of explanation
"""
exe = os.path.join("pnulpar.exe")
args = [ord_base+".pst","y","1","y","pnulpar_qhalfx.mat",parfile_base,projparfile_base]
in_file = os.path.join("misc","pnulpar.in")
with open(in_file,'w') as f:
f.write('\n'.join(args)+'\n')
os.system(exe + ' <'+in_file)
pnul_en = pyemu.ParameterEnsemble(mc.pst)
parfiles =[os.path.join(proj_dir,f) for f in os.listdir(proj_dir) if f.endswith(".par")]
pnul_en.read_parfiles(parfiles)
pnul_en.loc[:,"fname"] = pnul_en.index
pnul_en.index = pnul_en.fname.apply(lambda x:str(int(x.split('.')[0].split('_')[-1])))
f = pnul_en.pop("fname")
pnul_en.sort_index(axis=1,inplace=True)
pnul_en.sort_index(axis=0,inplace=True)
pnul_en
"""
Explanation: Run pnulpar
End of explanation
"""
print(mc.parensemble.istransformed)
mc.parensemble._transform()
en = mc.project_parensemble(nsing=1,inplace=False)
print(mc.parensemble.istransformed)
#en._back_transform()
en.sort_index(axis=1,inplace=True)
en.sort_index(axis=0,inplace=True)
en
#pnul_en.sort(inplace=True)
#en.sort(inplace=True)
diff = 100.0 * np.abs(pnul_en - en) / en
#diff[diff<1.0] = np.NaN
dmax = diff.max(axis=0)
dmax.sort_index(ascending=False,inplace=True)
dmax.plot(figsize=(10,10))
diff
en.loc[:,"wf6_2"]
pnul_en.loc[:,"wf6_2"]
"""
Explanation: Now for pyemu
End of explanation
"""
|
ES-DOC/esdoc-jupyterhub
|
notebooks/mohc/cmip6/models/ukesm1-0-mmh/aerosol.ipynb
|
gpl-3.0
|
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'mohc', 'ukesm1-0-mmh', 'aerosol')
"""
Explanation: ES-DOC CMIP6 Model Properties - Aerosol
MIP Era: CMIP6
Institute: MOHC
Source ID: UKESM1-0-MMH
Topic: Aerosol
Sub-Topics: Transport, Emissions, Concentrations, Optical Radiative Properties, Model.
Properties: 69 (37 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:15
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
"""
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Authors
Set document authors
End of explanation
"""
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Contributors
Specify document contributors
End of explanation
"""
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
"""
Explanation: Document Publication
Specify document publication status
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Key Properties --> Timestep Framework
4. Key Properties --> Meteorological Forcings
5. Key Properties --> Resolution
6. Key Properties --> Tuning Applied
7. Transport
8. Emissions
9. Concentrations
10. Optical Radiative Properties
11. Optical Radiative Properties --> Absorption
12. Optical Radiative Properties --> Mixtures
13. Optical Radiative Properties --> Impact Of H2o
14. Optical Radiative Properties --> Radiative Scheme
15. Optical Radiative Properties --> Cloud Interactions
16. Model
1. Key Properties
Key properties of the aerosol model
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of aerosol model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of aerosol model code
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.scheme_scope')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "troposhere"
# "stratosphere"
# "mesosphere"
# "mesosphere"
# "whole atmosphere"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.3. Scheme Scope
Is Required: TRUE Type: ENUM Cardinality: 1.N
Atmospheric domains covered by the aerosol model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.basic_approximations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.4. Basic Approximations
Is Required: TRUE Type: STRING Cardinality: 1.1
Basic approximations made in the aerosol model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.prognostic_variables_form')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "3D mass/volume ratio for aerosols"
# "3D number concenttration for aerosols"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.5. Prognostic Variables Form
Is Required: TRUE Type: ENUM Cardinality: 1.N
Prognostic variables in the aerosol model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.number_of_tracers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 1.6. Number Of Tracers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of tracers in the aerosol model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.family_approach')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 1.7. Family Approach
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are aerosol calculations generalized into families of species?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2. Key Properties --> Software Properties
Software properties of aerosol code
2.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses atmospheric chemistry time stepping"
# "Specific timestepping (operator splitting)"
# "Specific timestepping (integrated)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 3. Key Properties --> Timestep Framework
Timestep framework of the aerosol model
3.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Mathematical method deployed to solve the time evolution of the prognostic variables
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.split_operator_advection_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 3.2. Split Operator Advection Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for aerosol advection (in seconds)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.split_operator_physical_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 3.3. Split Operator Physical Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for aerosol physics (in seconds).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.integrated_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 3.4. Integrated Timestep
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Timestep for the aerosol model (in seconds)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.integrated_scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Implicit"
# "Semi-implicit"
# "Semi-analytic"
# "Impact solver"
# "Back Euler"
# "Newton Raphson"
# "Rosenbrock"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 3.5. Integrated Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify the type of timestep scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.variables_3D')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4. Key Properties --> Meteorological Forcings
**
4.1. Variables 3D
Is Required: FALSE Type: STRING Cardinality: 0.1
Three dimensional forcing variables, e.g. U, V, W, T, Q, P, convective mass flux
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.variables_2D')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4.2. Variables 2D
Is Required: FALSE Type: STRING Cardinality: 0.1
Two dimensionsal forcing variables, e.g. land-sea mask definition
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.frequency')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 4.3. Frequency
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Frequency with which meteorological forcings are applied (in seconds).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5. Key Properties --> Resolution
Resolution in the aersosol model grid
5.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5.2. Canonical Horizontal Resolution
Is Required: FALSE Type: STRING Cardinality: 0.1
Expression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 5.3. Number Of Horizontal Gridpoints
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Total number of horizontal (XY) points (or degrees of freedom) on computational grid.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 5.4. Number Of Vertical Levels
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Number of vertical levels resolved on computational grid.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 5.5. Is Adaptive Grid
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Default is False. Set true if grid resolution changes during execution.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6. Key Properties --> Tuning Applied
Tuning methodology for aerosol model
6.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process oriented metrics, and the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.2. Global Mean Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List set of metrics of the global mean state used in tuning model/component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.3. Regional Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List of regional metrics of mean state used in tuning model/component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.4. Trend Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List observed trend metrics used in tuning model/component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7. Transport
Aerosol transport
7.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of transport in the atmospheric aerosol model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Specific transport scheme (eulerian)"
# "Specific transport scheme (semi-lagrangian)"
# "Specific transport scheme (eulerian and semi-lagrangian)"
# "Specific transport scheme (lagrangian)"
# TODO - please enter value(s)
"""
Explanation: 7.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method for aerosol transport modeling
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.mass_conservation_scheme')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Mass adjustment"
# "Concentrations positivity"
# "Gradients monotonicity"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 7.3. Mass Conservation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.N
Method used to ensure mass conservation.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.convention')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Convective fluxes connected to tracers"
# "Vertical velocities connected to tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 7.4. Convention
Is Required: TRUE Type: ENUM Cardinality: 1.N
Transport by convection
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8. Emissions
Atmospheric aerosol emissions
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of emissions in the atmospheric aerosol model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Prescribed (climatology)"
# "Prescribed CMIP6"
# "Prescribed above surface"
# "Interactive"
# "Interactive above surface"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 8.2. Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Method used to define aerosol species (several methods allowed because the different species may not use the same method).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.sources')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Vegetation"
# "Volcanos"
# "Bare ground"
# "Sea surface"
# "Lightning"
# "Fires"
# "Aircraft"
# "Anthropogenic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 8.3. Sources
Is Required: FALSE Type: ENUM Cardinality: 0.N
Sources of the aerosol species are taken into account in the emissions scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_climatology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Interannual"
# "Annual"
# "Monthly"
# "Daily"
# TODO - please enter value(s)
"""
Explanation: 8.4. Prescribed Climatology
Is Required: FALSE Type: ENUM Cardinality: 0.1
Specify the climatology type for aerosol emissions
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_climatology_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.5. Prescribed Climatology Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and prescribed via a climatology
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_spatially_uniform_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.6. Prescribed Spatially Uniform Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and prescribed as spatially uniform
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.interactive_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.7. Interactive Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and specified via an interactive method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.other_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.8. Other Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and specified via an "other method"
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.other_method_characteristics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.9. Other Method Characteristics
Is Required: FALSE Type: STRING Cardinality: 0.1
Characteristics of the "other method" used for aerosol emissions
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9. Concentrations
Atmospheric aerosol concentrations
9.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of concentrations in the atmospheric aerosol model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_lower_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.2. Prescribed Lower Boundary
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed at the lower boundary.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_upper_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.3. Prescribed Upper Boundary
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed at the upper boundary.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_fields_mmr')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.4. Prescribed Fields Mmr
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed as mass mixing ratios.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_fields_mmr')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.5. Prescribed Fields Mmr
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed as AOD plus CCNs.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 10. Optical Radiative Properties
Aerosol optical and radiative properties
10.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of optical and radiative properties
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.black_carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 11. Optical Radiative Properties --> Absorption
Absorption properties in the aerosol scheme
11.1. Black Carbon
Is Required: FALSE Type: FLOAT Cardinality: 0.1
Absorption mass coefficient of black carbon at 550nm (if non-absorbing enter 0)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.dust')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 11.2. Dust
Is Required: FALSE Type: FLOAT Cardinality: 0.1
Absorption mass coefficient of dust at 550nm (if non-absorbing enter 0)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.organics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 11.3. Organics
Is Required: FALSE Type: FLOAT Cardinality: 0.1
Absorption mass coefficient of organics at 550nm (if non-absorbing enter 0)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.external')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 12. Optical Radiative Properties --> Mixtures
**
12.1. External
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there external mixing with respect to chemical composition?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.internal')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 12.2. Internal
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there internal mixing with respect to chemical composition?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.mixing_rule')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 12.3. Mixing Rule
Is Required: FALSE Type: STRING Cardinality: 0.1
If there is internal mixing with respect to chemical composition then indicate the mixing rule
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.size')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 13. Optical Radiative Properties --> Impact Of H2o
**
13.1. Size
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does H2O impact size?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.internal_mixture')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 13.2. Internal Mixture
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does H2O impact internal mixture?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 14. Optical Radiative Properties --> Radiative Scheme
Radiative scheme for aerosol
14.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of radiative scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.shortwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 14.2. Shortwave Bands
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of shortwave bands
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.longwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 14.3. Longwave Bands
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of longwave bands
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15. Optical Radiative Properties --> Cloud Interactions
Aerosol-cloud interactions
15.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of aerosol-cloud interactions
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.twomey')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 15.2. Twomey
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the Twomey effect included?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.twomey_minimum_ccn')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 15.3. Twomey Minimum Ccn
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If the Twomey effect is included, then what is the minimum CCN number?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.drizzle')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 15.4. Drizzle
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the scheme affect drizzle?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.cloud_lifetime')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 15.5. Cloud Lifetime
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the scheme affect cloud lifetime?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.longwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 15.6. Longwave Bands
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of longwave bands
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 16. Model
Aerosol model
16.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the atmospheric aerosol model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Dry deposition"
# "Sedimentation"
# "Wet deposition (impaction scavenging)"
# "Wet deposition (nucleation scavenging)"
# "Coagulation"
# "Oxidation (gas phase)"
# "Oxidation (in cloud)"
# "Condensation"
# "Ageing"
# "Advection (horizontal)"
# "Advection (vertical)"
# "Heterogeneous chemistry"
# "Nucleation"
# TODO - please enter value(s)
"""
Explanation: 16.2. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Processes included in the Aerosol model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Radiation"
# "Land surface"
# "Heterogeneous chemistry"
# "Clouds"
# "Ocean"
# "Cryosphere"
# "Gas phase chemistry"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 16.3. Coupling
Is Required: FALSE Type: ENUM Cardinality: 0.N
Other model components coupled to the Aerosol model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.gas_phase_precursors')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "DMS"
# "SO2"
# "Ammonia"
# "Iodine"
# "Terpene"
# "Isoprene"
# "VOC"
# "NOx"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 16.4. Gas Phase Precursors
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of gas phase aerosol precursors.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Bulk"
# "Modal"
# "Bin"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 16.5. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Type(s) of aerosol scheme used by the aerosol model (potentially multiple: some species may be covered by one type of aerosol scheme and other species covered by another type).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.bulk_scheme_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sulphate"
# "Nitrate"
# "Sea salt"
# "Dust"
# "Ice"
# "Organic"
# "Black carbon / soot"
# "SOA (secondary organic aerosols)"
# "POM (particulate organic matter)"
# "Polar stratospheric ice"
# "NAT (Nitric acid trihydrate)"
# "NAD (Nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particule)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 16.6. Bulk Scheme Species
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of species covered by the bulk scheme.
End of explanation
"""
|
sfomel/ipython
|
Qingdao.ipynb
|
gpl-2.0
|
from m8r import view
view('data')
"""
Explanation: Section
List
One
Two
Link: See Madagascar
Explanation for the modeling part
We create a model in time-velocity space $(t_0,v)$, then we model data by spreading in time-distance space $(t,x)$ over hyperbolas $t(x) = \sqrt{t_0^2 + \frac{x^2}{v^2}}$.
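As a minimal NumPy sketch (not part of the Madagascar flow itself), the moveout curve can be evaluated for illustrative values of $t_0$ and $v$:
```
import numpy as np

t0 = 1.0                        # zero-offset time (s), illustrative value
v = 2.0                         # velocity (km/s), illustrative value
x = np.linspace(0.1, 0.89, 80)  # offsets (km), illustrative spacing
t = np.sqrt(t0**2 + (x / v)**2)
print(t[:5])                    # traveltimes along the hyperbola
```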
End of explanation
"""
%%file adjoint.scons
Flow('tran','data','veltran adj=y nv=100')
Result('tran','grey title="Velocity Transform" ')
view('tran')
!ls
%%file mute.scons
Flow('data1','tran','cut max2=0.5 | veltran adj=n x0=0.1 dx=0.01 nx=80')
Result('data1','grey title=Noise')
Flow('data2','data data1','add scale=1,-1 ${SOURCES[1]}')
Result('data2','grey title=Signal')
view('data1')
view('data2')
!sfdottest sfveltran nv=100 mod=tran.rsf dat=data.rsf
%%file inversion.scons
# Run inversion using Conjugate Gradients
Flow('inv','data tran',
'conjgrad veltran mod=${SOURCES[1]} nv=100 niter=10')
Result('inv','grey title="Velocity Transform (Inverse)" ')
Flow('idata1','inv','cut max2=0.5 | veltran adj=n x0=0.1 dx=0.01 nx=80')
Result('idata1','grey title=Noise')
Flow('idata2','data idata1','add scale=1,-1 ${SOURCES[1]}')
Result('idata2','grey title=Signal')
view('inv')
view('idata1')
view('idata2')
"""
Explanation: Let us go back to the time-velocity space.
If $\mathbf{L}$ is a linear operator, $\mathbf{L}^T$ is the adjoint operator.
The dot-product test is
$\mathbf{d}^T\,\mathbf{L}\,\mathbf{m} = \mathbf{m}^T\,\mathbf{L}^T\,\mathbf{d}$
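The same identity can be checked numerically for any operator represented as a matrix; a small self-contained NumPy sketch (independent of sfdottest) is:
```
import numpy as np

rng = np.random.default_rng(0)
L = rng.normal(size=(50, 30))  # a generic linear operator as a matrix
m = rng.normal(size=30)        # random model vector
d = rng.normal(size=50)        # random data vector
lhs = d @ (L @ m)              # d^T L m
rhs = m @ (L.T @ d)            # m^T L^T d
print(np.isclose(lhs, rhs))    # True up to round-off
```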
End of explanation
"""
|
mayank-johri/LearnSeleniumUsingPython
|
Section 1 - Core Python/Chapter 08 - Modules/Chapter8_Modules.ipynb
|
gpl-3.0
|
import os
def get_os_details():
print(os.name)
print(type(os.name))
print(os.path.abspath(os.path.curdir))
get_os_details()
"""
Explanation: Modules
A module is a file/directory containing Python definitions and statements.
In Python, modules are Python files that can be imported into a program. They can contain any Python structure and are run when imported. When imported for the very first time, they are compiled and cached in a binary file (with the extension ".pyc" or ".pyo"), have their own namespaces and support docstrings.
They are singleton objects: only one instance is loaded into memory and is available globally to the program, so the module body is executed only once.
The module's name (as a string) is available as the value of the global variable __name__.
Modules are located by the interpreter through the list of folders in sys.path (initialized from PYTHONPATH), which usually includes the current directory first.
Modules are loaded with the import statement. Thus, when using a structure from a module, it is necessary to identify the module it comes from. This is called an absolute import.
If it is necessary to run the module again during the execution of the application, it has to be reloaded with the reload() function (importlib.reload() in Python 3).
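A minimal sketch of reloading, using the calc module defined later in this chapter as the example:
```
import importlib
import calc

# ... after editing calc.py on disk ...
importlib.reload(calc)  # re-executes the module body and refreshes its namespace
```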
End of explanation
"""
# General form: from <module> import <attribute/submodule(s)>
from os import name
print(type(name))
from os import name as os_name
print(os_name)
from os import environ, linesep
"""
Explanation: The from...import Statement
Python allows importing specific portions of a module instead of the whole module by using the from ... import statement.
The format is as follows:
End of explanation
"""
from os import *
print(name)
"""
Explanation: NOTE: To avoid problems such as variable obfuscation, the absolute import is considered a better programming practice than the relative import.
The from ... import * Statement
This variant can be used to import all names that a module defines
NOTE : NEVER EVER EVER USE IT
End of explanation
"""
# File calc.py
# Function defined in module
def average(list): return float(sum(list)) / len(list)
"""
Explanation: Example of user defined module:
End of explanation
"""
# Imports calc module
import calc
l = [23, 54, 31, 77, 12, 34]
# Calls the function defined in calc
print (calc.average(l))
"""
Explanation: Example of module usage:
End of explanation
"""
if __name__ == "__main__":
# Code here will only be run
# if it is the main module
# and not when it is imported by another program/module
pass
"""
Explanation: The main module of a program has the variable __name__ equals to __main__, thus it is possible to test if the main module:
End of explanation
"""
"""
modutils => utility routines for modules
"""
import os.path
import sys
import glob
def find(txt):
"""find modules with name containing the parameter."""
resp = []
for path in sys.path:
mods = glob.glob('%s/*.py' % path)
for mod in mods:
if txt in os.path.basename(mod):
resp.append(mod)
return resp
"""
Explanation: That way it is easy to turn a program into a module.
Another module example:
End of explanation
"""
from os.path import getsize, getmtime
from time import localtime, asctime
import modutils
mods = modutils.find('os')
print("Valid attributes of module: ", dir(modutils))
for mod in mods:
tm = asctime(localtime(getmtime(mod)))
kb = getsize(mod) / 1024
print ('{0}: ({1} kbytes, {2})'.format(mod, kb, tm))
"""
Explanation: Example module use:
End of explanation
"""
import maya_util.db
## Magic of modules
### Hidden modules
# 1)
cond = False
if cond:
from A import amp1 as a
else:
from B import bmp1 as a
# 2)
if cond:
from A import a
print(a.test)
else:
from AA import aa as a
print(a.test)
# 3)
# Check how many times import statement is called
for i in range(1,10):
if cond:
import a
print(a.test)
else:
import aa as a
print(a.test)
###
## PLEASE ADD advance topics in it.
"""
Explanation: TIP: Splitting programs into modules makes it easy to reuse and locate faults in the code.
The Module Search Path
When a module is requested to be imported, the interpreter first searches for a built-in module with that name. If not found, it then searches for a file named <module_name>.py in a list of directories given by the variable sys.path. sys.path is initialized from these locations:
The directory containing the input script (or the current directory when no file is specified).
PYTHONPATH (a list of directory names, with the same syntax as the shell variable PATH).
The installation-dependent default.
Packages
NOTE: To be discussed after classes
Packages are a way of structuring Python’s module namespace by using “dotted module names”. For example, the module name A.B designates a submodule named B in a package named A. Just like the use of modules saves the authors of different modules from having to worry about each other’s global variable names, the use of dotted module names saves the authors of multi-module packages like NumPy or the Python Imaging Library from having to worry about each other’s module names.
Suppose you want to design a collection of modules (a “package”) for the uniform handling of sound files and sound data. There are many different sound file formats (usually recognized by their extension, for example: .wav, .aiff, .au), so you may need to create and maintain a growing collection of modules for the conversion between the various file formats. There are also many different operations you might want to perform on sound data (such as mixing, adding echo, applying an equalizer function, creating an artificial stereo effect), so in addition you will be writing a never-ending stream of modules to perform these operations. Here’s a possible structure for your package (expressed in terms of a hierarchical filesystem):
Package maya_util structure
```
./maya_util:
init.py db json misc.py
./maya_util/db:
init.py read.py write.py
./maya_util/json:
init.py read.py write.py
```
The __init__.py files are required to make Python treat the directories as containing packages; this is done to prevent directories with a common name, such as string, from unintentionally hiding valid modules that occur later on the module search path. In the simplest case, __init__.py can just be an empty file, but it can also execute initialization code for the package or set the __all__ variable, described later.
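As a small, hypothetical sketch for the maya_util example above, such an __init__.py might look like:
```
# maya_util/__init__.py -- runs once, when the package is first imported
print("initializing maya_util")

# Names exported by "from maya_util import *"
__all__ = ["db", "json", "misc"]
```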
Users of the package can import individual modules from the package, for example:
End of explanation
"""
|
AtmaMani/pyChakras
|
udemy_ml_bootcamp/Data-Capstone-Projects/911 Calls Data Capstone Project - Solutions.ipynb
|
mit
|
import numpy as np
import pandas as pd
"""
Explanation: 911 Calls Capstone Project - Solutions
For this capstone project we will be analyzing some 911 call data from Kaggle. The data contains the following fields:
lat : String variable, Latitude
lng: String variable, Longitude
desc: String variable, Description of the Emergency Call
zip: String variable, Zipcode
title: String variable, Title
timeStamp: String variable, YYYY-MM-DD HH:MM:SS
twp: String variable, Township
addr: String variable, Address
e: String variable, Dummy variable (always 1)
Just go along with this notebook and try to complete the instructions or answer the questions in bold using your Python and Data Science skills!
Data and Setup
Import numpy and pandas
End of explanation
"""
import matplotlib.pyplot as plt
import seaborn as sns
sns.set_style('whitegrid')
%matplotlib inline
"""
Explanation: Import visualization libraries and set %matplotlib inline.
End of explanation
"""
df = pd.read_csv('911.csv')
"""
Explanation: Read in the csv file as a dataframe called df
End of explanation
"""
df.info()
"""
Explanation: Check the info() of the df
End of explanation
"""
df.head(3)
"""
Explanation: Check the head of df
End of explanation
"""
df['zip'].value_counts().head(5)
"""
Explanation: Basic Questions
What are the top 5 zipcodes for 911 calls?
End of explanation
"""
df['twp'].value_counts().head(5)
"""
Explanation: What are the top 5 townships (twp) for 911 calls?
End of explanation
"""
df['title'].nunique()
"""
Explanation: Take a look at the 'title' column, how many unique title codes are there?
End of explanation
"""
df['Reason'] = df['title'].apply(lambda title: title.split(':')[0])
"""
Explanation: Creating new features
In the titles column there are "Reasons/Departments" specified before the title code. These are EMS, Fire, and Traffic. Use .apply() with a custom lambda expression to create a new column called "Reason" that contains this string value.
For example, if the title column value is EMS: BACK PAINS/INJURY , the Reason column value would be EMS.
End of explanation
"""
df['Reason'].value_counts()
"""
Explanation: What is the most common Reason for a 911 call based off of this new column?
End of explanation
"""
sns.countplot(x='Reason',data=df,palette='viridis')
"""
Explanation: Now use seaborn to create a countplot of 911 calls by Reason.
End of explanation
"""
type(df['timeStamp'].iloc[0])
"""
Explanation: Now let us begin to focus on time information. What is the data type of the objects in the timeStamp column?
End of explanation
"""
df['timeStamp'] = pd.to_datetime(df['timeStamp'])
"""
Explanation: You should have seen that these timestamps are still strings. Use pd.to_datetime to convert the column from strings to DateTime objects.
End of explanation
"""
df['Hour'] = df['timeStamp'].apply(lambda time: time.hour)
df['Month'] = df['timeStamp'].apply(lambda time: time.month)
df['Day of Week'] = df['timeStamp'].apply(lambda time: time.dayofweek)
"""
Explanation: You can now grab specific attributes from a Datetime object by calling them. For example:
time = df['timeStamp'].iloc[0]
time.hour
You can use Jupyter's tab completion to explore the various attributes you can call. Now that the timeStamp column holds actual DateTime objects, use .apply() to create 3 new columns called Hour, Month, and Day of Week. You will create these columns based off of the timeStamp column; reference the solutions if you get stuck on this step.
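An equivalent, vectorized way to build the same columns (a sketch using pandas' .dt accessor instead of .apply) would be:
```
df['Hour'] = df['timeStamp'].dt.hour
df['Month'] = df['timeStamp'].dt.month
df['Day of Week'] = df['timeStamp'].dt.dayofweek
```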
End of explanation
"""
dmap = {0:'Mon',1:'Tue',2:'Wed',3:'Thu',4:'Fri',5:'Sat',6:'Sun'}
df['Day of Week'] = df['Day of Week'].map(dmap)
"""
Explanation: Notice how the Day of Week is an integer 0-6. Use the .map() with this dictionary to map the actual string names to the day of the week:
dmap = {0:'Mon',1:'Tue',2:'Wed',3:'Thu',4:'Fri',5:'Sat',6:'Sun'}
End of explanation
"""
sns.countplot(x='Day of Week',data=df,hue='Reason',palette='viridis')
# To relocate the legend
plt.legend(bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.)
"""
Explanation: Now use seaborn to create a countplot of the Day of Week column with the hue based off of the Reason column.
End of explanation
"""
sns.countplot(x='Month',data=df,hue='Reason',palette='viridis')
# To relocate the legend
plt.legend(bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.)
"""
Explanation: Now do the same for Month:
End of explanation
"""
# It is missing some months! 9,10, and 11 are not there.
"""
Explanation: Did you notice something strange about the Plot?
End of explanation
"""
byMonth = df.groupby('Month').count()
byMonth.head()
"""
Explanation: You should have noticed it was missing some Months, let's see if we can maybe fill in this information by plotting the information in another way, possibly a simple line plot that fills in the missing months, in order to do this, we'll need to do some work with pandas...
Now create a gropuby object called byMonth, where you group the DataFrame by the month column and use the count() method for aggregation. Use the head() method on this returned DataFrame.
End of explanation
"""
# Could be any column
byMonth['twp'].plot()
"""
Explanation: Now create a simple plot off of the dataframe indicating the count of calls per month.
End of explanation
"""
sns.lmplot(x='Month',y='twp',data=byMonth.reset_index())
"""
Explanation: Now see if you can use seaborn's lmplot() to create a linear fit on the number of calls per month. Keep in mind you may need to reset the index to a column.
End of explanation
"""
df['Date']=df['timeStamp'].apply(lambda t: t.date())
"""
Explanation: Create a new column called 'Date' that contains the date from the timeStamp column. You'll need to use apply along with the .date() method.
End of explanation
"""
df.groupby('Date').count()['twp'].plot()
plt.tight_layout()
"""
Explanation: Now groupby this Date column with the count() aggregate and create a plot of counts of 911 calls.
End of explanation
"""
df[df['Reason']=='Traffic'].groupby('Date').count()['twp'].plot()
plt.title('Traffic')
plt.tight_layout()
df[df['Reason']=='Fire'].groupby('Date').count()['twp'].plot()
plt.title('Fire')
plt.tight_layout()
df[df['Reason']=='EMS'].groupby('Date').count()['twp'].plot()
plt.title('EMS')
plt.tight_layout()
"""
Explanation: Now recreate this plot but create 3 separate plots with each plot representing a Reason for the 911 call
End of explanation
"""
dayHour = df.groupby(by=['Day of Week','Hour']).count()['Reason'].unstack()
dayHour.head()
"""
Explanation: Now let's move on to creating heatmaps with seaborn and our data. We'll first need to restructure the dataframe so that the columns become the Hours and the Index becomes the Day of the Week. There are lots of ways to do this, but I would recommend trying to combine groupby with an unstack method. Reference the solutions if you get stuck on this!
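If you'd rather avoid the unstack step, a pivot table gives the same shape (a sketch of one alternative approach):
```
dayHour_alt = df.pivot_table(index='Day of Week', columns='Hour',
                             values='Reason', aggfunc='count')
```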
End of explanation
"""
plt.figure(figsize=(12,6))
sns.heatmap(dayHour,cmap='viridis')
"""
Explanation: Now create a HeatMap using this new DataFrame.
End of explanation
"""
sns.clustermap(dayHour,cmap='viridis')
"""
Explanation: Now create a clustermap using this DataFrame.
End of explanation
"""
dayMonth = df.groupby(by=['Day of Week','Month']).count()['Reason'].unstack()
dayMonth.head()
plt.figure(figsize=(12,6))
sns.heatmap(dayMonth,cmap='viridis')
sns.clustermap(dayMonth,cmap='viridis')
"""
Explanation: Now repeat these same plots and operations, for a DataFrame that shows the Month as the column.
End of explanation
"""
|
azhurb/deep-learning
|
reinforcement/Q-learning-cart.ipynb
|
mit
|
import gym
import tensorflow as tf
import numpy as np
"""
Explanation: Deep Q-learning
In this notebook, we'll build a neural network that can learn to play games through reinforcement learning. More specifically, we'll use Q-learning to train an agent to play a game called Cart-Pole. In this game, a freely swinging pole is attached to a cart. The cart can move to the left and right, and the goal is to keep the pole upright as long as possible.
We can simulate this game using OpenAI Gym. First, let's check out how OpenAI Gym works. Then, we'll get into training an agent to play the Cart-Pole game.
End of explanation
"""
# Create the Cart-Pole game environment
env = gym.make('CartPole-v0')
"""
Explanation: Note: Make sure you have OpenAI Gym cloned into the same directory with this notebook. I've included gym as a submodule, so you can run git submodule update --init --recursive to pull the contents into the gym repo.
End of explanation
"""
env.reset()
rewards = []
for _ in range(100):
env.render()
state, reward, done, info = env.step(env.action_space.sample()) # take a random action
rewards.append(reward)
if done:
rewards = []
env.reset()
"""
Explanation: We interact with the simulation through env. To show the simulation running, you can use env.render() to render one frame. Passing in an action as an integer to env.step will generate the next step in the simulation. You can see how many actions are possible from env.action_space and to get a random action you can use env.action_space.sample(). This is general to all Gym games. In the Cart-Pole game, there are two possible actions, moving the cart left or right. So there are two actions we can take, encoded as 0 and 1.
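Before taking any steps you can also inspect the spaces directly; a quick sketch (the comments show what Gym typically reports for CartPole-v0):
```
print(env.action_space)           # Discrete(2): actions 0 (left) and 1 (right)
print(env.observation_space)      # Box(4,): cart position/velocity, pole angle/velocity
print(env.action_space.sample())  # a random action, either 0 or 1
```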
Run the code below to watch the simulation run.
End of explanation
"""
env.close()
"""
Explanation: To shut the window showing the simulation, use env.close().
End of explanation
"""
print(rewards[-20:])
"""
Explanation: If you ran the simulation above, we can look at the rewards:
End of explanation
"""
class QNetwork:
def __init__(self, learning_rate=0.01, state_size=4,
action_size=2, hidden_size=10,
name='QNetwork'):
# state inputs to the Q-network
with tf.variable_scope(name):
self.inputs_ = tf.placeholder(tf.float32, [None, state_size], name='inputs')
# One hot encode the actions to later choose the Q-value for the action
self.actions_ = tf.placeholder(tf.int32, [None], name='actions')
one_hot_actions = tf.one_hot(self.actions_, action_size)
# Target Q values for training
self.targetQs_ = tf.placeholder(tf.float32, [None], name='target')
# ReLU hidden layers
self.fc1 = tf.contrib.layers.fully_connected(self.inputs_, hidden_size)
self.fc2 = tf.contrib.layers.fully_connected(self.fc1, hidden_size)
# Linear output layer
self.output = tf.contrib.layers.fully_connected(self.fc2, action_size,
activation_fn=None)
### Train with loss (targetQ - Q)^2
# output has length 2, for two actions. This next line chooses
# one value from output (per row) according to the one-hot encoded actions.
self.Q = tf.reduce_sum(tf.multiply(self.output, one_hot_actions), axis=1)
self.loss = tf.reduce_mean(tf.square(self.targetQs_ - self.Q))
self.opt = tf.train.AdamOptimizer(learning_rate).minimize(self.loss)
"""
Explanation: The game resets after the pole has fallen past a certain angle. For each frame while the simulation is running, it returns a reward of 1.0. The longer the game runs, the more reward we get. Then, our network's goal is to maximize the reward by keeping the pole vertical. It will do this by moving the cart to the left and the right.
Q-Network
We train our Q-learning agent using the Bellman Equation:
$$
Q(s, a) = r + \gamma \max_{a'}{Q(s', a')}
$$
where $s$ is a state, $a$ is an action, and $s'$ is the next state from state $s$ and action $a$.
Before we used this equation to learn values for a Q-table. However, for this game there are a huge number of states available. The state has four values: the position and velocity of the cart, and the position and velocity of the pole. These are all real-valued numbers, so ignoring floating point precisions, you practically have infinite states. Instead of using a table then, we'll replace it with a neural network that will approximate the Q-table lookup function.
<img src="assets/deep-q-learning.png" width=450px>
Now, our Q value, $Q(s, a)$ is calculated by passing in a state to the network. The output will be Q-values for each available action, with fully connected hidden layers.
<img src="assets/q-network.png" width=550px>
As I showed before, we can define our targets for training as $\hat{Q}(s,a) = r + \gamma \max_{a'}{Q(s', a')}$. Then we update the weights by minimizing $(\hat{Q}(s,a) - Q(s,a))^2$.
For this Cart-Pole game, we have four inputs, one for each value in the state, and two outputs, one for each action. To get $\hat{Q}$, we'll first choose an action, then simulate the game using that action. This will get us the next state, $s'$, and the reward. With that, we can calculate $\hat{Q}$ then pass it back into the $Q$ network to run the optimizer and update the weights.
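As a quick numerical illustration with made-up numbers: suppose a step returns reward $r = 1$, the discount is $\gamma = 0.99$, and the network's largest Q-value for the next state is $\max_{a'}{Q(s', a')} = 20$. The training target for the action taken is then
$$
\hat{Q}(s,a) = 1 + 0.99 \times 20 = 20.8,
$$
and the loss contribution for that transition is $(20.8 - Q(s,a))^2$.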
Below is my implementation of the Q-network. I used two fully connected layers with ReLU activations. Two seems to be good enough, three might be better. Feel free to try it out.
End of explanation
"""
from collections import deque
class Memory():
def __init__(self, max_size = 1000):
self.buffer = deque(maxlen=max_size)
def add(self, experience):
self.buffer.append(experience)
def sample(self, batch_size):
idx = np.random.choice(np.arange(len(self.buffer)),
size=batch_size,
replace=False)
return [self.buffer[ii] for ii in idx]
"""
Explanation: Experience replay
Reinforcement learning algorithms can have stability issues due to correlations between states. To reduce correlations when training, we can store the agent's experiences and later draw a random mini-batch of those experiences to train on.
Here, we'll create a Memory object that will store our experiences, our transitions $<s, a, r, s'>$. This memory will have a maxmium capacity, so we can keep newer experiences in memory while getting rid of older experiences. Then, we'll sample a random mini-batch of transitions $<s, a, r, s'>$ and train on those.
Below, I've implemented a Memory object. If you're unfamiliar with deque, this is a double-ended queue. You can think of it like a tube open on both sides. You can put objects in either side of the tube. But if it's full, adding anything more will push an object out the other side. This is a great data structure to use for the memory buffer.
End of explanation
"""
train_episodes = 1000 # max number of episodes to learn from
max_steps = 200 # max steps in an episode
gamma = 0.99 # future reward discount
# Exploration parameters
explore_start = 1.0 # exploration probability at start
explore_stop = 0.01 # minimum exploration probability
decay_rate = 0.0001 # exponential decay rate for exploration prob
# Network parameters
hidden_size = 64 # number of units in each Q-network hidden layer
learning_rate = 0.0001 # Q-network learning rate
# Memory parameters
memory_size = 10000 # memory capacity
batch_size = 20 # experience mini-batch size
pretrain_length = batch_size # number experiences to pretrain the memory
tf.reset_default_graph()
mainQN = QNetwork(name='main', hidden_size=hidden_size, learning_rate=learning_rate)
"""
Explanation: Exploration - Exploitation
To learn about the environment and rules of the game, the agent needs to explore by taking random actions. We'll do this by choosing a random action with some probability $\epsilon$ (epsilon). That is, with some probability $\epsilon$ the agent will make a random action and with probability $1 - \epsilon$, the agent will choose an action from $Q(s,a)$. This is called an $\epsilon$-greedy policy.
At first, the agent needs to do a lot of exploring. Later when it has learned more, the agent can favor choosing actions based on what it has learned. This is called exploitation. We'll set it up so the agent is more likely to explore early in training, then more likely to exploit later in training.
Q-Learning training algorithm
Putting all this together, we can list out the algorithm we'll use to train the network. We'll train the network in episodes. One episode is one simulation of the game. For this game, the goal is to keep the pole upright for 195 frames. So we can start a new episode once meeting that goal. The game ends if the pole tilts over too far, or if the cart moves too far the left or right. When a game ends, we'll start a new episode. Now, to train the agent:
Initialize the memory $D$
Initialize the action-value network $Q$ with random weights
For episode = 1, $M$ do
For $t$, $T$ do
With probability $\epsilon$ select a random action $a_t$, otherwise select $a_t = \mathrm{argmax}_a Q(s,a)$
Execute action $a_t$ in simulator and observe reward $r_{t+1}$ and new state $s_{t+1}$
Store transition $<s_t, a_t, r_{t+1}, s_{t+1}>$ in memory $D$
Sample random mini-batch from $D$: $<s_j, a_j, r_j, s'_j>$
Set $\hat{Q}j = r_j$ if the episode ends at $j+1$, otherwise set $\hat{Q}_j = r_j + \gamma \max{a'}{Q(s'_j, a')}$
Make a gradient descent step with loss $(\hat{Q}_j - Q(s_j, a_j))^2$
endfor
endfor
Hyperparameters
One of the more difficult aspects of reinforcement learning is the large number of hyperparameters. Not only are we tuning the network, but we're tuning the simulation.
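To get a feel for the exploration schedule before training, here is a short sketch (not part of the original notebook) that plots how the exploration probability decays with the step count, using the parameters defined above:
```
import numpy as np
import matplotlib.pyplot as plt

steps = np.arange(0, 50000)
explore_p = explore_stop + (explore_start - explore_stop) * np.exp(-decay_rate * steps)
plt.plot(steps, explore_p)
plt.xlabel('Training step')
plt.ylabel('Exploration probability')
```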
End of explanation
"""
# Initialize the simulation
env.reset()
# Take one random step to get the pole and cart moving
state, reward, done, _ = env.step(env.action_space.sample())
memory = Memory(max_size=memory_size)
# Make a bunch of random actions and store the experiences
for ii in range(pretrain_length):
# Uncomment the line below to watch the simulation
# env.render()
# Make a random action
action = env.action_space.sample()
next_state, reward, done, _ = env.step(action)
if done:
# The simulation fails so no next state
next_state = np.zeros(state.shape)
# Add experience to memory
memory.add((state, action, reward, next_state))
# Start new episode
env.reset()
# Take one random step to get the pole and cart moving
state, reward, done, _ = env.step(env.action_space.sample())
else:
# Add experience to memory
memory.add((state, action, reward, next_state))
state = next_state
"""
Explanation: Populate the experience memory
Here I'm re-initializing the simulation and pre-populating the memory. The agent is taking random actions and storing the transitions in memory. This will help the agent with exploring the game.
End of explanation
"""
# Now train with experiences
saver = tf.train.Saver()
rewards_list = []
with tf.Session() as sess:
# Initialize variables
sess.run(tf.global_variables_initializer())
step = 0
for ep in range(1, train_episodes):
total_reward = 0
t = 0
while t < max_steps:
step += 1
# Uncomment this next line to watch the training
# env.render()
# Explore or Exploit
explore_p = explore_stop + (explore_start - explore_stop)*np.exp(-decay_rate*step)
if explore_p > np.random.rand():
# Make a random action
action = env.action_space.sample()
else:
# Get action from Q-network
feed = {mainQN.inputs_: state.reshape((1, *state.shape))}
Qs = sess.run(mainQN.output, feed_dict=feed)
action = np.argmax(Qs)
# Take action, get new state and reward
next_state, reward, done, _ = env.step(action)
total_reward += reward
if done:
# the episode ends so no next state
next_state = np.zeros(state.shape)
t = max_steps
print('Episode: {}'.format(ep),
'Total reward: {}'.format(total_reward),
'Training loss: {:.4f}'.format(loss),
'Explore P: {:.4f}'.format(explore_p))
rewards_list.append((ep, total_reward))
# Add experience to memory
memory.add((state, action, reward, next_state))
# Start new episode
env.reset()
# Take one random step to get the pole and cart moving
state, reward, done, _ = env.step(env.action_space.sample())
else:
# Add experience to memory
memory.add((state, action, reward, next_state))
state = next_state
t += 1
# Sample mini-batch from memory
batch = memory.sample(batch_size)
states = np.array([each[0] for each in batch])
actions = np.array([each[1] for each in batch])
rewards = np.array([each[2] for each in batch])
next_states = np.array([each[3] for each in batch])
# Train network
target_Qs = sess.run(mainQN.output, feed_dict={mainQN.inputs_: next_states})
# Set target_Qs to 0 for states where episode ends
episode_ends = (next_states == np.zeros(states[0].shape)).all(axis=1)
target_Qs[episode_ends] = (0, 0)
targets = rewards + gamma * np.max(target_Qs, axis=1)
loss, _ = sess.run([mainQN.loss, mainQN.opt],
feed_dict={mainQN.inputs_: states,
mainQN.targetQs_: targets,
mainQN.actions_: actions})
saver.save(sess, "checkpoints/cartpole.ckpt")
"""
Explanation: Training
Below we'll train our agent. If you want to watch it train, uncomment the env.render() line. This is slow because it's rendering the frames slower than the network can train. But, it's cool to watch the agent get better at the game.
End of explanation
"""
%matplotlib inline
import matplotlib.pyplot as plt
def running_mean(x, N):
cumsum = np.cumsum(np.insert(x, 0, 0))
return (cumsum[N:] - cumsum[:-N]) / N
eps, rews = np.array(rewards_list).T
smoothed_rews = running_mean(rews, 10)
plt.plot(eps[-len(smoothed_rews):], smoothed_rews)
plt.plot(eps, rews, color='grey', alpha=0.3)
plt.xlabel('Episode')
plt.ylabel('Total Reward')
"""
Explanation: Visualizing training
Below I'll plot the total rewards for each episode. I'm plotting the rolling average too, in blue.
End of explanation
"""
test_episodes = 10
test_max_steps = 400
env.reset()
with tf.Session() as sess:
saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))
for ep in range(1, test_episodes):
t = 0
while t < test_max_steps:
env.render()
# Get action from Q-network
feed = {mainQN.inputs_: state.reshape((1, *state.shape))}
Qs = sess.run(mainQN.output, feed_dict=feed)
action = np.argmax(Qs)
# Take action, get new state and reward
next_state, reward, done, _ = env.step(action)
if done:
t = test_max_steps
env.reset()
# Take one random step to get the pole and cart moving
state, reward, done, _ = env.step(env.action_space.sample())
else:
state = next_state
t += 1
env.close()
"""
Explanation: Testing
Let's check out how our trained agent plays the game.
End of explanation
"""
|
wbinventor/openmc
|
examples/jupyter/mdgxs-part-ii.ipynb
|
mit
|
%matplotlib inline
import math
import matplotlib.pyplot as plt
import numpy as np
import openmc
import openmc.mgxs
"""
Explanation: This IPython Notebook illustrates the use of the openmc.mgxs.Library class. The Library class is designed to automate the calculation of multi-group cross sections for use cases with one or more domains, cross section types, and/or nuclides. In particular, this Notebook illustrates the following features:
Calculation of multi-energy-group and multi-delayed-group cross sections for a fuel assembly
Automated creation, manipulation and storage of MGXS with openmc.mgxs.Library
Steady-state pin-by-pin delayed neutron fractions (beta) for each delayed group.
Generation of surface currents on the interfaces and surfaces of a Mesh.
Generate Input Files
End of explanation
"""
# 1.6 enriched fuel
fuel = openmc.Material(name='1.6% Fuel')
fuel.set_density('g/cm3', 10.31341)
fuel.add_nuclide('U235', 3.7503e-4)
fuel.add_nuclide('U238', 2.2625e-2)
fuel.add_nuclide('O16', 4.6007e-2)
# borated water
water = openmc.Material(name='Borated Water')
water.set_density('g/cm3', 0.740582)
water.add_nuclide('H1', 4.9457e-2)
water.add_nuclide('O16', 2.4732e-2)
water.add_nuclide('B10', 8.0042e-6)
# zircaloy
zircaloy = openmc.Material(name='Zircaloy')
zircaloy.set_density('g/cm3', 6.55)
zircaloy.add_nuclide('Zr90', 7.2758e-3)
"""
Explanation: First we need to define materials that will be used in the problem: fuel, water, and cladding.
End of explanation
"""
# Create a materials collection and export to XML
materials = openmc.Materials((fuel, water, zircaloy))
materials.export_to_xml()
"""
Explanation: With our three materials, we can now create a Materials object that can be exported to an actual XML file.
End of explanation
"""
# Create cylinders for the fuel and clad
fuel_outer_radius = openmc.ZCylinder(R=0.39218)
clad_outer_radius = openmc.ZCylinder(R=0.45720)
# Create boundary planes to surround the geometry
min_x = openmc.XPlane(x0=-10.71, boundary_type='reflective')
max_x = openmc.XPlane(x0=+10.71, boundary_type='reflective')
min_y = openmc.YPlane(y0=-10.71, boundary_type='reflective')
max_y = openmc.YPlane(y0=+10.71, boundary_type='reflective')
min_z = openmc.ZPlane(z0=-10., boundary_type='reflective')
max_z = openmc.ZPlane(z0=+10., boundary_type='reflective')
"""
Explanation: Now let's move on to the geometry. This problem will be a square array of fuel pins and control rod guide tubes for which we can use OpenMC's lattice/universe feature. The basic universe will have three regions for the fuel, the clad, and the surrounding coolant. The first step is to create the bounding surfaces for fuel and clad, as well as the outer bounding surfaces of the problem.
End of explanation
"""
# Create a Universe to encapsulate a fuel pin
fuel_pin_universe = openmc.Universe(name='1.6% Fuel Pin')
# Create fuel Cell
fuel_cell = openmc.Cell(name='1.6% Fuel')
fuel_cell.fill = fuel
fuel_cell.region = -fuel_outer_radius
fuel_pin_universe.add_cell(fuel_cell)
# Create a clad Cell
clad_cell = openmc.Cell(name='1.6% Clad')
clad_cell.fill = zircaloy
clad_cell.region = +fuel_outer_radius & -clad_outer_radius
fuel_pin_universe.add_cell(clad_cell)
# Create a moderator Cell
moderator_cell = openmc.Cell(name='1.6% Moderator')
moderator_cell.fill = water
moderator_cell.region = +clad_outer_radius
fuel_pin_universe.add_cell(moderator_cell)
"""
Explanation: With the surfaces defined, we can now construct a fuel pin cell from cells that are defined by intersections of half-spaces created by the surfaces.
End of explanation
"""
# Create a Universe to encapsulate a control rod guide tube
guide_tube_universe = openmc.Universe(name='Guide Tube')
# Create guide tube Cell
guide_tube_cell = openmc.Cell(name='Guide Tube Water')
guide_tube_cell.fill = water
guide_tube_cell.region = -fuel_outer_radius
guide_tube_universe.add_cell(guide_tube_cell)
# Create a clad Cell
clad_cell = openmc.Cell(name='Guide Clad')
clad_cell.fill = zircaloy
clad_cell.region = +fuel_outer_radius & -clad_outer_radius
guide_tube_universe.add_cell(clad_cell)
# Create a moderator Cell
moderator_cell = openmc.Cell(name='Guide Tube Moderator')
moderator_cell.fill = water
moderator_cell.region = +clad_outer_radius
guide_tube_universe.add_cell(moderator_cell)
"""
Explanation: Likewise, we can construct a control rod guide tube with the same surfaces.
End of explanation
"""
# Create fuel assembly Lattice
assembly = openmc.RectLattice(name='1.6% Fuel Assembly')
assembly.pitch = (1.26, 1.26)
assembly.lower_left = [-1.26 * 17. / 2.0] * 2
"""
Explanation: Using the pin cell universe, we can construct a 17x17 rectangular lattice with a 1.26 cm pitch.
End of explanation
"""
# Create array indices for guide tube locations in lattice
template_x = np.array([5, 8, 11, 3, 13, 2, 5, 8, 11, 14, 2, 5, 8,
11, 14, 2, 5, 8, 11, 14, 3, 13, 5, 8, 11])
template_y = np.array([2, 2, 2, 3, 3, 5, 5, 5, 5, 5, 8, 8, 8, 8,
8, 11, 11, 11, 11, 11, 13, 13, 14, 14, 14])
# Create universes array with the fuel pin and guide tube universes
universes = np.tile(fuel_pin_universe, (17,17))
universes[template_x, template_y] = guide_tube_universe
# Store the array of universes in the lattice
assembly.universes = universes
"""
Explanation: Next, we create a NumPy array of fuel pin and guide tube universes for the lattice.
End of explanation
"""
# Create root Cell
root_cell = openmc.Cell(name='root cell', fill=assembly)
# Add boundary planes
root_cell.region = +min_x & -max_x & +min_y & -max_y & +min_z & -max_z
# Create root Universe
root_universe = openmc.Universe(universe_id=0, name='root universe')
root_universe.add_cell(root_cell)
"""
Explanation: OpenMC requires that there is a "root" universe. Let us create a root cell that is filled by the assembly lattice and then assign it to the root universe.
End of explanation
"""
# Create Geometry and export to XML
geometry = openmc.Geometry(root_universe)
geometry.export_to_xml()
"""
Explanation: We now must create a geometry that is assigned a root universe and export it to XML.
End of explanation
"""
# OpenMC simulation parameters
batches = 50
inactive = 10
particles = 2500
# Instantiate a Settings object
settings = openmc.Settings()
settings.batches = batches
settings.inactive = inactive
settings.particles = particles
settings.output = {'tallies': False}
# Create an initial uniform spatial source distribution over fissionable zones
bounds = [-10.71, -10.71, -10, 10.71, 10.71, 10.]
uniform_dist = openmc.stats.Box(bounds[:3], bounds[3:], only_fissionable=True)
settings.source = openmc.Source(space=uniform_dist)
# Export to "settings.xml"
settings.export_to_xml()
"""
Explanation: With the geometry and materials finished, we now just need to define simulation parameters. In this case, we will use 10 inactive batches and 40 active batches each with 2500 particles.
End of explanation
"""
# Plot our geometry
plot = openmc.Plot.from_geometry(geometry)
plot.pixels = (250, 250)
plot.color_by = 'material'
openmc.plot_inline(plot)
"""
Explanation: Let us also create a plot to verify that our fuel assembly geometry was created successfully.
End of explanation
"""
# Instantiate a 20-group EnergyGroups object
energy_groups = openmc.mgxs.EnergyGroups()
energy_groups.group_edges = np.logspace(-3, 7.3, 21)
# Instantiate a 1-group EnergyGroups object
one_group = openmc.mgxs.EnergyGroups()
one_group.group_edges = np.array([energy_groups.group_edges[0], energy_groups.group_edges[-1]])
"""
Explanation: As we can see from the plot, we have a nice array of fuel and guide tube pin cells with fuel, cladding, and water!
Create an MGXS Library
Now we are ready to generate multi-group cross sections! First, let's define a 20-energy-group structure and a 1-energy-group structure.
End of explanation
"""
# Instantiate a tally mesh
mesh = openmc.Mesh(mesh_id=1)
mesh.type = 'regular'
mesh.dimension = [17, 17, 1]
mesh.lower_left = [-10.71, -10.71, -10000.]
mesh.width = [1.26, 1.26, 20000.]
# Initialize an 20-energy-group and 6-delayed-group MGXS Library
mgxs_lib = openmc.mgxs.Library(geometry)
mgxs_lib.energy_groups = energy_groups
mgxs_lib.num_delayed_groups = 6
# Specify multi-group cross section types to compute
mgxs_lib.mgxs_types = ['total', 'transport', 'nu-scatter matrix', 'kappa-fission', 'inverse-velocity', 'chi-prompt',
'prompt-nu-fission', 'chi-delayed', 'delayed-nu-fission', 'beta']
# Specify a "mesh" domain type for the cross section tally filters
mgxs_lib.domain_type = 'mesh'
# Specify the mesh domain over which to compute multi-group cross sections
mgxs_lib.domains = [mesh]
# Construct all tallies needed for the multi-group cross section library
mgxs_lib.build_library()
# Create a "tallies.xml" file for the MGXS Library
tallies_file = openmc.Tallies()
mgxs_lib.add_to_tallies_file(tallies_file, merge=True)
# Instantiate a current tally
mesh_filter = openmc.MeshFilter(mesh)
current_tally = openmc.Tally(name='current tally')
current_tally.scores = ['current']
current_tally.filters = [mesh_filter]
# Add current tally to the tallies file
tallies_file.append(current_tally)
# Export to "tallies.xml"
tallies_file.export_to_xml()
"""
Explanation: Next, we will instantiate an openmc.mgxs.Library for the energy and delayed groups on our fuel assembly geometry.
End of explanation
"""
# Run OpenMC
openmc.run()
"""
Explanation: Now, we can run OpenMC to generate the cross sections.
End of explanation
"""
# Load the last statepoint file
sp = openmc.StatePoint('statepoint.50.h5')
"""
Explanation: Tally Data Processing
Our simulation ran successfully and created statepoint and summary output files. We begin our analysis by instantiating a StatePoint object.
End of explanation
"""
# Initialize MGXS Library with OpenMC statepoint data
mgxs_lib.load_from_statepoint(sp)
# Extract the current tally separately
current_tally = sp.get_tally(name='current tally')
"""
Explanation: The statepoint is now ready to be analyzed by the Library. We simply have to load the tallies from the statepoint into the Library and our MGXS objects will compute the cross sections for us under-the-hood.
End of explanation
"""
# Set the delayed precursor half-lives (in seconds) and compute the decay constants (in s^-1)
precursor_halflife = np.array([55.6, 24.5, 16.3, 2.37, 0.424, 0.195])
precursor_lambda = math.log(2.0) / precursor_halflife
beta = mgxs_lib.get_mgxs(mesh, 'beta')
# Create a tally object with only the delayed group filter for the time constants
beta_filters = [f for f in beta.xs_tally.filters if type(f) is not openmc.DelayedGroupFilter]
lambda_tally = beta.xs_tally.summation(nuclides=beta.xs_tally.nuclides)
for f in beta_filters:
lambda_tally = lambda_tally.summation(filter_type=type(f), remove_filter=True) * 0. + 1.
# Set the mean of the lambda tally and reshape to account for nuclides and scores
lambda_tally._mean = precursor_lambda
lambda_tally._mean.shape = lambda_tally.std_dev.shape
# Set a total nuclide and lambda score
lambda_tally.nuclides = [openmc.Nuclide(name='total')]
lambda_tally.scores = ['lambda']
delayed_nu_fission = mgxs_lib.get_mgxs(mesh, 'delayed-nu-fission')
# Use tally arithmetic to compute the precursor concentrations
precursor_conc = beta.xs_tally.summation(filter_type=openmc.EnergyFilter, remove_filter=True) * \
delayed_nu_fission.xs_tally.summation(filter_type=openmc.EnergyFilter, remove_filter=True) / lambda_tally
# The difference is a derived tally which can generate Pandas DataFrames for inspection
precursor_conc.get_pandas_dataframe().head(10)
"""
Explanation: Using Tally Arithmetic to Compute the Delayed Neutron Precursor Concentrations
Finally, we illustrate how one can leverage OpenMC's tally arithmetic data processing feature with MGXS objects. The openmc.mgxs module uses tally arithmetic to compute multi-group cross sections with automated uncertainty propagation. Each MGXS object includes an xs_tally attribute which is a "derived" Tally based on the tallies needed to compute the cross section type of interest. These derived tallies can be used in subsequent tally arithmetic operations. For example, we can use tally arithmetic to compute the delayed neutron precursor concentrations using the Beta and DelayedNuFissionXS objects. The delayed neutron precursor concentrations are modeled using the following equations:
$$\frac{\partial}{\partial t} C_{k,d} (t) = \int_{0}^{\infty}\mathrm{d}E'\int_{\mathbf{r} \in V_{k}}\mathrm{d}\mathbf{r} \beta_{k,d} (t) \nu_d \sigma_{f,x}(\mathbf{r},E',t)\Phi(\mathbf{r},E',t) - \lambda_{d} C_{k,d} (t) $$
$$C_{k,d} (t=0) = \frac{1}{\lambda_{d}} \int_{0}^{\infty}\mathrm{d}E'\int_{\mathbf{r} \in V_{k}}\mathrm{d}\mathbf{r} \beta_{k,d} (t=0) \nu_d \sigma_{f,x}(\mathbf{r},E',t=0)\Phi(\mathbf{r},E',t=0) $$
End of explanation
"""
current_tally.get_pandas_dataframe().head(10)
"""
Explanation: Another useful feature of the Python API is the ability to extract the surface currents for the interfaces and surfaces of a mesh. We can inspect the currents for the mesh by getting the pandas dataframe.
End of explanation
"""
# Extract the energy-condensed delayed neutron fraction tally
beta_by_group = beta.get_condensed_xs(one_group).xs_tally.summation(filter_type='energy', remove_filter=True)
beta_by_group.mean.shape = (17, 17, 6)
beta_by_group.mean[beta_by_group.mean == 0] = np.nan
# Plot the betas
plt.figure(figsize=(18,9))
fig = plt.subplot(231)
plt.imshow(beta_by_group.mean[:,:,0], interpolation='none', cmap='jet')
plt.colorbar()
plt.title('Beta - delayed group 1')
fig = plt.subplot(232)
plt.imshow(beta_by_group.mean[:,:,1], interpolation='none', cmap='jet')
plt.colorbar()
plt.title('Beta - delayed group 2')
fig = plt.subplot(233)
plt.imshow(beta_by_group.mean[:,:,2], interpolation='none', cmap='jet')
plt.colorbar()
plt.title('Beta - delayed group 3')
fig = plt.subplot(234)
plt.imshow(beta_by_group.mean[:,:,3], interpolation='none', cmap='jet')
plt.colorbar()
plt.title('Beta - delayed group 4')
fig = plt.subplot(235)
plt.imshow(beta_by_group.mean[:,:,4], interpolation='none', cmap='jet')
plt.colorbar()
plt.title('Beta - delayed group 5')
fig = plt.subplot(236)
plt.imshow(beta_by_group.mean[:,:,5], interpolation='none', cmap='jet')
plt.colorbar()
plt.title('Beta - delayed group 6')
"""
Explanation: Cross Section Visualizations
In addition to inspecting the data in the tallies by getting the pandas dataframe, we can also plot the tally data on the domain mesh. Below is the delayed neutron fraction tallied in each mesh cell for each delayed group.
End of explanation
"""
|
psas/liquid-engine-analysis
|
Simulation_and_Optimization/Launcher_Settings_Optimization.ipynb
|
gpl-3.0
|
# all of our comparisons are ratios instead of subtractions because
# it's normalized, instead of dependent on magnitudes of variables and constraints
def objective_additive(var, cons):
return np.linalg.norm(var - cons)**2 / 2
# minimize this, **2 makes it well behaved w.r.t. when var=cons
def objective(var, cons):
return (var/cons)**2 / 2
# **2 because i like it more than abs(), but that also works
def exact(var, cons):
return (var/cons - 1)**2 / 2
# this is your basic exterior penalty, either punishes for unfeasibility or is inactive
def exterior(var, cons, good_if_less_than=False):
if good_if_less_than:
return max(0, var/cons - 1)**2 / 2
else:
return max(0, -(var/cons - 1))**2 / 2
# this barrier function restricts our objective function to the strictly feasible region
# make rockets great again, build that wall, etc, watch out for undefined operations
def barrier(var, cons, int_point=False, good_if_less_than=True):
global dbz
try: # just in case we accidentally leave feasible region
if not int_point:
if good_if_less_than:
return -log(-(var/cons - 1))
else:
return -log(var/cons - 1)
elif int_point:
def interior(g): return 1/g # in case we don't like logarithms, which is a mistake
if good_if_less_than:
return -interior(var/cons - 1)
else:
return -interior(-(var/cons - 1))
except:
return float('inf') # ordinarily, this is bad practice since it could confuse the optimizer
# however, since this is a barrier function not an ordinary penalty, i think it's fine
"""
Explanation: The purpose of this notebook is to determine the optimal launch azimuth and elevation angles, in order to maximize the likelihood of landing in a particular region. Eventually, this should be expanded to enable input of day-of wind measurements so that we can determine these settings on the day of the launch. We may need to optimize the trajectory simulation code for this purpose, perhaps with fewer degrees of freedom, since it is currently fairly slow.
This program has essentially the same structure as the MDO and differs only in how trajectory performance is evaluated and which parameters are held constant or optimized over. Examination of the differences should enable one to make similar programs for different but related purposes.
Functions of Merit
We chose to abstract all of the functions used within the merit function for increased flexibility, ease of reading, and later utility. We use convex functions for our optimization, but we don't use much of the theory of convex optimization.
objective_additive is arbitrarily constructed. It is
appropriate for vector-valued measurements,
not normalized,
squared to reward (or punish) relative to the distance from nominal value, and
divided by two for aesthetic reasons.
objective is arbitrarily constructed. It is
normalized (by a somewhat arbitrary constant) to bring it into the same range as constraints,
squared to reward (or punish) relative to the distance from nominal value, and
divided by two so that our nominal value is 0.5 instead of 1.0.
exact is less arbitrarily constructed. It is
squared to be horizontally symmetric (which could also be obtained by absolute value),
determined by the distance from a constant, and
divided by two is for aesthetic value.
exterior is not particularly arbitrary. It is
boolean so that we can specify whether it is minimizing or maximizing its variable,
0 when the inequality is satisfied, otherwise it is just as punishing as exact.
barrier comes in two flavors, one of which is not used here. It is
boolean so that we can specify whether it is a lower or an upper bound,
completely inviolable, unlike exact and exterior penalties.
Technically logarithmic barrier functions allow negative penalties (i.e. rewards), but since we use upper and lower altitude barriers, it is impossible that their sum be less than 0. If the optimizer steps outside of the apogee window, the barrier functions can attempt undefined operations (specifically, taking the logarithm of a negative number), so some error handling is required to return an infinite value in those cases. Provided that the initial design is within the feasible region, the optimizer will not become disoriented by infinite values.
End of explanation
"""
# this manages all our constraints
# penalty parameters: mu -> 0 and rho -> infinity
def penalty(alt, mu, rho):
b = [#barrier(alt, CONS_ALT, int_point=False, good_if_less_than=False)
]
eq = []
ext = [exterior(alt, CONS_ALT-3000, good_if_less_than=False)
]
return mu*sum(b) + rho*(sum(eq) + sum(ext))
# Pseudo-objective merit function
# x is the array of design parameters; nominal is the target impact point (or None when maximizing distance from the launch site)
def cost(x, nominal):
global allvectors, allobjfun
# get trajectory data
sim = trajectory(M_PROP, MDOT, P_E,
THROTTLE_WINDOW, MIN_THROTTLE,
RCS_MDOT, RCS_P_E, RCS_P_CH,
BALLAST, FIN_ROOT, FIN_TIP, FIN_SWEEP_ANGLE, FIN_SEMISPAN, FIN_THICKNESS, CON_NOSE_L,
LOX_TANK_P, IPA_TANK_P, RIB_T, NUM_RADL_DVSNS,
AIRFRM_IN_RAD, IPA_WT, OF, ENG_P_CH, ENG_T_CH, ENG_KE, ENG_MM,
[0, 0, x[0], x[1], False, 0, 0, 0, 0, 0, 0, True],
0.025, True, 0.005, True, True)
# either minimize the distance from nominal impact point
if TARGET:
#nominal = np.array([nominal, kludge])
obj_func = objective_additive(sim.impact, nominal)
# or maximize distance from launch point
else:
obj_func = - objective_additive(sim.impact, sim.env.launch_pt)
pen_func = penalty(sim.LV4.apogee, MU_0 / (2**1), RHO_0 * (2**1))
# add objective and penalty functions
merit_func = obj_func + pen_func
allvectors.append(x) # maintains a list of every design, side effect
allobjfun.append(merit_func)
#print("vec:", x,'\t', "impact:", sim.impact, '\t', "alt:", sim.LV4.apogee)
return merit_func
# we want to iterate our optimizer for theoretical "convergence" reasons (given some assumptions)
# n = number of sequential iterations
def iterate(func, x_0, n, nominal):
x = x_0
designs = []
for i in range(n):
print("Iteration " + str(i+1) + ":")
# this minimizer uses simplex method
        res = minimize(func, x, args=(nominal,), method='nelder-mead', options={'disp': True})
x = res.x # feed optimal design vec into next iteration
designs.append(res.x) # we want to compare sequential objectives
return x
# this is for experimenting with stochastic optimization, which takes much longer but may yield more global results.
def breed_rockets(func, nominal):
    res = differential_evolution(func=func, bounds=[(0, 360), (-10, 1)], args=(nominal,),
strategy='best1bin', popsize=80, mutation=(.1, .8), recombination=.05,
updating='immediate', disp=True, atol=0.05, tol=0.05,
polish=True,workers=-1)
return res.x
def rbf_optimizer(func, nominal):
bb = rbfopt.RbfoptUserBlackBox(2,
np.array([0, -5]),
np.array([360, 1]),
np.array(['R']*2), lambda x: func(x, nominal))
settings = rbfopt.RbfoptSettings(minlp_solver_path='/home/cory/Downloads/Bonmin-1.8.8/build/bin/bonmin',
nlp_solver_path='/home/cory/Downloads/Bonmin-1.8.8/build/bin/ipopt',
max_evaluations=150, eps_impr=1.0e-7)
alg = rbfopt.RbfoptAlgorithm(settings, bb)
val, x, itercount, evalcount, fast_evalcount = alg.optimize()
return x
"""
Explanation: Optimization Problem
Given a design vector $x$ and the nominal target point, our merit function cost runs a trajectory simulation and evaluates the quality of that rocket. We keep track of each design and its merit value for later visualization, hence the global variables.
We run an iterative sequence of optimization routines with a decreasing barrier function and increasing penalty functions so that the optimization can range over a larger portion of the design space away from its boundaries before settling into local minima closer to the boundary.
End of explanation
"""
# this either maximizes distance from launch site or minimizes distance from nominal impact point
if __name__ == '__main__':
if TARGET:
target_pt = np.array([32.918255, -106.349477])
else:
target_pt = None
if SIMPLEX:
x0 = np.array([AZ_PERTURB, EL_PERTURB])
# feed initial design into iterative optimizer, get most (locally) feasible design
x = iterate(cost, x0, 1, nominal = target_pt)
else:
# probe design space, darwin style. if design space has more than 3 dimensions, you might need this. takes forever.
x = rbf_optimizer(cost, nominal = target_pt)
print("Optimization done!")
if __name__ == '__main__':
# Rename the optimized output for convenience
az_perturb = x[0]
el_perturb = x[1]
# get trajectory info from optimal design
sim = trajectory(M_PROP, MDOT, P_E,
THROTTLE_WINDOW, MIN_THROTTLE,
RCS_MDOT, RCS_P_E, RCS_P_CH,
BALLAST, FIN_ROOT, FIN_TIP, FIN_SWEEP_ANGLE, FIN_SEMISPAN, FIN_THICKNESS, CON_NOSE_L,
LOX_TANK_P, IPA_TANK_P, RIB_T, NUM_RADL_DVSNS,
AIRFRM_IN_RAD, IPA_WT, OF, ENG_P_CH, ENG_T_CH, ENG_KE, ENG_MM,
[0, 0, az_perturb, el_perturb, False, 0, 0, 0, 0, 0, 0, True],
0.025, False, 0.005, True, False)
print("Azimuth Perturbation:", az_perturb)
print("Elevation Perturbation:", el_perturb)
print("Launch point", sim.env.launch_pt)
print("Impact point", sim.impact)
print()
textlist = print_results(sim, False)
# draw pretty pictures of optimized trajectory
rocket_plot(sim.t, sim.alt, sim.v, sim.a, sim.thrust,
sim.dyn_press, sim.Ma, sim.m, sim.p_a, sim.drag, sim.throttle, sim.fin_flutter, sim, False, None, None)
# get/print info about our trajectory and rocket
for line in textlist:
print(line)
# draw more pretty pictures, but of the optimizer guts
design_grapher(allvectors)
"""
Explanation: Top-Level of Optimization Routine
Here's where the magic happens. This code block runs the iterative optimization and provides details from our optimized trajectory.
End of explanation
"""
|
eusebioaguilera/scalablemachinelearning
|
Lab02/ML_lab2_word_count_student.ipynb
|
gpl-3.0
|
labVersion = 'cs190_week2_word_count_v_1_0'
"""
Explanation: Word Count Lab: Building a word count application
This lab will build on the techniques covered in the Spark tutorial to develop a simple word count application. The volume of unstructured text in existence is growing dramatically, and Spark is an excellent tool for analyzing this type of data. In this lab, we will write code that calculates the most common words in the Complete Works of William Shakespeare retrieved from Project Gutenberg. This could also be scaled to find the most common words on the Internet.
During this lab we will cover:
Part 1: Creating a base RDD and pair RDDs
Part 2: Counting with pair RDDs
Part 3: Finding unique words and a mean value
Part 4: Apply word count to a file
Note that, for reference, you can look up the details of the relevant methods in Spark's Python API
End of explanation
"""
wordsList = ['cat', 'elephant', 'rat', 'rat', 'cat']
wordsRDD = sc.parallelize(wordsList, 4)
# Print out the type of wordsRDD
print type(wordsRDD)
"""
Explanation: Part 1: Creating a base RDD and pair RDDs
In this part of the lab, we will explore creating a base RDD with parallelize and using pair RDDs to count words.
(1a) Create a base RDD
We'll start by generating a base RDD by using a Python list and the sc.parallelize method. Then we'll print out the type of the base RDD.
End of explanation
"""
# TODO: Replace <FILL IN> with appropriate code
def makePlural(word):
"""Adds an 's' to `word`.
Note:
This is a simple function that only adds an 's'. No attempt is made to follow proper
pluralization rules.
Args:
word (str): A string.
Returns:
str: A string with 's' added to it.
"""
return word+'s'
print makePlural('cat')
# One way of completing the function
def makePlural(word):
return word + 's'
print makePlural('cat')
# Load in the testing code and check to see if your answer is correct
# If incorrect it will report back '1 test failed' for each failed test
# Make sure to rerun any cell you change before trying the test again
from test_helper import Test
# TEST Pluralize and test (1b)
Test.assertEquals(makePlural('rat'), 'rats', 'incorrect result: makePlural does not add an s')
"""
Explanation: (1b) Pluralize and test
Let's use a map() transformation to add the letter 's' to each string in the base RDD we just created. We'll define a Python function that returns the word with an 's' at the end of the word. Please replace <FILL IN> with your solution. If you have trouble, the next cell has the solution. After you have defined makePlural you can run the third cell which contains a test. If your implementation is correct it will print 1 test passed.
This is the general form that exercises will take, except that no example solution will be provided. Exercises will include an explanation of what is expected, followed by code cells where one cell will have one or more <FILL IN> sections. The cell that needs to be modified will have # TODO: Replace <FILL IN> with appropriate code on its first line. Once the <FILL IN> sections are updated and the code is run, the test cell can then be run to verify the correctness of your solution. The last code cell before the next markdown section will contain the tests.
End of explanation
"""
# TODO: Replace <FILL IN> with appropriate code
pluralRDD = wordsRDD.map(makePlural)
print pluralRDD.collect()
# TEST Apply makePlural to the base RDD(1c)
Test.assertEquals(pluralRDD.collect(), ['cats', 'elephants', 'rats', 'rats', 'cats'],
'incorrect values for pluralRDD')
"""
Explanation: (1c) Apply makePlural to the base RDD
Now pass each item in the base RDD into a map() transformation that applies the makePlural() function to each element. And then call the collect() action to see the transformed RDD.
End of explanation
"""
# TODO: Replace <FILL IN> with appropriate code
pluralLambdaRDD = wordsRDD.map(lambda x: x + 's')
print pluralLambdaRDD.collect()
# TEST Pass a lambda function to map (1d)
Test.assertEquals(pluralLambdaRDD.collect(), ['cats', 'elephants', 'rats', 'rats', 'cats'],
'incorrect values for pluralLambdaRDD (1d)')
"""
Explanation: (1d) Pass a lambda function to map
Let's create the same RDD using a lambda function.
End of explanation
"""
# TODO: Replace <FILL IN> with appropriate code
pluralLengths = (pluralRDD
.map(lambda x : len(x))
.collect())
print pluralLengths
# TEST Length of each word (1e)
Test.assertEquals(pluralLengths, [4, 9, 4, 4, 4],
'incorrect values for pluralLengths')
"""
Explanation: (1e) Length of each word
Now use map() and a lambda function to return the number of characters in each word. We'll collect this result directly into a variable.
End of explanation
"""
# TODO: Replace <FILL IN> with appropriate code
wordPairs = wordsRDD.map(lambda x : (x, 1))
print wordPairs.collect()
# TEST Pair RDDs (1f)
Test.assertEquals(wordPairs.collect(),
[('cat', 1), ('elephant', 1), ('rat', 1), ('rat', 1), ('cat', 1)],
'incorrect value for wordPairs')
"""
Explanation: (1f) Pair RDDs
The next step in writing our word counting program is to create a new type of RDD, called a pair RDD. A pair RDD is an RDD where each element is a pair tuple (k, v) where k is the key and v is the value. In this example, we will create a pair consisting of ('<word>', 1) for each word element in the RDD.
We can create the pair RDD using the map() transformation with a lambda() function to create a new RDD.
End of explanation
"""
# TODO: Replace <FILL IN> with appropriate code
# Note that groupByKey requires no parameters
wordsGrouped = wordPairs.groupByKey()
for key, value in wordsGrouped.collect():
print '{0}: {1}'.format(key, list(value))
# TEST groupByKey() approach (2a)
Test.assertEquals(sorted(wordsGrouped.mapValues(lambda x: list(x)).collect()),
[('cat', [1, 1]), ('elephant', [1]), ('rat', [1, 1])],
'incorrect value for wordsGrouped')
"""
Explanation: Part 2: Counting with pair RDDs
Now, let's count the number of times a particular word appears in the RDD. There are multiple ways to perform the counting, but some are much less efficient than others.
A naive approach would be to collect() all of the elements and count them in the driver program. While this approach could work for small datasets, we want an approach that will work for any size dataset including terabyte- or petabyte-sized datasets. In addition, performing all of the work in the driver program is slower than performing it in parallel in the workers. For these reasons, we will use data parallel operations.
(2a) groupByKey() approach
An approach you might first consider (we'll see shortly that there are better ways) is based on using the groupByKey() transformation. As the name implies, the groupByKey() transformation groups all the elements of the RDD with the same key into a single list in one of the partitions. There are two problems with using groupByKey():
The operation requires a lot of data movement to move all the values into the appropriate partitions.
The lists can be very large. Consider a word count of English Wikipedia: the lists for common words (e.g., the, a, etc.) would be huge and could exhaust the available memory in a worker.
Use groupByKey() to generate a pair RDD of type ('word', iterator).
End of explanation
"""
# TODO: Replace <FILL IN> with appropriate code
wordCountsGrouped = wordsGrouped.map(lambda (k, v) : (k, sum(v)))
print wordCountsGrouped.collect()
# TEST Use groupByKey() to obtain the counts (2b)
Test.assertEquals(sorted(wordCountsGrouped.collect()),
[('cat', 2), ('elephant', 1), ('rat', 2)],
'incorrect value for wordCountsGrouped')
"""
Explanation: (2b) Use groupByKey() to obtain the counts
Using the groupByKey() transformation creates an RDD containing 3 elements, each of which is a pair of a word and a Python iterator.
Now sum the iterator using a map() transformation. The result should be a pair RDD consisting of (word, count) pairs.
End of explanation
"""
# TODO: Replace <FILL IN> with appropriate code
# Note that reduceByKey takes in a function that accepts two values and returns a single value
wordCounts = wordPairs.reduceByKey(lambda x, y : x+y)
print wordCounts.collect()
# TEST Counting using reduceByKey (2c)
Test.assertEquals(sorted(wordCounts.collect()), [('cat', 2), ('elephant', 1), ('rat', 2)],
'incorrect value for wordCounts')
"""
Explanation: (2c) Counting using reduceByKey
A better approach is to start from the pair RDD and then use the reduceByKey() transformation to create a new pair RDD. The reduceByKey() transformation gathers together pairs that have the same key and applies the function provided to two values at a time, iteratively reducing all of the values to a single value. reduceByKey() operates by applying the function first within each partition on a per-key basis and then across the partitions, allowing it to scale efficiently to large datasets.
End of explanation
"""
# TODO: Replace <FILL IN> with appropriate code
wordCountsCollected = (wordsRDD
.map(lambda x : (x, 1))
.reduceByKey(lambda x, y: x + y)
.collect())
print wordCountsCollected
# TEST All together (2d)
Test.assertEquals(sorted(wordCountsCollected), [('cat', 2), ('elephant', 1), ('rat', 2)],
'incorrect value for wordCountsCollected')
"""
Explanation: (2d) All together
The expert version of the code performs the map() to pair RDD, reduceByKey() transformation, and collect in one statement.
End of explanation
"""
# TODO: Replace <FILL IN> with appropriate code
uniqueWords = len(wordCountsCollected)
print uniqueWords
# TEST Unique words (3a)
Test.assertEquals(uniqueWords, 3, 'incorrect count of uniqueWords')
"""
Explanation: Part 3: Finding unique words and a mean value
(3a) Unique words
Calculate the number of unique words in wordsRDD. You can use other RDDs that you have already created to make this easier.
End of explanation
"""
# TODO: Replace <FILL IN> with appropriate code
from operator import add
totalCount = (wordCounts
.map(lambda (k, v) : v)
.reduce(add))
average = totalCount / float(len(wordCounts.collect()))
print totalCount
print round(average, 2)
# TEST Mean using reduce (3b)
Test.assertEquals(round(average, 2), 1.67, 'incorrect value of average')
"""
Explanation: (3b) Mean using reduce
Find the mean number of words per unique word in wordCounts.
Use a reduce() action to sum the counts in wordCounts and then divide by the number of unique words. First map() the pair RDD wordCounts, which consists of (key, value) pairs, to an RDD of values.
End of explanation
"""
# TODO: Replace <FILL IN> with appropriate code
def wordCount(wordListRDD):
"""Creates a pair RDD with word counts from an RDD of words.
Args:
wordListRDD (RDD of str): An RDD consisting of words.
Returns:
RDD of (str, int): An RDD consisting of (word, count) tuples.
"""
return wordListRDD.map(lambda x : (x, 1)).reduceByKey(lambda x, y: x + y)
print wordCount(wordsRDD).collect()
# TEST wordCount function (4a)
Test.assertEquals(sorted(wordCount(wordsRDD).collect()),
[('cat', 2), ('elephant', 1), ('rat', 2)],
'incorrect definition for wordCount function')
"""
Explanation: Part 4: Apply word count to a file
In this section we will finish developing our word count application. We'll have to build the wordCount function, deal with real world problems like capitalization and punctuation, load in our data source, and compute the word count on the new data.
(4a) wordCount function
First, define a function for word counting. You should reuse the techniques that have been covered in earlier parts of this lab. This function should take in an RDD that is a list of words like wordsRDD and return a pair RDD that has all of the words and their associated counts.
End of explanation
"""
# TODO: Replace <FILL IN> with appropriate code
import re
def removePunctuation(text):
"""Removes punctuation, changes to lower case, and strips leading and trailing spaces.
Note:
        Only spaces, letters, and numbers should be retained. Other characters should be
eliminated (e.g. it's becomes its). Leading and trailing spaces should be removed after
punctuation is removed.
Args:
text (str): A string.
Returns:
str: The cleaned up string.
"""
mystr = re.sub(r'[^a-zA-Z0-9 ]*', '', text)
mystr = mystr.lower().strip()
return mystr
print removePunctuation('Hi, you!')
print removePunctuation(' No under_score!')
print removePunctuation(' * Remove punctuation then spaces * ')
# TEST Capitalization and punctuation (4b)
Test.assertEquals(removePunctuation(" The Elephant's 4 cats. "),
'the elephants 4 cats',
'incorrect definition for removePunctuation function')
"""
Explanation: (4b) Capitalization and punctuation
Real world files are more complicated than the data we have been using in this lab. Some of the issues we have to address are:
Words should be counted independent of their capitialization (e.g., Spark and spark should be counted as the same word).
All punctuation should be removed.
Any leading or trailing spaces on a line should be removed.
Define the function removePunctuation that converts all text to lower case, removes any punctuation, and removes leading and trailing spaces. Use the Python re module to remove any text that is not a letter, number, or space. Reading help(re.sub) might be useful.
If you are unfamiliar with regular expressions, you may want to review this tutorial from Google. Also, this website is a great resource for debugging your regular expression.
End of explanation
"""
# Just run this code
import os.path
baseDir = os.path.join('data')
inputPath = os.path.join('cs100', 'lab1', 'shakespeare.txt')
fileName = os.path.join(baseDir, inputPath)
shakespeareRDD = (sc
.textFile(fileName, 8)
.map(removePunctuation))
print '\n'.join(shakespeareRDD
.zipWithIndex() # to (line, lineNum)
.map(lambda (l, num): '{0}: {1}'.format(num, l)) # to 'lineNum: line'
.take(15))
"""
Explanation: (4c) Load a text file
For the next part of this lab, we will use the Complete Works of William Shakespeare from Project Gutenberg. To convert a text file into an RDD, we use the SparkContext.textFile() method. We also apply the recently defined removePunctuation() function using a map() transformation to strip out the punctuation and change all text to lowercase. Since the file is large we use take(15), so that we only print 15 lines.
End of explanation
"""
# TODO: Replace <FILL IN> with appropriate code
shakespeareWordsRDD = shakespeareRDD.flatMap(lambda x : x.split(" "))
shakespeareWordCount = shakespeareWordsRDD.count()
print shakespeareWordsRDD.top(5)
print shakespeareWordCount
# TEST Words from lines (4d)
# This test allows for leading spaces to be removed either before or after
# punctuation is removed.
Test.assertTrue(shakespeareWordCount == 927631 or shakespeareWordCount == 928908,
'incorrect value for shakespeareWordCount')
Test.assertEquals(shakespeareWordsRDD.top(5),
[u'zwaggerd', u'zounds', u'zounds', u'zounds', u'zounds'],
'incorrect value for shakespeareWordsRDD')
"""
Explanation: (4d) Words from lines
Before we can use the wordCount() function, we have to address two issues with the format of the RDD:
The first issue is that we need to split each line by its spaces. Performed in (4d).
The second issue is that we need to filter out the empty elements left behind. Performed in (4e).
Apply a transformation that will split each element of the RDD by its spaces. For each element of the RDD, you should apply Python's string split() function. You might think that a map() transformation is the way to do this, but think about what the result of the split() function will be. Note that you should not use the default implemenation of split(), but should instead pass in a separator value. For example, to split line by commas you would use line.split(',').
End of explanation
"""
# TODO: Replace <FILL IN> with appropriate code
shakeWordsRDD = shakespeareWordsRDD.filter(lambda x : x != '')
shakeWordCount = shakeWordsRDD.count()
print shakeWordCount
# TEST Remove empty elements (4e)
Test.assertEquals(shakeWordCount, 882996, 'incorrect value for shakeWordCount')
"""
Explanation: (4e) Remove empty elements
The next step is to filter out the empty elements. Remove all entries where the word is ''.
End of explanation
"""
# TODO: Replace <FILL IN> with appropriate code
top15WordsAndCounts = wordCount(shakeWordsRDD).takeOrdered(15, key = lambda x: -x[1])
print '\n'.join(map(lambda (w, c): '{0}: {1}'.format(w, c), top15WordsAndCounts))
# TEST Count the words (4f)
Test.assertEquals(top15WordsAndCounts,
[(u'the', 27361), (u'and', 26028), (u'i', 20681), (u'to', 19150), (u'of', 17463),
(u'a', 14593), (u'you', 13615), (u'my', 12481), (u'in', 10956), (u'that', 10890),
(u'is', 9134), (u'not', 8497), (u'with', 7771), (u'me', 7769), (u'it', 7678)],
'incorrect value for top15WordsAndCounts')
"""
Explanation: (4f) Count the words
We now have an RDD that is only words. Next, let's apply the wordCount() function to produce a list of word counts. We can view the top 15 words by using the takeOrdered() action; however, since the elements of the RDD are pairs, we need a custom sort function that sorts using the value part of the pair.
You'll notice that many of the words are common English words. These are called stopwords. In a later lab, we will see how to eliminate them from the results.
Use the wordCount() function and takeOrdered() to obtain the fifteen most common words and their counts.
End of explanation
"""
|
mirjalil/DataScience
|
bigdata-platforms/pyspark-get-started.ipynb
|
gpl-2.0
|
from pyspark import SparkContext
sc = SparkContext()
int_RDD = sc.parallelize(range(10), 3)
int_RDD
int_RDD.collect()
int_RDD.glom().collect()
"""
Explanation: PySpark for Data Analysis
Different ways of creating RDD
parallelize
read data from file
apply transformation to some existing RDDs
Basic Operations
End of explanation
"""
text = sc.textFile('file:///home/vahid/Github/DataScience/bigdata-platforms/data/31987-0.txt')
## read the first line
text.take(1)
"""
Explanation: Reading data from a text file
To read data from a local file, you need to specify the address by file://
textFile("file:///home/vahid/examplefile.txt")
But if the file is on HDFS, then we can specify the address by
textFile("/user/wordcount/input/examplefile.txt")
End of explanation
"""
text.take(3)
"""
Explanation: Take the first k elements (lines)
text.take(k)
End of explanation
"""
example = sc.textFile('data/example.txt')
# print the first line to make sure it's working
print(example.take(1))
def lower(line):
return(line.lower())
# apply lower() to each element:
example.map(lower).take(1)
def split(line):
return(line.split())
# apply split to each element, resulting in 0-more outputs --> flatMap
example.flatMap(split).take(5)
def create_keyval(word):
return(word, 1)
# Create key-value pairs for each split element --> map
example.flatMap(split).map(create_keyval).take(5)
def filterlen(word):
return(len(word)>5)
# filter split elements based on their character lengths
example.flatMap(split).filter(filterlen).collect()
"""
Explanation: Narrow Transformation
map: applies a function to each element of RDD.
flatMap: similar to map, except that here we can have 0 or more outputs for each element
filter: apply a boolean function to each element of RDD, resulting in filtering out based on that function
End of explanation
"""
pairs_RDD = example.flatMap(split).map(create_keyval)
for key,vals in pairs_RDD.groupByKey().take(5):
print(key, list(vals))
def sumvals(a, b):
return (a + b)
pairs_RDD.reduceByKey(sumvals).take(10)
"""
Explanation: Wide Transformation
groupByKey: groups all values that share a key into a single iterable, which requires shuffling every value across the cluster
reduceByKey: combines the values for each key with a reduce function, aggregating within each partition before the shuffle
repartition: changes the number of partitions of an RDD by redistributing its data
End of explanation
"""
het_RDD = sc.parallelize(
[['Alex', 23, 'CSE', 3.87],
['Bob', 24, 'ECE', 3.73],
['Max', 26, 'BCH', 3.44],
['Nikole', 25, 'CSE', 3.75],
['Jane', 22, 'ECE', 3.65],
['John', 22, 'BCH', 3.55]]
)
print(het_RDD.take(1))
het_RDD.collect()
"""
Explanation: Heterogeneous Data Types
End of explanation
"""
def extract_dept_grade(row):
return(row[2], row[3])
## apply extract_dept_grade function to each element
dept_grade_RDD = het_RDD.map(extract_dept_grade)
print(dept_grade_RDD.collect())
## find the max. for each dept
dept_grade_RDD.reduceByKey(max).collect()
"""
Explanation: Find the max. grade for each department
End of explanation
"""
from pyspark import SparkContext
sc = SparkContext()
from pyspark.sql import SQLContext
sqlContext = SQLContext(sc)
df = sqlContext.read.json('data/example-datafrae.json')
df.show()
df.printSchema()
df.select("name").show()
df.groupBy("major").count().show()
df.groupBy("major").mean("gpa").show()
df.groupBy("major").max("gpa").show()
"""
Explanation: Dataframes in Pyspark
End of explanation
"""
import os
import sys
# Configure the environment
if 'SPARK_HOME' not in os.environ:
home_folder = os.environ['HOME']
os.environ['SPARK_HOME'] = os.path.join(home_folder, 'apps/spark')
# Create a variable for our root path
SPARK_HOME = os.environ['SPARK_HOME']
# Add the PySpark/py4j to the Python Path
sys.path.insert(0, os.path.join(SPARK_HOME, "python", "build"))
sys.path.insert(0, os.path.join(SPARK_HOME, "python"))
"""
Explanation: Appendix: Installing Spark on Ubuntu
Download and extract the Spark package:
tar xvfz spark-1.5.2-bin-hadoop2.6.tgz
sudo mv spark-1.5.2-bin-hadoop2.6 $HOME/apps/spark/
cd $HOME/apps/spark/
Now, we need to add the SPARK_HOME location to the PATH environment variable
export SPARK_HOME=$HOME/apps/spark
export PATH=$SPARK_HOME/bin:$PATH
Now, you can launch pyspark by pyspark
<img src='pyspark-launch.png' >
Reduce the verbosity level
By default, pyspark will generate lots of log messages when you run some command, and we can see how that can be a problem. To reduce the verbosity, copy the template file in the conf folder
cp $SPARK_HOME/conf/log4j.properties.template $SPARK_HOME/conf/log4j.properties
and edit it by replacing INFO with WARN.
Using pyspark in iPython
In order to use pyspark in an iPython notebook, you need to configure it by adding a new file in the startup directory of ipython profile.
vim $HOME/.ipython/profile_default/startup/00-pyspark-setup.py
and add these contents in this file:
End of explanation
"""
print(SPARK_HOME)
from pyspark import SparkContext
sc = SparkContext( 'local', 'pyspark')
sc.parallelize(range(10), 3)
"""
Explanation: Now, you should be able to run ipython and use pyspark. Try running the following commands:
End of explanation
"""
|
amitkaps/machine-learning
|
cf_mba/notebook/1. Collaborative Filtering.ipynb
|
mit
|
#Import libraries
import pandas as pd
from scipy.spatial.distance import cosine
data = pd.read_csv("../data/groceries.csv")
data.head(100)
#Assume that for all items only one quantity was bought
"""
Explanation: Collaborative Filtering
Item Based: uses similarities between items' consumption histories
User Based: considers similarities between user consumption histories as well as item similarities
End of explanation
"""
data["Quantity"] = 1
data.head()
len(pd.unique(data.item))
#This particular view isn't very helpful for analysis.
#This way of data being arranged is called LONG
#We need it in wide format
#Converting data from long to wide format
dataWide = data.pivot("Person", "item", "Quantity")
dataWide.head()
"""
Explanation: Exercise 1 Add a column to data : Quantity that has value 1
End of explanation
"""
dataWide[dataWide.index==2]
dataWide.iloc[1:2,:]
dataWide.loc[2,:]
"""
Explanation: Exercise 2
Print the data for Person number 2
End of explanation
"""
dataWide.iloc[1,:]
#Replace NA with 0
dataWide.fillna(0, inplace=True)
dataWide.head()
"""
Explanation: Exercise 3 Print the data for row number 2
End of explanation
"""
#Drop the Person column
data_ib = dataWide.copy()
data_ib.head()
data_ib = data_ib.reset_index()
data_ib.head()
#Drop the Person column
#data_ib = data_ib.iloc[:,1:]
data_ib = data_ib.drop("Person", axis=1)
data_ib.head()
# Create a placeholder dataframe listing item vs. item
data_ibs = pd.DataFrame(index=data_ib.columns,
columns=data_ib.columns)
data_ibs.head()
"""
Explanation: Item-based Collaborative Filtering
In item based collaborative filtering we do not care about the user column
End of explanation
"""
for i in range(0,len(data_ibs.columns)) :
# Loop through the columns for each column
for j in range(0,len(data_ibs.columns)) :
# Fill in placeholder with cosine similarities
data_ibs.ix[i,j] = 1-cosine(data_ib.ix[:,i],data_ib.ix[:,j])
data_ibs.head()
"""
Explanation: Similarity Measure
We will now find similarities.
We will use cosine similarity
<img src="img/cosine.png" >
The resulting similarity ranges from −1 meaning exactly opposite, to 1 meaning exactly the same, with 0 indicating orthogonality (decorrelation), and in-between values indicating intermediate similarity or dissimilarity.
src https://en.wikipedia.org/wiki/Cosine_similarity
In essence, the cosine similarity takes the sum of the products of the two columns, then divides that by the product of the square roots of each column's sum of squares.
End of explanation
"""
data_neighbours = pd.DataFrame(index=data_ibs.columns,columns=range(1,4))
# Loop through our similarity dataframe and fill in neighbouring item names
for i in range(0,len(data_ibs.columns)):
data_neighbours.ix[i,:3] = data_ibs.ix[0:,i].sort_values(ascending=False)[:3].index
data_neighbours
"""
Explanation: With our similarity matrix filled out, we can look for each item's "neighbour" by looping through 'data_ibs', sorting each column in descending order, and grabbing the names of the top 3 products.
End of explanation
"""
data_neighbours = pd.DataFrame(index=data_ibs.columns,columns=range(1,11))
# Loop through our similarity dataframe and fill in neighbouring item names
for i in range(0,len(data_ibs.columns)):
data_neighbours.ix[i,:10] = data_ibs.ix[0:,i].sort_values(ascending=False)[:10].index
data_neighbours
"""
Explanation: Exercise 4 Modify the above code to print the top 10 similar products for each product
End of explanation
"""
#Helper function to get similarity scores
def getScore(history, similarities):
return sum(history*similarities)/sum(similarities)
#Understand what this function does !
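# (Added note) getScore returns a similarity-weighted average of the user's purchase history:
# items the user actually bought that are highly similar to the candidate product push the
# score towards 1, while dissimilar or unpurchased neighbours pull it towards 0.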
data_sims1 = dataWide.reset_index()
data_sims1.head()
# Create a place holder matrix for similarities, and fill in the user name column
data_sims = pd.DataFrame(index=data_sims1.index,columns=data_sims1.columns)
data_sims.ix[:,:1] = data_sims1.ix[:,:1]
#This is the same as our original data but with nothing filled in except the headers
data_sims.head()
data_sims12 = data_sims1.iloc[:500,:]
data_sims11 = data_sims.iloc[:500,:]
for i in range(0,len(data_sims11.index)):
for j in range(1,len(data_sims11.columns)):
user = data_sims11.index[i]
product = data_sims11.columns[j]
if data_sims12.ix[i][j] == 1:
data_sims11.ix[i][j] = 0
else:
product_top_names = data_neighbours.ix[product][1:10]
product_top_sims = data_ibs.ix[product].sort_values(ascending=False)[1:10]
user_purchases = data_ib.ix[user,product_top_names]
data_sims11.ix[i][j] = getScore(user_purchases,product_top_sims)
print i
# Get the top products
data_recommend = pd.DataFrame(index=data_sims.index, columns=['Person','1','2','3','4','5','6'])
data_recommend.ix[0:,0] = data_sims.ix[:,0]
# Instead of top product scores, we want to see names
for i in range(0,len(data_sims.index)):
data_recommend.ix[i,1:] = data_sims.ix[i,:].sort_values(ascending=False).ix[1:7,].index.transpose()
# Print a sample
data_recommend.ix[:10,:4]
"""
Explanation: User Based collaborative Filtering
The process for creating a User Based recommendation system is as follows:
Have Item-Based similarity matrix
Check which items the user has consumed
For each item the user has consumed, get the top X neighbours
Get the consumption record of the user for each neighbour.
Compute similarity score
Recommend the items with the highest score
End of explanation
"""
|
philmui/datascience2016fall
|
lecture02.ingestion/lecture02.ingestion.ipynb
|
mit
|
from __future__ import print_function
import csv
my_reader = csv.DictReader(open('data/eu_revolving_loans.csv', 'r'))
"""
Explanation: Lecture 01 : intro, inputs, numpy, pandas
1. Inputs: CSV / Text
We will start by ingesting plain text.
End of explanation
"""
for line in my_reader:
print(line)
"""
Explanation: DictReader returns a "generator" -- which means that we can only iterate over the returned row dictionaries once.
Let's just print out line by line to see what we are reading in:
End of explanation
"""
import pandas as pd
df = pd.read_csv('data/eu_revolving_loans.csv')
df.head()
"""
Explanation: Since the data is in tabular format, pandas is ideally suited for such data. There are convenient pandas import functions for reading in tabular data.
Pandas provides direct csv ingestion into "data frames":
End of explanation
"""
df = pd.read_csv('data/eu_revolving_loans.csv', header=[1,2,4], index_col=0)
df.head()
"""
Explanation: As we briefly discussed last week, simply reading in without any configuration generates a fairly messy data frame. We should give pandas some hints as to where the header rows are and which column is the index:
End of explanation
"""
from __future__ import print_function
from openpyxl import load_workbook
"""
Explanation: 2. Inputs: Excel
Many organizations still use Excel as the common medium for communicating data and analysis. We will look quickly at how to ingest Excel data. There are many packages available to read Excel files. We will use one popular one here.
End of explanation
"""
!open 'data/climate_change_download_0.xlsx'
"""
Explanation: Let's take a look at the Excel file that we want to read into Jupyter
End of explanation
"""
wb = load_workbook(filename='data/climate_change_download_0.xlsx')
"""
Explanation: Here is how we can read the Excel file into the Jupyter environment.
End of explanation
"""
wb.get_sheet_names()
"""
Explanation: What are the "sheets" in this workbook?
End of explanation
"""
ws = wb.get_sheet_by_name('Data')
"""
Explanation: We will focus on the sheet 'Data':
End of explanation
"""
for row in ws.rows:
for cell in row:
print(cell.value)
"""
Explanation: For the sheet "Data", let's print out the content cell-by-cell to view the content.
End of explanation
"""
import pandas as pd
df = pd.read_excel('data/climate_change_download_0.xlsx')
df.head()
"""
Explanation: Pandas also provides direct Excel data ingest:
End of explanation
"""
df = pd.read_excel('data/GHE_DALY_Global_2000_2012.xls', sheetname='Global2012', header=[4,5])
"""
Explanation: Here is another example with multiple sheets:
End of explanation
"""
df.columns
"""
Explanation: This dataframe has a "multi-level" column index:
End of explanation
"""
df.to_excel('data/my_excel.xlsx')
!open 'data/my_excel.xlsx'
"""
Explanation: How do we export a dataframe back to Excel?
End of explanation
"""
import pdftables
my_pdf = open('data/WEF_GlobalCompetitivenessReport_2014-15.pdf', 'rb')
chart_page = pdftables.get_pdf_page(my_pdf, 29)
"""
Explanation: 3. Inputs: PDF
PDF is also a common communication medium about data and analysis. Let's look at how one can read data from PDF into Python.
End of explanation
"""
table = pdftables.page_to_tables(chart_page)
titles = zip(table[0][0], table[0][1])[:5]
titles = [''.join([title[0], title[1]]) for title in titles]
print(titles)
"""
Explanation: PDF is a proprietary file format with specific tagging that has been reverse engineered. Let's take a look at some structures in this file.
End of explanation
"""
all_rows = []
for row_data in table[0][2:]:
all_rows.extend([row_data[:5], row_data[5:]])
print(all_rows)
"""
Explanation: There is a table with structured data that we can peel out:
End of explanation
"""
from ConfigParser import ConfigParser
config = ConfigParser()
config.read('../cfg/sample.cfg')
config.sections()
"""
Explanation: 4. Configurations
End of explanation
"""
import tweepy
auth = tweepy.OAuthHandler(config.get('twitter', 'consumer_key'), config.get('twitter', 'consumer_secret'))
auth.set_access_token(config.get('twitter','access_token'), config.get('twitter','access_token_secret'))
auth
"""
Explanation: 5. APIs
Getting Twitter data from API
Relevant links to the exercise here:
Twitter Streaming: https://dev/twitter.com/streaming/overview
API client: https://github.com/tweepy/tweepy
Twitter app: https://apps.twitter.com
Create an authentication handler
End of explanation
"""
api = tweepy.API(auth)
"""
Explanation: Create an API endpoint
End of explanation
"""
python_tweets = api.search('turkey')
for tweet in python_tweets:
print(tweet.text)
"""
Explanation: Try REST-ful API call to Twitter
End of explanation
"""
from pprint import pprint
import requests
weather_key = config.get('openweathermap', 'api_key')
res = requests.get("http://api.openweathermap.org/data/2.5/weather",
params={"q": "San Francisco", "appid": weather_key, "units": "metric"})
pprint(res.json())
"""
Explanation: For streaming API call, we should run a standalone python program: tweetering.py
Input & Output to OpenWeatherMap API
Relevant links to the exercise here:
http://openweathermap.org/
http://openweathermap.org/current
API call:
```
api.openweathermap.org/data/2.5/weather?q={city name}
api.openweathermap.org/data/2.5/weather?q={city name},{country code}
```
Parameters:
q city name and country code divided by comma, use ISO 3166 country codes
Examples of API calls:
```
api.openweathermap.org/data/2.5/weather?q=London
api.openweathermap.org/data/2.5/weather?q=London,uk
```
End of explanation
"""
import requests
"""
Explanation: 6. Python requests
"requests" is a wonderful HTTP library for Python, with the right level of abstraction to avoid lots of tedious plumbing (manually add query strings to your URLs, or to form-encode your POST data). Keep-alive and HTTP connection pooling are 100% automatic, powered by urllib3, which is embedded within Requests)
```
r = requests.get('https://api.github.com/user', auth=('user', 'pass'))
r.status_code
200
r.headers['content-type']
'application/json; charset=utf8'
r.encoding
'utf-8'
r.text
u'{"type":"User"...'
r.json()
{u'private_gists': 419, u'total_private_repos': 77, ...}
```
There is a lot of great documentation at the python-requests site -- we are extracting selected highlights from there for your convenience here.
Making a request
Making a request with Requests is very simple.
Begin by importing the Requests module:
End of explanation
"""
r = requests.get('https://api.github.com/events')
"""
Explanation: Now, let's try to get a webpage. For this example, let's get GitHub's public timeline
End of explanation
"""
r = requests.post('http://httpbin.org/post', data = {'key':'value'})
"""
Explanation: Now, we have a Response object called r. We can get all the information we need from this object.
Requests' simple API means that all forms of HTTP request are as obvious. For example, this is how you make an HTTP POST request:
End of explanation
"""
r = requests.put('http://httpbin.org/put', data = {'key':'value'})
r = requests.delete('http://httpbin.org/delete')
r = requests.head('http://httpbin.org/get')
r = requests.options('http://httpbin.org/get')
"""
Explanation: What about the other HTTP request types: PUT, DELETE, HEAD and OPTIONS? These are all just as simple:
End of explanation
"""
payload = {'key1': 'value1', 'key2': 'value2'}
r = requests.get('http://httpbin.org/get', params=payload)
"""
Explanation: Passing Parameters In URLs
You often want to send some sort of data in the URL's query string. If you were constructing the URL by hand, this data would be given as key/value pairs in the URL after a question mark, e.g. httpbin.org/get?key=val. Requests allows you to provide these arguments as a dictionary, using the params keyword argument. As an example, if you wanted to pass key1=value1 and key2=value2 to httpbin.org/get, you would use the following code:
End of explanation
"""
print(r.url)
"""
Explanation: You can see that the URL has been correctly encoded by printing the URL:
End of explanation
"""
payload = {'key1': 'value1', 'key2': ['value2', 'value3']}
r = requests.get('http://httpbin.org/get', params=payload)
print(r.url)
"""
Explanation: Note that any dictionary key whose value is None will not be added to the URL's query string.
You can also pass a list of items as a value:
End of explanation
"""
import requests
r = requests.get('https://api.github.com/events')
r.text
"""
Explanation: Response Content
We can read the content of the server's response. Consider the GitHub timeline again:
End of explanation
"""
r.encoding
r.encoding = 'ISO-8859-1'
"""
Explanation: Requests will automatically decode content from the server. Most unicode charsets are seamlessly decoded.
When you make a request, Requests makes educated guesses about the encoding of the response based on the HTTP headers. The text encoding guessed by Requests is used when you access r.text. You can find out what encoding Requests is using, and change it, using the r.encoding property:
End of explanation
"""
import requests
r = requests.get('https://api.github.com/events')
r.json()
"""
Explanation: If you change the encoding, Requests will use the new value of r.encoding whenever you call r.text. You might want to do this in any situation where you can apply special logic to work out what the encoding of the content will be. For example, HTTP and XML have the ability to specify their encoding in their body. In situations like this, you should use r.content to find the encoding, and then set r.encoding. This will let you use r.text with the correct encoding.
Requests will also use custom encodings in the event that you need them. If you have created your own encoding and registered it with the codecs module, you can simply use the codec name as the value of r.encoding and Requests will handle the decoding for you.
JSON Response Content
There's also a builtin JSON decoder, in case you're dealing with JSON data:
End of explanation
"""
r.status_code
"""
Explanation: In case the JSON decoding fails, r.json raises an exception. For example, if the response gets a 204 (No Content), or if the response contains invalid JSON, attempting r.json raises ValueError: No JSON object could be decoded.
It should be noted that the success of the call to r.json does not indicate the success of the response. Some servers may return a JSON object in a failed response (e.g. error details with HTTP 500). Such JSON will be decoded and returned. To check that a request is successful, use r.raise_for_status() or check r.status_code is what you expect.
End of explanation
"""
url = 'https://api.github.com/some/endpoint'
headers = {'user-agent': 'my-app/0.0.1'}
r = requests.get(url, headers=headers)
"""
Explanation: Custom Headers
If you'd like to add HTTP headers to a request, simply pass in a dict to the headers parameter.
For example, we didn't specify our user-agent in the previous example:
End of explanation
"""
r.headers
"""
Explanation: Note: Custom headers are given less precedence than more specific sources of information. For instance:
Authorization headers set with headers= will be overridden if credentials are specified in .netrc, which in turn will be overridden by the auth= parameter.
Authorization headers will be removed if you get redirected off-host.
Proxy-Authorization headers will be overridden by proxy credentials provided in the URL.
Content-Length headers will be overridden when we can determine the length of the content.
Response Headers
We can view the server's response headers using a Python dictionary:
End of explanation
"""
r.headers['Content-Type']
r.headers.get('content-type')
"""
Explanation: The dictionary is special, though: it's made just for HTTP headers. According to RFC 7230, HTTP Header names are case-insensitive.
So, we can access the headers using any capitalization we want:
End of explanation
"""
url = 'http://www.cnn.com'
r = requests.get(url)
print(r.cookies.items())
"""
Explanation: Cookies
If a response contains some Cookies, you can quickly access them:
End of explanation
"""
url = 'http://httpbin.org/cookies'
cookies = dict(cookies_are='working')
r = requests.get(url, cookies=cookies)
r.text
"""
Explanation: To send your own cookies to the server, you can use the cookies parameter:
End of explanation
"""
r = requests.get('http://github.com')
r.url
r.status_code
r.history
"""
Explanation: Redirection and History
By default Requests will perform location redirection for all verbs except HEAD.
We can use the history property of the Response object to track redirection.
The Response.history list contains the Response objects that were created in order to complete the request. The list is sorted from the oldest to the most recent response.
For example, GitHub redirects all HTTP requests to HTTPS:
End of explanation
"""
r = requests.get('http://github.com', allow_redirects=False)
r.status_code
r.history
"""
Explanation: If you're using GET, OPTIONS, POST, PUT, PATCH or DELETE, you can disable redirection handling with the allow_redirects parameter:
End of explanation
"""
r = requests.head('http://github.com', allow_redirects=True)
r.url
r.history
"""
Explanation: If you're using HEAD, you can enable redirection as well:
End of explanation
"""
requests.get('http://github.com', timeout=1)
"""
Explanation: Timeouts
You can tell Requests to stop waiting for a response after a given number of seconds with the timeout parameter:
End of explanation
"""
|
nate-d-olson/micro_rm_dev
|
dev/.ipynb_checkpoints/notebook_2014_12_12-checkpoint.ipynb
|
gpl-2.0
|
%%bash
java -Xmx4G -jar ../utilities/pilon-1.10.jar \
--genome ../data/RM8375/ref/CFSAN008157.HGAP.fasta \
--frags ../analysis/bioinf/sequence_purity/mapping/SRR1555296.bam \
--changes --vcf --tracks \
--fix "all" --debug #note --fix "all" default
"""
Explanation: Testing out Pilon
Objective: Test out using Pilon on the MiSeq and PGM data
Approach: Run Pilon using bash commands, then evaluate the revised reference relative to HGAP and GenBank LT2
Testing a single paired-end dataset
End of explanation
"""
%%bash
python ../analysis/bioinf/sequence_purity/run_bwa_mem_pe.py \
../analysis/bioinf/sequence_purity/bwa_mem_pipeline_params.txt
"""
Explanation: Error with head, needed to rerun bwa with revised params file
Error with memory allocation; not sure if too many applications were running on the laptop, or if Docker memory restrictions are to blame
* re-running after closing windows and restarting Finder (it hung up on moving files to trash)
* Need to look into changing the amount of memory allocated to the IPython notebook
End of explanation
"""
%%bash
java -Xmx8G -jar ../utilities/pilon-1.10.jar \
--genome ../data/RM8375/ref/CFSAN008157.HGAP.fasta \
--unpaired ../data/RM8375/PGM/bam/IonXpress_001_R_2014_03_23_18_22_09_user_SN2-17-8375_Orthogonal_Measurement_1_Run_2_PacBioRef2.bam \
--changes --vcf --tracks \
--fix "all" --debug #note --fix "all" default
"""
Explanation: Testing pilon on PGM data
End of explanation
"""
|
ljo/collatex-tutorial
|
unit5/CollateX and XML, Part 2.ipynb
|
gpl-3.0
|
from collatex import *
from lxml import etree
import json,re
"""
Explanation: CollateX and XML, Part 2
David J. Birnbaum (djbpitt@gmail.com, http://www.obdurodon.org), 2015-06-29
This example collates a single line of XML from four witnesses. In Part 1 we spelled out the details step by step in a way that would not be used in a real project, but that made it easy to see how each step moves toward the final result. In Part 2 we employ three classes (WitnessSet, Line, Word) to make the code more extensible and adaptable.
The sample input is still a single line for four witnesses, given as strings within the Python script. This time, though, the witness identifier (siglum) is given as an attribute on the XML input line.
Load libraries. Unchanged from Part 1.
End of explanation
"""
class WitnessSet:
def __init__(self,witnessList):
self.witnessList = witnessList
def generate_json_input(self):
json_input = {}
witnesses = []
json_input['witnesses'] = witnesses
for witness in self.witnessList:
line = Line(witness)
witnessData = {}
witnessData['id'] = line.siglum()
witnessTokens = {}
witnessData['tokens'] = line.tokens()
witnesses.append(witnessData)
return json_input
"""
Explanation: The WitnessSet class represents all of the witnesses being collated. The generate_json_input() method returns a JSON object that is suitable for input into CollateX.
At the moment each witness contains just one line (<l> element), so the entire witness is treated as a line. In future parts of this tutorial, the lines will be processed individually, segmenting the collation task into subtasks that collate just one line at a time.
End of explanation
"""
class Line:
addWMilestones = etree.XML("""
<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
<xsl:output method="xml" indent="no" encoding="UTF-8" omit-xml-declaration="yes"/>
<xsl:template match="*|@*">
<xsl:copy>
<xsl:apply-templates select="node() | @*"/>
</xsl:copy>
</xsl:template>
<xsl:template match="/*">
<xsl:copy>
<xsl:apply-templates select="@*"/>
<!-- insert a <w/> milestone before the first word -->
<w/>
<xsl:apply-templates/>
</xsl:copy>
</xsl:template>
<!-- convert <add>, <sic>, and <crease> to milestones (and leave them that way)
CUSTOMIZE HERE: add other elements that may span multiple word tokens
-->
<xsl:template match="add | sic | crease ">
<xsl:element name="{name()}">
<xsl:attribute name="n">start</xsl:attribute>
</xsl:element>
<xsl:apply-templates/>
<xsl:element name="{name()}">
<xsl:attribute name="n">end</xsl:attribute>
</xsl:element>
</xsl:template>
<xsl:template match="note"/>
<xsl:template match="text()">
<xsl:call-template name="whiteSpace">
<xsl:with-param name="input" select="translate(.,'
',' ')"/>
</xsl:call-template>
</xsl:template>
<xsl:template name="whiteSpace">
<xsl:param name="input"/>
<xsl:choose>
<xsl:when test="not(contains($input, ' '))">
<xsl:value-of select="$input"/>
</xsl:when>
<xsl:when test="starts-with($input,' ')">
<xsl:call-template name="whiteSpace">
<xsl:with-param name="input" select="substring($input,2)"/>
</xsl:call-template>
</xsl:when>
<xsl:otherwise>
<xsl:value-of select="substring-before($input, ' ')"/>
<w/>
<xsl:call-template name="whiteSpace">
<xsl:with-param name="input" select="substring-after($input,' ')"/>
</xsl:call-template>
</xsl:otherwise>
</xsl:choose>
</xsl:template>
</xsl:stylesheet>
""")
transformAddW = etree.XSLT(addWMilestones)
xsltWrapW = etree.XML('''
<xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform" version="1.0">
<xsl:output method="xml" indent="no" omit-xml-declaration="yes"/>
<xsl:template match="/*">
<xsl:copy>
<xsl:apply-templates select="w"/>
</xsl:copy>
</xsl:template>
<xsl:template match="w">
<!-- faking <xsl:for-each-group> as well as the "<<" and "except" operators -->
<xsl:variable name="tooFar" select="following-sibling::w[1] | following-sibling::w[1]/following::node()"/>
<w>
<xsl:copy-of select="following-sibling::node()[count(. | $tooFar) != count($tooFar)]"/>
</w>
</xsl:template>
</xsl:stylesheet>
''')
transformWrapW = etree.XSLT(xsltWrapW)
def __init__(self,line):
self.line = line
def siglum(self):
return str(etree.XML(self.line).xpath('/l/@wit')[0])
def tokens(self):
return [Word(token).createToken() for token in Line.transformWrapW(Line.transformAddW(etree.XML(self.line))).xpath('//w')]
"""
Explanation: The Line class contains methods applied to individual lines (note that each witness in this part of the tutorial consists of only a single line). The XSLT stylesheets and the functions to use them have been moved into the Line class, since they apply to individual lines. The siglum() method returns the manuscript identifier and the tokens() method returns a list of JSON objects, one for each word token.
With a witness that contained more than one line, the siglum would be a property of the witness and the tokens would be a property of each line of the witness. In this part of the tutorial, since each witness has only one line, the siglum is recorded as an attribute of the line, rather than of an XML ancestor that contains all of the lines of the witness.
End of explanation
"""
class Word:
unwrapRegex = re.compile('<w>(.*)</w>')
stripTagsRegex = re.compile('<.*?>')
def __init__(self,word):
self.word = word
def unwrap(self):
return Word.unwrapRegex.match(etree.tostring(self.word,encoding='unicode')).group(1)
def normalize(self):
return Word.stripTagsRegex.sub('',self.unwrap().lower())
def createToken(self):
token = {}
token['t'] = self.unwrap()
token['n'] = self.normalize()
return token
"""
Explanation: The Word class contains methods that apply to individual words. unwrap() and normalize() are private; they are used by createToken() to return a JSON object with the "t" and "n" properties for a word token.
End of explanation
"""
A = """<l wit='A'><abbrev>Et</abbrev>cil i partent seulement</l>"""
B = """<l wit='B'><abbrev>Et</abbrev>cil i p<abbrev>er</abbrev>dent ausem<abbrev>en</abbrev>t</l>"""
C = """<l wit='C'><abbrev>Et</abbrev>cil i p<abbrev>ar</abbrev>tent seulema<abbrev>n</abbrev>t</l>"""
D = """<l wit='D'>E cil i partent sulement</l>"""
witnessSet = WitnessSet([A,B,C,D])
"""
Explanation: Create XML data and assign to a witnessSet variable
End of explanation
"""
json_input = witnessSet.generate_json_input()
print(json_input)
"""
Explanation: Generate JSON from the data and examine it
End of explanation
"""
collationText = collate_pretokenized_json(json_input,output='table',layout='vertical')
print(collationText)
collationJSON = collate_pretokenized_json(json_input,output='json')
print(collationJSON)
collationHTML2 = collate_pretokenized_json(json_input,output='html2')
"""
Explanation: Collate and output the results as a plain-text alignment table, as JSON, and as colored HTML
End of explanation
"""
|
batfish/pybatfish
|
docs/source/notebooks/filters.ipynb
|
apache-2.0
|
bf.set_network('generate_questions')
bf.set_snapshot('generate_questions')
"""
Explanation: Access-lists and firewall rules
This category of questions allows you to analyze the behavior of access
control lists and firewall rules. It also allows you to comprehensively
validate (aka verification) that some traffic is or is not allowed.
Filter Line Reachability
Search Filters
Test Filters
Find Matching Filter Lines
End of explanation
"""
result = bf.q.filterLineReachability().answer().frame()
"""
Explanation: Filter Line Reachability
Returns unreachable lines in filters (ACLs and firewall rules).
Finds all lines in the specified filters that will not match any packet, either because they are shadowed by prior lines or because their match conditions are empty.
Inputs
Name | Description | Type | Optional | Default Value
--- | --- | --- | --- | ---
nodes | Examine filters on nodes matching this specifier. | NodeSpec | True |
filters | Specifier for filters to test. | FilterSpec | True |
ignoreComposites | Whether to ignore filters that are composed of multiple filters defined in the configs. | bool | True | False
Invocation
End of explanation
"""
result.head(5)
"""
Explanation: Return Value
Name | Description | Type
--- | --- | ---
Sources | Filter sources | List of str
Unreachable_Line | Filter line that cannot be matched (i.e., unreachable) | str
Unreachable_Line_Action | Action performed by the unreachable line (e.g., PERMIT or DENY) | str
Blocking_Lines | Lines that, when combined, cover the unreachable line | List of str
Different_Action | Whether unreachable line has an action different from the blocking line(s) | bool
Reason | The reason a line is unreachable | str
Additional_Info | Additional information | str
Print the first 5 rows of the returned Dataframe
End of explanation
"""
result.iloc[0]
bf.set_network('generate_questions')
bf.set_snapshot('filters')
"""
Explanation: Print the first row of the returned Dataframe
End of explanation
"""
result = bf.q.searchFilters(headers=HeaderConstraints(srcIps='10.10.10.0/24', dstIps='218.8.104.58', applications = ['dns']), action='deny', filters='acl_in').answer().frame()
"""
Explanation: Search Filters
Finds flows for which a filter takes a particular behavior.
This question searches for flows for which a filter (access control list) has a particular behavior. The behaviors can be: that the filter permits the flow (permit), that it denies the flow (deny), or that the flow is matched by a particular line (matchLine <lineNumber>). Filters are selected using node and filter specifiers, which might match multiple filters. In this case, a (possibly different) flow will be found for each filter.
Inputs
Name | Description | Type | Optional | Default Value
--- | --- | --- | --- | ---
nodes | Only evaluate filters present on nodes matching this specifier. | NodeSpec | True |
filters | Only evaluate filters that match this specifier. | FilterSpec | True |
headers | Packet header constraints on the flows being searched. | HeaderConstraints | True |
action | The behavior that you want evaluated. Specify exactly one of permit, deny, or matchLine <line number>. | str | True |
startLocation | Only consider specified locations as possible sources. | LocationSpec | True |
invertSearch | Search for packet headers outside the specified headerspace, rather than inside the space. | bool | True |
Invocation
End of explanation
"""
result.head(5)
"""
Explanation: Return Value
Name | Description | Type
--- | --- | ---
Node | Node | str
Filter_Name | Filter name | str
Flow | Evaluated flow | Flow
Action | Outcome | str
Line_Content | Line content | str
Trace | ACL trace | List of TraceTree
Print the first 5 rows of the returned Dataframe
End of explanation
"""
result.iloc[0]
bf.set_network('generate_questions')
bf.set_snapshot('filters')
"""
Explanation: Print the first row of the returned Dataframe
End of explanation
"""
result = bf.q.testFilters(headers=HeaderConstraints(srcIps='10.10.10.1', dstIps='218.8.104.58', applications = ['dns']), nodes='rtr-with-acl', filters='acl_in').answer().frame()
"""
Explanation: Test Filters
Returns how a flow is processed by a filter (ACLs, firewall rules).
Shows how the specified flow is processed through the specified filters, returning its permit/deny status as well as the line(s) it matched.
Inputs
Name | Description | Type | Optional | Default Value
--- | --- | --- | --- | ---
nodes | Only examine filters on nodes matching this specifier. | NodeSpec | True |
filters | Only consider filters that match this specifier. | FilterSpec | True |
headers | Packet header constraints. | HeaderConstraints | False |
startLocation | Location to start tracing from. | LocationSpec | True |
Invocation
End of explanation
"""
result.head(5)
"""
Explanation: Return Value
Name | Description | Type
--- | --- | ---
Node | Node | str
Filter_Name | Filter name | str
Flow | Evaluated flow | Flow
Action | Outcome | str
Line_Content | Line content | str
Trace | ACL trace | List of TraceTree
Print the first 5 rows of the returned Dataframe
End of explanation
"""
result.iloc[0]
bf.set_network('generate_questions')
bf.set_snapshot('generate_questions')
"""
Explanation: Print the first row of the returned Dataframe
End of explanation
"""
result = bf.q.findMatchingFilterLines(headers=HeaderConstraints(applications='DNS')).answer().frame()
"""
Explanation: Find Matching Filter Lines
Returns lines in filters (ACLs and firewall rules) that match any packet within the specified header constraints.
Finds all lines in the specified filters that match any packet within the specified header constraints.
Inputs
Name | Description | Type | Optional | Default Value
--- | --- | --- | --- | ---
nodes | Examine filters on nodes matching this specifier. | NodeSpec | True |
filters | Specifier for filters to check. | FilterSpec | True |
headers | Packet header constraints for which to find matching filter lines. | HeaderConstraints | True |
action | Show filter lines with this action. By default returns lines with either action. | str | True |
ignoreComposites | Whether to ignore filters that are composed of multiple filters defined in the configs. | bool | True | False
Invocation
End of explanation
"""
result.head(5)
"""
Explanation: Return Value
Name | Description | Type
--- | --- | ---
Node | Node | str
Filter | Filter name | str
Line | Line text | str
Line_Index | Index of line | int
Action | Action performed by the line (e.g., PERMIT or DENY) | str
Print the first 5 rows of the returned Dataframe
End of explanation
"""
result.iloc[0]
"""
Explanation: Print the first row of the returned Dataframe
End of explanation
"""
|
LSSTDESC/Monitor
|
examples/depth_curve_example.ipynb
|
bsd-3-clause
|
import desc.monitor
import matplotlib.pyplot as plt
%matplotlib inline
%load_ext autoreload
%autoreload 2
"""
Explanation: Measuring 5-sigma Depth Curves
In this notebook we will extract an object light curve from the Twinkles field, and measure the 5-sigma limiting depth at each epoch. The reason to do this is to start trying to understand the error properties of the Monitor light curves (including their biases) as a function of observation properties, such as image quality and image depth.
Requirements
You will need the DESC Monitor and its dependencies.
You will also need to set up an SSH tunnel to NERSC scidb where the Twinkles PServ data is stored. Follow directions here and use the following code from the command line.
ssh -L 3307:scidb1.nersc.gov:3306 $USER@cori.nersc.gov
End of explanation
"""
dbConn = desc.monitor.DBInterface(database='DESC_Twinkles_Level_2',
#if running from ssh-tunnel uncomment below
#host='127.0.0.1', port=3307,
#or if running jupyter-dev uncomment below
host='scidb1.nersc.gov', port=3306,
driver='mysql')
lc = desc.monitor.LightCurve(dbConn)
lc.build_lightcurve_from_db(objid=48253)
fig = lc.visualize_lightcurve()
"""
Explanation: An Example Object Light Curve
Let's pull out one of the Twinkles objects and visualize it.
End of explanation
"""
worker = desc.monitor.Monitor(dbConn)
worker.get_lightcurves([48253])
worker.return_lightcurve[48253].visualize_lightcurve(using='flux')
plt.show()
"""
Explanation: The Matching Depth Curve
Now let's measure the 5-sigma limiting depth (for a point source). We do this by selecting a number of stars from the field, and then for each epoch, querying their flux errors, converting to limiting depth, and then averaging (with sigma-clipping) over the ensemble. All this is done by the measure_depth_curve() method.
End of explanation
"""
dc = worker.measure_depth_curve(using='DM_modified')
fig = worker.return_lightcurve[48253].visualize_lightcurve(using='mag')
dc.visualize_lightcurve(using='mag', include_errors=False, use_existing_fig=fig)
plt.show()
sc = worker.measure_seeing_curve()
fig = sc.visualize_seeing_curve()
plt.show()
"""
Explanation: We have three methods for calculating visit depth: DM, DM_modified, or stars. DM and DM_modified take the sky_noise and seeing values from the visit table in the NERSC database to do the calculations. stars uses the flux errors in the stars observed in the Twinkles visits to make the calculation. More information is available in our error model notebook also in this folder.
End of explanation
"""
|
pombredanne/https-gitlab.lrde.epita.fr-vcsn-vcsn
|
doc/notebooks/automaton.is_coaccessible.ipynb
|
gpl-3.0
|
import vcsn
"""
Explanation: automaton.is_coaccessible
Whether all its states are coaccessible, i.e., its transposed automaton is accessible; in other words, all its states can reach a final state.
Preconditions:
- None
See also:
- automaton.coaccessible
- automaton.is_accessible
- automaton.trim
Examples
End of explanation
"""
%%automaton a
context = "lal_char(abc), b"
$ -> 0
0 -> 1 a
1 -> $
2 -> 0 a
1 -> 3 a
a.is_coaccessible()
"""
Explanation: State 3 of the following automaton cannot reach a final state.
End of explanation
"""
a.coaccessible()
a.coaccessible().is_coaccessible()
"""
Explanation: Calling coaccessible returns a copy of the automaton without non-coaccessible states:
End of explanation
"""
|
ccwang002/play_aiohttp
|
1_demo.ipynb
|
mit
|
@asyncio.coroutine
def quote_simple(url='http://localhost:5566/quote/uniform', slow=False):
r = yield from aiohttp.request(
'GET', url, params={'slow': True} if slow else {}
)
if r.status != 200:
logger.error('Unsuccessful response [Status: %s (%d)]'
% (r.reason, r.status))
r.close(force=True)
return None
quote_json = yield from r.json()
return quote_json['quote']
loop = asyncio.get_event_loop()
"""
Explanation: Basic
End of explanation
"""
coro = quote_simple()
quote = loop.run_until_complete(coro)
quote
"""
Explanation: To run a simple asyncio coroutine.
End of explanation
"""
task = asyncio.Task(quote_simple())
quote = loop.run_until_complete(task)
quote
"""
Explanation: Internally asyncio wraps it with [asyncio.Task].
So the following works equivalently.
End of explanation
"""
type(coro), type(task)
"""
Explanation: However, coro is a coroutine, while task is a Task (a subclass of [Future]).
One can use asyncio.ensure_future to make sure a Future object is returned.
End of explanation
"""
quote = loop.run_until_complete(
quote_simple(url='http://localhost:5566/quote/uniform?part=100')
)
"""
Explanation: Passing a wrong URL gives an error
End of explanation
"""
@asyncio.coroutine
def quote_many_naive(num_quotes=1):
coroutines = [
quote_simple(slow=True) for i in range(num_quotes)
]
quotes = yield from (asyncio.gather(*coroutines))
return quotes
%%time
quotes = loop.run_until_complete(quote_many_naive(2000))
"""
Explanation: Multiple Concurrent Requests
End of explanation
"""
@asyncio.coroutine
def quote(conn, url='http://localhost:5566/quote/uniform', slow=False):
r = yield from aiohttp.request(
'GET', url, params={'slow': True} if slow else {},
connector=conn
)
if r.status != 200:
logger.error('Unsuccessful response [Status: %s (%d)]'
% (r.reason, r.status))
r.close(force=True)
return None
quote_json = yield from r.json()
r.close(force=True)
return quote_json['quote']
@asyncio.coroutine
def quote_many(num_quotes=1, conn_limit=20):
conn = aiohttp.TCPConnector(keepalive_timeout=1, force_close=True, limit=conn_limit)
coroutines = [
quote(conn) for i in range(num_quotes)
]
quotes = yield from (asyncio.gather(*coroutines))
return quotes
%%time
quotes = loop.run_until_complete(quote_many(2000, conn_limit=100))
"""
Explanation: This is not helping since we open 2000 connections at a time. It is slower than expected.
Limiting connection pool size
Ref on official site.
End of explanation
"""
def quote_with_lock(semaphore, url='http://localhost:5566/quote/uniform'):
with (yield from semaphore):
r = yield from aiohttp.request('GET', url)
if r.status != 200:
logger.error('Unsuccessful response [Status: %s (%d)]'
% (r.reason, r.status))
r.close(force=True)
return None
quote_json = yield from r.json()
r.close(force=True)
return quote_json['quote']
@asyncio.coroutine
def quote_many(num_quotes=1, conn_limit=20):
semaphore = asyncio.Semaphore(conn_limit)
coroutines = [
quote_with_lock(semaphore) for i in range(num_quotes)
]
quotes = yield from (asyncio.gather(*coroutines))
return quotes
%%time
quotes = loop.run_until_complete(quote_many(2000, conn_limit=100))
"""
Explanation: For reasons that are unclear, aiohttp's internal connection limit is slow here, but we can implement one ourselves.
Custom connection limit using semaphore
Use [asyncio.Semaphore] as a lock that caps the number of concurrent requests.
End of explanation
"""
@asyncio.coroutine
def quote_many(num_quotes=1, conn_limit=20, progress=None, step=10):
if progress is None:
progress = widgets.IntProgress()
progress.max = num_quotes // step
ipydisplay(progress)
semaphore = asyncio.Semaphore(conn_limit)
coroutines = [
quote_with_lock(semaphore) for i in range(num_quotes)
]
# quotes = yield from (asyncio.gather(*coroutines))
quotes = []
for ith, coro in enumerate(asyncio.as_completed(coroutines), 1):
if ith % step == 0:
progress.value += 1
q = yield from coro
quotes.append(q)
return quotes
%%time
quotes = loop.run_until_complete(quote_many(2000, conn_limit=100, step=1))
"""
Explanation: Add Progressbar
If you don't care about preserving the original order of the coroutines, asyncio.as_completed lets you update the progress bar as each request finishes.
End of explanation
"""
%%time
quotes = loop.run_until_complete(quote_many(2000, conn_limit=100, step=20))
"""
Explanation: For fast responses, the progress bar itself introduces considerable latency. Try setting a higher step value.
End of explanation
"""
@asyncio.coroutine
def quote_many(num_quotes=1, conn_limit=20, progress=None, step=10):
if progress is None:
progress = widgets.IntProgress()
progress.max = num_quotes // step
ipydisplay(progress)
# create the lock
semaphore = asyncio.Semaphore(conn_limit)
finished_task_count = 0
def progress_adder(fut):
nonlocal finished_task_count
finished_task_count += 1
if finished_task_count % step == 0:
progress.value += 1
# wrap coroutines as Tasks
futures = []
for i in range(num_quotes):
task = asyncio.Task(quote_with_lock(semaphore))
task.add_done_callback(progress_adder)
futures.append(task)
quotes = yield from (asyncio.gather(*futures))
return quotes
%%time
quotes = loop.run_until_complete(quote_many(2000, conn_limit=100, step=1))
%%time
quotes = loop.run_until_complete(quote_many(2000, conn_limit=100, step=20))
"""
Explanation: Original order matters
Here asyncio.gather preserves the original input order of the results, while progress is tracked with a done-callback on each Task instead of as_completed.
End of explanation
"""
@asyncio.coroutine
def quote_many(num_quotes=1, conn_limit=20, progress=None, step=10):
if progress is None:
progress = widgets.IntProgress()
progress.max = num_quotes // step
ipydisplay(progress)
semaphore = asyncio.Semaphore(conn_limit)
# wrap coroutines with future
# For Python 3.4.4+, asyncio.ensure_future(...)
# will wrap coro as Task and keep input the same
# if it is already Future.
futures = [
asyncio.ensure_future(quote_with_lock(semaphore))
for i in range(num_quotes)
]
for ith, coro in enumerate(asyncio.as_completed(futures), 1):
if ith % step == 0:
progress.value += 1
yield from coro
quotes = [fut.result() for fut in futures]
return quotes
%%time
quotes = loop.run_until_complete(quote_many(2000, conn_limit=100, step=20))
"""
Explanation: Alternative way
End of explanation
"""
|
mattilyra/gensim
|
docs/notebooks/WMD_tutorial.ipynb
|
lgpl-2.1
|
from time import time
start_nb = time()
# Initialize logging.
import logging
logging.basicConfig(format='%(asctime)s : %(levelname)s : %(message)s')
sentence_obama = 'Obama speaks to the media in Illinois'
sentence_president = 'The president greets the press in Chicago'
sentence_obama = sentence_obama.lower().split()
sentence_president = sentence_president.lower().split()
"""
Explanation: Finding similar documents with Word2Vec and WMD
Word Mover's Distance is a promising new tool in machine learning that allows us to submit a query and return the most relevant documents. For example, in a blog post OpenTable use WMD on restaurant reviews. Using this approach, they are able to mine different aspects of the reviews. In part 2 of this tutorial, we show how you can use Gensim's WmdSimilarity to do something similar to what OpenTable did. Part 1 shows how you can compute the WMD distance between two documents using wmdistance. Part 1 is optional if you want to use WmdSimilarity, but it is also useful in its own right.
First, however, we go through the basics of what WMD is.
Word Mover's Distance basics
WMD is a method that allows us to assess the "distance" between two documents in a meaningful way, even when they have no words in common. It uses word2vec [4] vector embeddings of words. It has been shown to outperform many of the state-of-the-art methods in k-nearest neighbors classification [3].
WMD is illustrated below for two very similar sentences (illustration taken from Vlad Niculae's blog). The sentences have no words in common, but by matching the relevant words, WMD is able to accurately measure the (dis)similarity between the two sentences. The method also uses the bag-of-words representation of the documents (simply put, the word's frequencies in the documents), noted as $d$ in the figure below. The intuition behind the method is that we find the minimum "traveling distance" between documents, in other words the most efficient way to "move" the distribution of document 1 to the distribution of document 2.
<img src='https://vene.ro/images/wmd-obama.png' height='600' width='600'>
This method was introduced in the article "From Word Embeddings To Document Distances" by Matt Kusner et al. (link to PDF). It is inspired by the "Earth Mover's Distance", and employs a solver of the "transportation problem".
In this tutorial, we will learn how to use Gensim's WMD functionality, which consists of the wmdistance method for distance computation, and the WmdSimilarity class for corpus based similarity queries.
Note:
If you use this software, please consider citing [1], [2] and [3].
Running this notebook
You can download this iPython Notebook, and run it on your own computer, provided you have installed Gensim, PyEMD, NLTK, and downloaded the necessary data.
The notebook was run on an Ubuntu machine with an Intel core i7-4770 CPU 3.40GHz (8 cores) and 32 GB memory. Running the entire notebook on this machine takes about 3 minutes.
Part 1: Computing the Word Mover's Distance
To use WMD, we need some word embeddings first of all. You could train a word2vec (see tutorial here) model on some corpus, but we will start by downloading some pre-trained word2vec embeddings. Download the GoogleNews-vectors-negative300.bin.gz embeddings here (warning: 1.5 GB, file is not needed for part 2). Training your own embeddings can be beneficial, but to simplify this tutorial, we will be using pre-trained embeddings at first.
Let's take some sentences to compute the distance between.
End of explanation
"""
# Import and download stopwords from NLTK.
from nltk.corpus import stopwords
from nltk import download
download('stopwords') # Download stopwords list.
# Remove stopwords.
stop_words = stopwords.words('english')
sentence_obama = [w for w in sentence_obama if w not in stop_words]
sentence_president = [w for w in sentence_president if w not in stop_words]
"""
Explanation: These sentences have very similar content, and as such the WMD should be low. Before we compute the WMD, we want to remove stopwords ("the", "to", etc.), as these do not contribute a lot to the information in the sentences.
End of explanation
"""
start = time()
import os
from gensim.models import KeyedVectors
if not os.path.exists('/data/w2v_googlenews/GoogleNews-vectors-negative300.bin.gz'):
raise ValueError("SKIP: You need to download the google news model")
model = KeyedVectors.load_word2vec_format('/data/w2v_googlenews/GoogleNews-vectors-negative300.bin.gz', binary=True)
print('Cell took %.2f seconds to run.' % (time() - start))
"""
Explanation: Now, as mentioned earlier, we will be using some downloaded pre-trained embeddings. We load these into a Gensim Word2Vec model class. Note that the embeddings we have chosen here require a lot of memory.
End of explanation
"""
distance = model.wmdistance(sentence_obama, sentence_president)
print('distance = %.4f' % distance)
"""
Explanation: So let's compute WMD using the wmdistance method.
End of explanation
"""
sentence_orange = 'Oranges are my favorite fruit'
sentence_orange = sentence_orange.lower().split()
sentence_orange = [w for w in sentence_orange if w not in stop_words]
distance = model.wmdistance(sentence_obama, sentence_orange)
print('distance = %.4f' % distance)
"""
Explanation: Let's try the same thing with two completely unrelated sentences. Notice that the distance is larger.
End of explanation
"""
# Normalizing word2vec vectors.
start = time()
model.init_sims(replace=True) # Normalizes the vectors in the word2vec class.
distance = model.wmdistance(sentence_obama, sentence_president) # Compute WMD as normal.
print('Cell took %.2f seconds to run.' % (time() - start))
"""
Explanation: Normalizing word2vec vectors
When using the wmdistance method, it is beneficial to normalize the word2vec vectors first, so they all have equal length. To do this, simply call model.init_sims(replace=True) and Gensim will take care of that for you.
Usually, one measures the distance between two word2vec vectors using the cosine distance (see cosine similarity), which measures the angle between vectors. WMD, on the other hand, uses the Euclidean distance. The Euclidean distance between two vectors might be large because their lengths differ, but the cosine distance is small because the angle between them is small; we can mitigate some of this by normalizing the vectors.
Note that normalizing the vectors can take some time, especially if you have a large vocabulary and/or large vectors.
Usage is illustrated in the example below. It just so happens that the vectors we have downloaded are already normalized, so it won't do any difference in this case.
End of explanation
"""
# Pre-processing a document.
from nltk import word_tokenize
download('punkt') # Download data for tokenizer.
def preprocess(doc):
doc = doc.lower() # Lower the text.
doc = word_tokenize(doc) # Split into words.
doc = [w for w in doc if not w in stop_words] # Remove stopwords.
doc = [w for w in doc if w.isalpha()] # Remove numbers and punctuation.
return doc
start = time()
import json
from smart_open import smart_open
# Business IDs of the restaurants.
ids = ['4bEjOyTaDG24SY5TxsaUNQ', '2e2e7WgqU1BnpxmQL5jbfw', 'zt1TpTuJ6y9n551sw9TaEg',
'Xhg93cMdemu5pAMkDoEdtQ', 'sIyHTizqAiGu12XMLX3N3g', 'YNQgak-ZLtYJQxlDwN-qIg']
w2v_corpus = [] # Documents to train word2vec on (all 6 restaurants).
wmd_corpus = [] # Documents to run queries against (only one restaurant).
documents = [] # wmd_corpus, with no pre-processing (so we can see the original documents).
with smart_open('/data/yelp_academic_dataset_review.json', 'rb') as data_file:
for line in data_file:
json_line = json.loads(line)
if json_line['business_id'] not in ids:
# Not one of the 6 restaurants.
continue
# Pre-process document.
text = json_line['text'] # Extract text from JSON object.
text = preprocess(text)
# Add to corpus for training Word2Vec.
w2v_corpus.append(text)
if json_line['business_id'] == ids[0]:
# Add to corpus for similarity queries.
wmd_corpus.append(text)
documents.append(json_line['text'])
print('Cell took %.2f seconds to run.' % (time() - start))
"""
Explanation: Part 2: Similarity queries using WmdSimilarity
You can use WMD to get the most similar documents to a query, using the WmdSimilarity class. Its interface is similar to what is described in the Similarity Queries Gensim tutorial.
Important note:
WMD is a measure of distance. The similarities in WmdSimilarity are simply the negative distance. Be careful not to confuse distances and similarities. Two similar documents will have a high similarity score and a small distance; two very different documents will have low similarity score, and a large distance.
Yelp data
Let's try similarity queries using some real world data. For that we'll be using Yelp reviews, available at http://www.yelp.com/dataset_challenge. Specifically, we will be using reviews of a single restaurant, namely the Mon Ami Gabi.
To get the Yelp data, you need to register by name and email address. The data is 775 MB.
This time around, we are going to train the Word2Vec embeddings on the data ourselves. One restaurant is not enough to train Word2Vec properly, so we use 6 restaurants for that, but only run queries against one of them. In addition to the Mon Ami Gabi, mentioned above, we will be using:
Earl of Sandwich.
Wicked Spoon.
Serendipity 3.
Bacchanal Buffet.
The Buffet.
The restaurants we chose were those with the highest number of reviews in the Yelp dataset. Incidentally, they all are on the Las Vegas Boulevard. The corpus we trained Word2Vec on has 18957 documents (reviews), and the corpus we used for WmdSimilarity has 4137 documents.
Below a JSON file with Yelp reviews is read line by line, the text is extracted, tokenized, and stopwords and punctuation are removed.
End of explanation
"""
from matplotlib import pyplot as plt
%matplotlib inline
# Document lengths.
lens = [len(doc) for doc in wmd_corpus]
# Plot.
plt.rc('figure', figsize=(8,6))
plt.rc('font', size=14)
plt.rc('lines', linewidth=2)
plt.rc('axes', color_cycle=('#377eb8','#e41a1c','#4daf4a',
'#984ea3','#ff7f00','#ffff33'))
# Histogram.
plt.hist(lens, bins=20)
plt.hold(True)
# Average length.
avg_len = sum(lens) / float(len(lens))
plt.axvline(avg_len, color='#e41a1c')
plt.hold(False)
plt.title('Histogram of document lengths.')
plt.xlabel('Length')
plt.text(100, 800, 'mean = %.2f' % avg_len)
plt.show()
"""
Explanation: Below is a histogram of document lengths, which also shows the average document length. Note that these are the pre-processed documents, meaning stopwords are removed, punctuation is removed, etc. Document lengths have a high impact on the running time of WMD, so when comparing running times with this experiment, the number of documents in the query corpus (about 4000) and the length of the documents (about 62 words on average) should be taken into account.
End of explanation
"""
# Train Word2Vec on all the restaurants.
from gensim.models import Word2Vec
model = Word2Vec(w2v_corpus, workers=3, size=100)
# Initialize WmdSimilarity.
from gensim.similarities import WmdSimilarity
num_best = 10
instance = WmdSimilarity(wmd_corpus, model, num_best=10)
"""
Explanation: Now we want to initialize the similarity class with a corpus and a word2vec model (which provides the embeddings and the wmdistance method itself).
End of explanation
"""
start = time()
sent = 'Very good, you should seat outdoor.'
query = preprocess(sent)
sims = instance[query] # A query is simply a "look-up" in the similarity class.
print('Cell took %.2f seconds to run.' % (time() - start))
"""
Explanation: The num_best parameter decides how many results the queries return. Now let's try making a query. The output is a list of indices and similarities of documents in the corpus, sorted by similarity.
Note that the output format is slightly different when num_best is None (i.e. not assigned). In this case, you get an array of similarities, corresponding to each of the documents in the corpus.
The query below is taken directly from one of the reviews in the corpus. Let's see if there are other reviews that are similar to this one.
End of explanation
"""
# Print the query and the retrieved documents, together with their similarities.
print('Query:')
print(sent)
for i in range(num_best):
    print()
    print('sim = %.4f' % sims[i][1])
    print(documents[sims[i][0]])
"""
Explanation: The query and the most similar documents, together with the similarities, are printed below. We see that the retrieved documents are discussing the same thing as the query, although using different words. The query talks about getting a seat "outdoor", while the results talk about sitting "outside", and one of them says the restaurant has a "nice view".
End of explanation
"""
start = time()
sent = 'I felt that the prices were extremely reasonable for the Strip'
query = preprocess(sent)
sims = instance[query] # A query is simply a "look-up" in the similarity class.
print('Query:')
print(sent)
for i in range(num_best):
    print()
    print('sim = %.4f' % sims[i][1])
    print(documents[sims[i][0]])
print('\nCell took %.2f seconds to run.' % (time() - start))
"""
Explanation: Let's try a different query, also taken directly from one of the reviews in the corpus.
End of explanation
"""
print('Notebook took %.2f seconds to run.' % (time() - start_nb))
"""
Explanation: This time around, the results are more straight forward; the retrieved documents basically contain the same words as the query.
WmdSimilarity normalizes the word embeddings by default (using init_sims(), as explained before), but you can overwrite this behaviour by calling WmdSimilarity with normalize_w2v_and_replace=False.
End of explanation
"""
|
regardscitoyens/consultation_an
|
exploitation/analyse_quanti_theme5.ipynb
|
agpl-3.0
|
def loadContributions(file, withsexe=False):
contributions = pd.read_json(path_or_buf=file, orient="columns")
rows = [];
rindex = [];
for i in range(0, contributions.shape[0]):
row = {};
row['id'] = contributions['id'][i]
rindex.append(contributions['id'][i])
if (withsexe):
if (contributions['sexe'][i] == 'Homme'):
row['sexe'] = 0
else:
row['sexe'] = 1
for question in contributions['questions'][i]:
if (question.get('Reponse')) and question['texte'][0:5] != 'Conna' and question['titreQuestion'][-2:] != '34':
row[question['titreQuestion']+' : '+question['texte']] = 1
for criteres in question.get('Reponse'):
# print(criteres['critere'].keys())
row[question['titreQuestion']+'. (Réponse) '+question['texte']+' -> '+str(criteres['critere'].get('texte'))] = 1
rows.append(row)
df = pd.DataFrame(data=rows)
df.fillna(0, inplace=True)
return df
df = loadContributions('../data/EGALITE5.brut.json', True)
df.fillna(0, inplace=True)
df.index = df['id']
#df.to_csv('consultation_an.csv', format='%d')
#df.columns = ['Q_' + str(col+1) for col in range(len(df.columns) - 2)] + ['id' , 'sexe']
df.head()
"""
Explanation: Reading the data
End of explanation
"""
from sklearn.cluster import KMeans
from sklearn import metrics
import numpy as np
X = df.drop('id', axis=1).values
def train_kmeans(nb_clusters, X):
kmeans = KMeans(n_clusters=nb_clusters, random_state=0).fit(X)
return kmeans
#print(kmeans.predict(X))
#kmeans.cluster_centers_
def select_nb_clusters():
perfs = {};
for nbclust in range(2,10):
kmeans_model = train_kmeans(nbclust, X);
labels = kmeans_model.labels_
# from http://scikit-learn.org/stable/modules/clustering.html#calinski-harabaz-index
# we are in an unsupervised model. cannot get better!
# perfs[nbclust] = metrics.calinski_harabaz_score(X, labels);
perfs[nbclust] = metrics.silhouette_score(X, labels);
print(perfs);
return perfs;
df['clusterindex'] = train_kmeans(4, X).predict(X)
#df
perfs = select_nb_clusters();
# result :
# {2: 341.07570462155348, 3: 227.39963334619881, 4: 186.90438345452918, 5: 151.03979976346525, 6: 129.11214073405731, 7: 112.37235520885432, 8: 102.35994869157568, 9: 93.848315820675438}
optimal_nb_clusters = max(perfs, key=perfs.get);
print("optimal_nb_clusters" , optimal_nb_clusters);
"""
Explanation: Build clustering model
Here we build a k-means model and select the "optimal" number of clusters.
Here we see that the optimal number of clusters is 2.
End of explanation
"""
km_model = train_kmeans(optimal_nb_clusters, X);
df['clusterindex'] = km_model.predict(X)
lGroupBy = df.groupby(['clusterindex']).mean();
cluster_profile_counts = df.groupby(['clusterindex']).count();
cluster_profile_means = df.groupby(['clusterindex']).mean();
global_counts = df.count()
global_means = df.mean()
cluster_profile_counts.head(10)
df_profiles = pd.DataFrame();
nbclusters = cluster_profile_means.shape[0]
df_profiles['clusterindex'] = range(nbclusters)
for col in cluster_profile_means.columns:
if(col != "clusterindex"):
df_profiles[col] = np.zeros(nbclusters)
for cluster in range(nbclusters):
df_profiles[col][cluster] = cluster_profile_means[col][cluster]
# row.append(df[col].mean());
df_profiles.head()
#print(df_profiles.columns)
intereseting_columns = {};
for col in df_profiles.columns:
if(col != "clusterindex"):
global_mean = df[col].mean()
diff_means_global = abs(df_profiles[col] - global_mean). max();
# print(col , diff_means_global)
if(diff_means_global > 0.05):
intereseting_columns[col] = True
#print(intereseting_columns)
%matplotlib inline
import matplotlib
import numpy as np
import matplotlib.pyplot as plt
"""
Explanation: Build the optimal model and apply it
End of explanation
"""
interesting = list(intereseting_columns.keys())
df_profiles_sorted = df_profiles[interesting].sort_index(axis=1)
df_profiles_sorted.plot.bar(figsize =(1, 1))
df_profiles_sorted.plot.bar(figsize =(16, 8), legend=False)
df_profiles_sorted.T
#df_profiles.sort_index(axis=1).T
"""
Explanation: Cluster Profiles
Here, the optimal model has two clusters: cluster 0 with 399 cases and cluster 1 with 537 cases.
As this model is based on binary inputs, the best description of the clusters is the distribution of zeros and ones for each input (question).
The figure below gives the cluster profiles of this model, cluster 0 on the left and cluster 1 on the right. The questions involved are different (highest bars).
End of explanation
"""
|
tanghaibao/goatools
|
notebooks/background_genes_ncbi.ipynb
|
bsd-2-clause
|
from goatools.cli.ncbi_gene_results_to_python import ncbi_tsv_to_py
ncbi_tsv = 'gene_result.txt'
output_py = 'genes_ncbi_10090_proteincoding.py'
ncbi_tsv_to_py(ncbi_tsv, output_py)
"""
Explanation: How to download background genes from NCBI
Example
1) Download mouse (TaxID=10090) protein-coding genes
Query NCBI Gene:
"10090"[Taxonomy ID] AND alive[property] AND genetype protein coding[Properties]
Click "Send to:"
Select "File"
Select "Create File" button
The default name of the tsv file is gene_result.txt
Note: To download all mouse DNA items:
"10090"[Taxonomy ID] AND alive[property]
2) Convert NCBI Gene tab separated values (tsv) file to a Python module
Use the command line or a Python script to convert a NCBI Gene tsv file to a Python module
2a) Run a script from the command line
$ scripts/ncbi_gene_results_to_python.py gene_result.txt -o genes_ncbi_10090_proteincoding.py
26,386 lines READ: gene_result.txt
26,376 geneids WROTE: genes_ncbi_10090_proteincoding.py
2b) Run a function from inside your Python script
End of explanation
"""
from genes_ncbi_10090_proteincoding import GENEID2NT
"""
Explanation: 3) Explore NCBI gene data
3a) Import NCBI data from new NCBI gene Python module
End of explanation
"""
# Get the data for one gene
nt_gene = next(iter(sorted(GENEID2NT.values())))
# Print the field name and value for all fields for one gene
for key, val in sorted(nt_gene._asdict().items()):
print('{:15} {}'.format(key, val))
"""
Explanation: 3b) Examine fields stored in a namedtuple for a gene
End of explanation
"""
nts = [nt for nt in GENEID2NT.values() if nt.start_position_on_the_genomic_accession != '']
nts = sorted(nts, key=lambda nt: nt.GeneID)
print('{N:,} genes have specific genomic basepair locations'.format(N=len(nts)))
"""
Explanation: 3c) Get genes which have specific genomic locations
End of explanation
"""
print('GeneID Symbol Description')
print('------ ------- --------------------------------------------------------')
for nt_gene in nts[:20]:
print('{GeneID:6} {Symbol:8} {description}'.format(**nt_gene._asdict()))
"""
Explanation: 3d) Print GeneID, Symbol, and description of some genes
End of explanation
"""
sym2nt = {nt.Symbol:nt for nt in nts}
print('{N:,} gene symbols'.format(N=len(sym2nt)))
assert len(nts) == len(sym2nt)
"""
Explanation: 3e) Create a symbol2nt dict
End of explanation
"""
# Choose a specific gene
symbol = 'Ace'
# Print NCBI information for the chosen gene
for field, value in sorted(sym2nt[symbol]._asdict().items()):
print('{FLD:15} {VAL:}'.format(FLD=field, VAL=value))
"""
Explanation: 3f) Print NCBI information for a specific gene
End of explanation
"""
|
mcamack/Jupyter-Notebooks
|
tensorflow/tensorflow101.ipynb
|
apache-2.0
|
import tensorflow as tf
"""
Explanation: TensorFlow
References:
* TensorFlow Getting Started
* Tensor Ranks, Shapes, and Types
Overview
TensorFlow has multiple APIs:
* TensorFlow Core: lowest level, complete control, fine tuning capabilities
* Higher Level APIs: easier to learn, abstracted. (example: tf.estimator helps manage data sets, estimators, training, and inference)
Tensors
The Tensor is the central unit of data consisting of a set of values shaped into an array of any number of dimensions (rank)
TensorFlow Core
End of explanation
"""
node1 = tf.constant(3.0, dtype=tf.float32)
node2 = tf.constant(4.0) # also tf.float32 implicitly
print(node1, node2)
"""
Explanation: Computational Graph
TensorFlow programs consist of 2 discrete sections:
1. Building the computational graph
2. Running the computational graph
The computational graph is a series of TF operations arranged into a graph of nodes. Each node takes zero or more tensors as inputs and produces a tensor as an output.
Constants are a type of node which takes no inputs and will output an internally stored value. Values are initialized with tf.constant and can never change. Printing the nodes gives tensor node metadata, not the values of the nodes.
End of explanation
"""
sess = tf.Session()
print(sess.run([node1, node2]))
"""
Explanation: Session
To actually evaluate nodes, the computational graph must be run in a session. The session encapsulates the control and state of the TensorFlow runtime. Below, we create a Session object and invoke its run method to evaluate node1 and node2.
End of explanation
"""
node3 = tf.add(node1, node2)
print(node3)
print(sess.run(node3))
"""
Explanation: More complicated computations can be performed by combining Tensor nodes with Operation nodes. Use the tf.add node to mathematically add node1 and node2:
End of explanation
"""
a = tf.placeholder(tf.float32)
b = tf.placeholder(tf.float32)
adder_node = a + b # + provides a shortcut for tf.add(a, b)
print(sess.run(adder_node, {a: 3, b: 4.5}))
print(sess.run(adder_node, {a: [1, 3], b: [2, 4]}))
"""
Explanation: Placeholders
TensorBoard will show an uninteresting, static graph at this point. By adding external inputs (Placeholders) a dynamic value can be added later:
End of explanation
"""
add_and_triple = adder_node * 3.
print(sess.run(add_and_triple, {a: 3, b: 4.5}))
"""
Explanation: Make the graph more complex by adding another operation:
End of explanation
"""
W = tf.Variable([.3], dtype=tf.float32)
b = tf.Variable([-.3], dtype=tf.float32)
x = tf.placeholder(tf.float32)
linear_model = W * x + b
"""
Explanation: Variables
To make the model trainable, add Variables for trainable parameters:
End of explanation
"""
init = tf.global_variables_initializer()
sess.run(init)
"""
Explanation: Initialize all variables with a special operation. Until this point, they are uninitialized:
End of explanation
"""
print(sess.run(linear_model, {x: [1, 2, 3, 4]}))
"""
Explanation: Because x is a placeholder, linear_model can be evaluated for several x values simultaneously:
End of explanation
"""
y = tf.placeholder(tf.float32)
squared_deltas = tf.square(linear_model - y)
loss = tf.reduce_sum(squared_deltas)
print(sess.run(loss, {x: [1, 2, 3, 4], y: [0, -1, -2, -3]}))
"""
Explanation: Loss Function
The loss function measures how far apart the current model is from the actual data. Use a sum of squared error function to see how far off 'y' is from what is produced from 'linear_model=W * x + b' run with x=[1,2,3,4]
End of explanation
"""
fixW = tf.assign(W, [-1.])
fixb = tf.assign(b, [1.])
sess.run([fixW, fixb])
print(sess.run(loss, {x: [1, 2, 3, 4], y: [0, -1, -2, -3]}))
"""
Explanation: The values of W and b need to be updated in order to get a perfect fit. We can manually figure out what they should be in order to get the right y output (with a loss of 0):
End of explanation
"""
optimizer = tf.train.GradientDescentOptimizer(0.01)
train = optimizer.minimize(loss)
sess.run(init) # reset values to incorrect defaults.
for i in range(1000):
sess.run(train, {x: [1, 2, 3, 4], y: [0, -1, -2, -3]})
print(sess.run([W, b]))
"""
Explanation: tf.train API
TensorFlow optimizers will modify each variable to automatically minimize the loss function. Gradient descent is the simplest optimizer. It modifies each variable by the magnitude of the derivative of the loss w.r.t. that variable.
End of explanation
"""
# Model parameters
W = tf.Variable([.3], dtype=tf.float32)
b = tf.Variable([-.3], dtype=tf.float32)
# Model input and output
x = tf.placeholder(tf.float32)
linear_model = W * x + b
y = tf.placeholder(tf.float32)
# loss
loss = tf.reduce_sum(tf.square(linear_model - y)) # sum of the squares
# optimizer
optimizer = tf.train.GradientDescentOptimizer(0.01)
train = optimizer.minimize(loss)
# training data
x_train = [1, 2, 3, 4]
y_train = [0, -1, -2, -3]
# training loop
init = tf.global_variables_initializer()
sess = tf.Session()
sess.run(init) # reset values to wrong
for i in range(1000):
sess.run(train, {x: x_train, y: y_train})
# evaluate training accuracy
curr_W, curr_b, curr_loss = sess.run([W, b, loss], {x: x_train, y: y_train})
print("W: %s b: %s loss: %s"%(curr_W, curr_b, curr_loss))
"""
Explanation: The above values are the final model parameters which minimize the loss function!
Complete Program
Everything done above is compiled below:
End of explanation
"""
g = tf.Graph()
with g.as_default():
a = tf.placeholder(tf.float32, name="node1")
b = tf.placeholder(tf.float32, name="node2")
c = a + b
tf.summary.FileWriter("logs", g).close()
#from this notebook's directory run > tensorboard --logdir=logs
#then open TensorBoard at: http://localhost:6006/#graphs
"""
Explanation: TensorBoard Graphs
The following produces a simple TensorBoard graph. It must be run from the containing directory and then can be viewed at the local web browser address below
* Reference: Viewing TensorFlow Graphs in Jupyter Notebooks
End of explanation
"""
|
YuriyGuts/kaggle-quora-question-pairs
|
notebooks/feature-oofp-nn-mlp-with-magic.ipynb
|
mit
|
from pygoose import *
import gc
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import StratifiedKFold
from sklearn.metrics import *
from keras import backend as K
from keras.models import Model, Sequential
from keras.layers import *
from keras.optimizers import *
from keras.callbacks import EarlyStopping, ModelCheckpoint
"""
Explanation: Feature: Out-Of-Fold Predictions from a Multi-Layer Perceptron (+Magic Inputs)
In addition to the MLP architecture, we'll append some of the leaky features to the intermediate feature layer.
<img src="assets/mlp-with-magic.png" alt="Network Architecture" style="height: 900px;" />
Imports
This utility package imports numpy, pandas, matplotlib and a helper kg module into the root namespace.
End of explanation
"""
project = kg.Project.discover()
"""
Explanation: Config
Automatically discover the paths to various data folders and compose the project structure.
End of explanation
"""
feature_list_id = 'oofp_nn_mlp_with_magic'
"""
Explanation: Identifier for storing these features on disk and referring to them later.
End of explanation
"""
RANDOM_SEED = 42
np.random.seed(RANDOM_SEED)
"""
Explanation: Make subsequent NN runs reproducible.
End of explanation
"""
embedding_matrix = kg.io.load(project.aux_dir + 'fasttext_vocab_embedding_matrix.pickle')
"""
Explanation: Read data
Word embedding lookup matrix.
End of explanation
"""
X_train_q1 = kg.io.load(project.preprocessed_data_dir + 'sequences_q1_fasttext_train.pickle')
X_train_q2 = kg.io.load(project.preprocessed_data_dir + 'sequences_q2_fasttext_train.pickle')
X_test_q1 = kg.io.load(project.preprocessed_data_dir + 'sequences_q1_fasttext_test.pickle')
X_test_q2 = kg.io.load(project.preprocessed_data_dir + 'sequences_q2_fasttext_test.pickle')
y_train = kg.io.load(project.features_dir + 'y_train.pickle')
"""
Explanation: Padded sequences of word indices for every question.
End of explanation
"""
magic_feature_lists = [
'magic_frequencies',
'magic_cooccurrence_matrix',
]
X_train_magic, X_test_magic, _ = project.load_feature_lists(magic_feature_lists)
X_train_magic = X_train_magic.values
X_test_magic = X_test_magic.values
scaler = StandardScaler()
scaler.fit(np.vstack([X_train_magic, X_test_magic]))
X_train_magic = scaler.transform(X_train_magic)
X_test_magic = scaler.transform(X_test_magic)
"""
Explanation: Magic features.
End of explanation
"""
EMBEDDING_DIM = embedding_matrix.shape[-1]
VOCAB_LENGTH = embedding_matrix.shape[0]
MAX_SEQUENCE_LENGTH = X_train_q1.shape[-1]
print(EMBEDDING_DIM, VOCAB_LENGTH, MAX_SEQUENCE_LENGTH)
"""
Explanation: Word embedding properties.
End of explanation
"""
def create_model_question_branch():
input_q = Input(shape=(MAX_SEQUENCE_LENGTH,), dtype='int32')
embedding_q = Embedding(
VOCAB_LENGTH,
EMBEDDING_DIM,
weights=[embedding_matrix],
input_length=MAX_SEQUENCE_LENGTH,
trainable=False,
)(input_q)
timedist_q = TimeDistributed(Dense(
EMBEDDING_DIM,
activation='relu',
))(embedding_q)
lambda_q = Lambda(
lambda x: K.max(x, axis=1),
output_shape=(EMBEDDING_DIM, )
)(timedist_q)
output_q = lambda_q
return input_q, output_q
def create_model(params):
q1_input, q1_output = create_model_question_branch()
q2_input, q2_output = create_model_question_branch()
magic_input = Input(shape=(X_train_magic.shape[-1], ))
merged_inputs = concatenate([q1_output, q2_output, magic_input])
dense_1 = Dense(params['num_dense_1'])(merged_inputs)
bn_1 = BatchNormalization()(dense_1)
relu_1 = Activation('relu')(bn_1)
dense_2 = Dense(params['num_dense_2'])(relu_1)
bn_2 = BatchNormalization()(dense_2)
relu_2 = Activation('relu')(bn_2)
dense_3 = Dense(params['num_dense_3'])(relu_2)
bn_3 = BatchNormalization()(dense_3)
relu_3 = Activation('relu')(bn_3)
dense_4 = Dense(params['num_dense_4'])(relu_3)
bn_4 = BatchNormalization()(dense_4)
relu_4 = Activation('relu')(bn_4)
bn_final = BatchNormalization()(relu_4)
output = Dense(1, activation='sigmoid')(bn_final)
model = Model(
inputs=[q1_input, q2_input, magic_input],
outputs=output,
)
model.compile(
loss='binary_crossentropy',
optimizer=Adam(lr=0.01),
metrics=['accuracy']
)
return model
def predict(model, X_q1, X_q2, X_magic):
"""
Mirror the pairs, compute two separate predictions, and average them.
"""
y1 = model.predict([X_q1, X_q2, X_magic], batch_size=1024, verbose=1).reshape(-1)
y2 = model.predict([X_q2, X_q1, X_magic], batch_size=1024, verbose=1).reshape(-1)
return (y1 + y2) / 2
"""
Explanation: Define models
End of explanation
"""
NUM_FOLDS = 5
kfold = StratifiedKFold(
n_splits=NUM_FOLDS,
shuffle=True,
random_state=RANDOM_SEED
)
"""
Explanation: Partition the data
End of explanation
"""
y_train_oofp = np.zeros_like(y_train, dtype='float64')
y_test_oofp = np.zeros((len(X_test_q1), NUM_FOLDS))
"""
Explanation: Create placeholders for out-of-fold predictions.
End of explanation
"""
BATCH_SIZE = 2048
MAX_EPOCHS = 200
model_params = {
'num_dense_1': 400,
'num_dense_2': 200,
'num_dense_3': 400,
'num_dense_4': 100,
}
"""
Explanation: Define hyperparameters
End of explanation
"""
model_checkpoint_path = project.temp_dir + 'fold-checkpoint-' + feature_list_id + '.h5'
"""
Explanation: The path where the best weights of the current model will be saved.
End of explanation
"""
%%time
# Iterate through folds.
for fold_num, (ix_train, ix_val) in enumerate(kfold.split(X_train_q1, y_train)):
# Augment the training set by mirroring the pairs.
X_fold_train_q1 = np.vstack([X_train_q1[ix_train], X_train_q2[ix_train]])
X_fold_train_q2 = np.vstack([X_train_q2[ix_train], X_train_q1[ix_train]])
X_fold_train_magic = np.vstack([X_train_magic[ix_train], X_train_magic[ix_train]])
X_fold_val_q1 = np.vstack([X_train_q1[ix_val], X_train_q2[ix_val]])
X_fold_val_q2 = np.vstack([X_train_q2[ix_val], X_train_q1[ix_val]])
X_fold_val_magic = np.vstack([X_train_magic[ix_val], X_train_magic[ix_val]])
# Ground truth should also be "mirrored".
y_fold_train = np.concatenate([y_train[ix_train], y_train[ix_train]])
y_fold_val = np.concatenate([y_train[ix_val], y_train[ix_val]])
print()
print(f'Fitting fold {fold_num + 1} of {kfold.n_splits}')
print()
# Compile a new model.
model = create_model(model_params)
# Train.
model.fit(
[X_fold_train_q1, X_fold_train_q2, X_fold_train_magic], y_fold_train,
validation_data=([X_fold_val_q1, X_fold_val_q2, X_fold_val_magic], y_fold_val),
batch_size=BATCH_SIZE,
epochs=MAX_EPOCHS,
verbose=1,
callbacks=[
# Stop training when the validation loss stops improving.
EarlyStopping(
monitor='val_loss',
min_delta=0.001,
patience=3,
verbose=1,
mode='auto',
),
# Save the weights of the best epoch.
ModelCheckpoint(
model_checkpoint_path,
monitor='val_loss',
save_best_only=True,
verbose=2,
),
],
)
# Restore the best epoch.
model.load_weights(model_checkpoint_path)
# Compute out-of-fold predictions.
y_train_oofp[ix_val] = predict(model, X_train_q1[ix_val], X_train_q2[ix_val], X_train_magic[ix_val])
y_test_oofp[:, fold_num] = predict(model, X_test_q1, X_test_q2, X_test_magic)
# Clear GPU memory.
K.clear_session()
del X_fold_train_q1, X_fold_train_q2, X_fold_train_magic
del X_fold_val_q1, X_fold_val_q2, X_fold_val_magic
del model
gc.collect()
cv_score = log_loss(y_train, y_train_oofp)
print('CV score:', cv_score)
"""
Explanation: Fit the folds and compute out-of-fold predictions
End of explanation
"""
feature_names = [feature_list_id]
features_train = y_train_oofp.reshape((-1, 1))
features_test = np.mean(y_test_oofp, axis=1).reshape((-1, 1))
project.save_features(features_train, features_test, feature_names, feature_list_id)
"""
Explanation: Save features
End of explanation
"""
pd.DataFrame(features_test).plot.hist()
"""
Explanation: Explore
End of explanation
"""
|
jmhummel/IMDb-predictive-analytics
|
src/notebook.ipynb
|
mit
|
# data analysis and wrangling
import pandas as pd
import numpy as np
import random as rnd
from scipy.stats import truncnorm
# visualization
import seaborn as sns
import matplotlib.pyplot as plt
%matplotlib inline
# machine learning
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC, LinearSVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.linear_model import Perceptron
from sklearn.linear_model import SGDClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
"""
Explanation: IMDB Predictive Analytics
This notebook explores using data science techniques on a data set of 5000+ movies, and predicting whether a movie will be highly rated on IMDb.
The objective of this notebook is to follow a step-by-step workflow, explaining each step and rationale for every decision we take during solution development.
This notebook is adapted from "Titanic Data Science Solutions" by Manav Sehgal
Workflow stages
This workflow goes through seven stages.
Question or problem definition.
Acquire training and testing data.
Wrangle, prepare, cleanse the data.
Analyze, identify patterns, and explore the data.
Model, predict and solve the problem.
Visualize, report, and present the problem solving steps and final solution.
Supply or submit the results.
The workflow indicates the general sequence in which each stage may follow the other. However, there are use cases with exceptions.
We may combine multiple workflow stages. We may analyze by visualizing data.
Perform a stage earlier than indicated. We may analyze data before and after wrangling.
Perform a stage multiple times in our workflow. Visualize stage may be used multiple times.
Question and problem definition
The original data set used in this notebook can be found here at Kaggle.
Given a training set of samples listing movies and their IMDb scores, can our model determine, for a test dataset that does not contain the scores, whether each movie scored highly or not?
Workflow goals
The data science solutions workflow solves for seven major goals.
Classifying. We may want to classify or categorize our samples. We may also want to understand the implications or correlation of different classes with our solution goal.
Correlating. One can approach the problem based on available features within the training dataset. Which features within the dataset contribute significantly to our solution goal? Statistically speaking, is there a correlation between a feature and the solution goal? As the feature values change, does the solution state change as well, and vice versa? This can be tested both for numerical and categorical features in the given dataset. We may also want to determine correlation among features other than the IMDb score for subsequent goals and workflow stages. Correlating certain features may help in creating, completing, or correcting features.
Converting. For modeling stage, one needs to prepare the data. Depending on the choice of model algorithm one may require all features to be converted to numerical equivalent values. So for instance converting text categorical values to numeric values.
Completing. Data preparation may also require us to estimate any missing values within a feature. Model algorithms may work best when there are no missing values.
Correcting. We may also analyze the given training dataset for errors or possibly inaccurate values within features and try to correct these values or exclude the samples containing the errors. One way to do this is to detect any outliers among our samples or features. We may also completely discard a feature if it is not contributing to the analysis or may significantly skew the results.
Creating. Can we create new features based on an existing feature or a set of features, such that the new feature follows the correlation, conversion, completeness goals.
Charting. How to select the right visualization plots and charts depending on nature of the data and the solution goals. A good start is to read the Tableau paper on Which chart or graph is right for you?.
End of explanation
"""
df = pd.read_csv('../input/movie_metadata.csv')
# train_df, test_df = train_test_split(df, test_size = 0.2)
# test_actual = test_df['imdb_score']
# test_df = test_df.drop('imdb_score', axis=1)
# combine = [train_df, test_df]
"""
Explanation: Acquire data
The Python Pandas packages helps us work with our datasets. We start by acquiring the datasets into a Pandas DataFrame.
~~We will partition off 80% as our training data and 20% of the data as our test data. We also combine these datasets to run certain operations on both datasets together.~~
Let's move the partitioning to after the data wrangling. It makes the code simpler and doesn't make a real difference; it also removes any differences in the banding portions between runs.
End of explanation
"""
print(df.columns.values)
"""
Explanation: Analyze by describing data
Pandas also helps describe the datasets, answering the following questions early in our project.
Which features are available in the dataset?
Noting the feature names for directly manipulating or analyzing these. These feature names are described on the Kaggle page here.
End of explanation
"""
pd.set_option('display.max_columns', 50)
# preview the data
df.head()
incomplete = df.columns[pd.isnull(df).any()].tolist()
df[incomplete].info()
"""
Explanation: Which features are categorical?
Color, Director name, Actor 1 name, Actor 2 name, Actor 3 name, Genres, Language, Country, Content Rating, Movie title, Plot keywords, Movie IMDb link
Which features are numerical?
Number of critics for reviews, Duration, Director Facebook likes, Actor 1 Facebook likes, Actor 2 Facebook likes, Actor 3 Facebook likes, Gross, Number of voted users, Cast total Facebook likes, Number of faces in poster, Number of users for reviews, Budget, Title year, IMDb score, Aspect ratio, Movie Facebook likes
End of explanation
"""
df.info()
df.describe()
"""
Explanation: Which features contain blank, null or empty values?
These will require correcting.
color
director_name
num_critic_for_reviews
duration
director_facebook_likes
actor_3_facebook_likes
actor_2_name
actor_1_facebook_likes
gross
actor_1_name
actor_3_name
facenumber_in_poster
plot_keywords
num_user_for_reviews
language
country
content_rating
budget
title_year
actor_2_facebook_likes
aspect_ratio
What are the data types for various features?
Helping us during converting goal.
Twelve features are floats.
Nine features are strings (object).
End of explanation
"""
df.describe(include=['O'])
"""
Explanation: What is the distribution of categorical features?
End of explanation
"""
df.loc[ df['imdb_score'] < 7.0, 'imdb_score'] = 0
df.loc[ df['imdb_score'] >= 7.0, 'imdb_score'] = 1
df.head()
"""
Explanation: Transformation of IMDb score
Let's simplify this problem into a binary classification/regression. Let us treat all movies with an IMDb score of 7.0 or higher as "good" (with a value of '1') and all below as "bad" (with a value of '0').
End of explanation
"""
df[['content_rating', 'imdb_score']].groupby(['content_rating'], as_index=False).mean().sort_values(by='imdb_score', ascending=False)
df[["color", "imdb_score"]].groupby(['color'], as_index=False).mean().sort_values(by='imdb_score', ascending=False)
df[["director_name", "imdb_score"]].groupby(['director_name'], as_index=False).mean().sort_values(by='imdb_score', ascending=False)
df[["country", "imdb_score"]].groupby(['country'], as_index=False).mean().sort_values(by='imdb_score', ascending=False)
"""
Explanation: Assumptions based on data analysis
We arrive at the following assumptions based on the data analysis done so far. We may validate these assumptions further before taking appropriate actions.
Correlating.
We want to know how well each feature correlates with the IMDb score. We want to do this early in our project and match these quick correlations with modelled correlations later in the project.
Completing.
Correcting.
Creating.
Classifying.
Analyze by pivoting features
To confirm some of our observations and assumptions, we can quickly analyze our feature correlations by pivoting features against each other. At this stage it makes the most sense for categorical features, so below we pivot content_rating, color, director_name and country against the (now binary) IMDb score.
Each pivot shows, per category, the fraction of movies that scored highly; categories with consistently higher or lower means are candidates to keep, band, or engineer into new features.
End of explanation
"""
g = sns.FacetGrid(df, col='imdb_score')
g.map(plt.hist, 'title_year', bins=20)
"""
Explanation: Analyze by visualizing data
Now we can continue confirming some of our assumptions using visualizations for analyzing the data.
Correlating numerical features
Let us start by understanding correlations between numerical features and our solution goal (IMDb score).
Observations.
Decisions.
End of explanation
"""
# grid = sns.FacetGrid(df, col='Pclass', hue='Survived')
grid = sns.FacetGrid(df, col='imdb_score', row='color', size=2.2, aspect=1.6)
grid.map(plt.hist, 'title_year', alpha=.5, bins=20)
grid.add_legend();
"""
Explanation: Correlating numerical and ordinal features
We can combine multiple features for identifying correlations using a single plot. This can be done with numerical and categorical features which have numeric values.
Observations.
Decisions.
End of explanation
"""
# grid = sns.FacetGrid(df, col='Embarked')
# grid = sns.FacetGrid(df, row='Embarked', size=2.2, aspect=1.6)
# grid.map(sns.pointplot, 'Pclass', 'Survived', 'Sex', palette='deep')
# grid.add_legend()
"""
Explanation: Correlating categorical features
Now we can correlate categorical features with our solution goal.
Observations.
Decisions.
End of explanation
"""
# grid = sns.FacetGrid(df, col='Embarked', hue='Survived', palette={0: 'k', 1: 'w'})
# grid = sns.FacetGrid(df, row='Embarked', col='Survived', size=2.2, aspect=1.6)
# grid.map(sns.barplot, 'Sex', 'Fare', alpha=.5, ci=None)
# grid.add_legend()
"""
Explanation: Correlating categorical and numerical features
Observations.
Decisions.
End of explanation
"""
print("Before", df.shape)
df = df.drop(['genres', 'movie_title', 'plot_keywords', 'movie_imdb_link'], axis=1)
"After", df.shape
"""
Explanation: Wrangle data
We have collected several assumptions and decisions regarding our datasets and solution requirements. So far we did not have to change a single feature or value to arrive at these. Let us now execute our decisions and assumptions for correcting, creating, and completing goals.
Correcting by dropping features
This is a good starting goal to execute. By dropping features we are dealing with fewer data points. Speeds up our notebook and eases the analysis.
Based on our assumptions and decisions we want to drop the genres, movie_title, plot_keywords and movie_imdb_link features here; the director and actor name features are kept for now so they can be engineered into film counts, and are dropped later.
Note that since we deferred the train/test split, these operations are applied to the full dataset, which keeps the eventual training and testing partitions consistent.
End of explanation
"""
actors = {}
directors = {}
for index, row in df.iterrows():
for actor in row[['actor_1_name', 'actor_2_name', 'actor_3_name']]:
if actor is not np.nan:
if actor not in actors:
actors[actor] = 0
actors[actor] += 1
director = row['director_name']
if director is not np.nan:
if director not in directors:
directors[director] = 0
directors[director] += 1
df['num_of_films_director'] = df["director_name"].dropna().map(directors).astype(int)
df['num_of_films_director'] = df['num_of_films_director'].fillna(1)
df['NumFilmsBand'] = pd.cut(df['num_of_films_director'], 4)
df[['NumFilmsBand', 'imdb_score']].groupby(['NumFilmsBand'], as_index=False).mean().sort_values(by='NumFilmsBand', ascending=True)
df.loc[ df['num_of_films_director'] <= 7, 'num_of_films_director'] = 0
df.loc[(df['num_of_films_director'] > 7) & (df['num_of_films_director'] <= 13), 'num_of_films_director'] = 1
df.loc[(df['num_of_films_director'] > 13) & (df['num_of_films_director'] <= 19), 'num_of_films_director'] = 2
df.loc[ df['num_of_films_director'] > 19, 'num_of_films_director'] = 3
df.head()
"""
Explanation: Creating new feature extracting from existing
We want to analyze whether the Director and Actor name features can be engineered into counts of films directed or starred in, and test the correlation between those counts and the score, before dropping the name features.
In the following code we extract the num_of_films_director and num_of_films_actor features by iterating over the dataframe.
If the director name field is empty, we fill the field with 1.
Observations.
When we plot the number of films directed, and number of films acted in, we note the following observations.
Directors with more films under their belt tend to have a higher success rate. It seems practice does make perfect.
Films where the actors have a higher number of combined films have a higher success rate. A more experienced cast, a better movie.
Decision.
We decide to band the directors into groups by number of films directed.
We decide to band the "total combined films acted in" into groups.
End of explanation
"""
df = df.drop(['director_name', 'NumFilmsBand'], axis=1)
"""
Explanation: We can now remove the director_name and NumFilmsBand features.
End of explanation
"""
df["actor_1_name"].dropna().map(actors).describe()
df['num_of_films_actor_1'] = df["actor_1_name"].dropna().map(actors).astype(int)
df['num_of_films_actor_1'] = df['num_of_films_actor_1'].fillna(8)
df["actor_2_name"].dropna().map(actors).describe()
df['num_of_films_actor_2'] = df["actor_2_name"].dropna().map(actors).astype(int)
df['num_of_films_actor_2'] = df['num_of_films_actor_2'].fillna(4)
df["actor_3_name"].dropna().map(actors).describe()
df['num_of_films_actor_3'] = df["actor_3_name"].dropna().map(actors).astype(int)
df['num_of_films_actor_3'] = df['num_of_films_actor_3'].fillna(2)
df['actor_sum'] = df["num_of_films_actor_1"] + df["num_of_films_actor_2"] + df["num_of_films_actor_3"]
df['ActorSumBand'] = pd.cut(df['actor_sum'], 5)
df[['ActorSumBand', 'imdb_score']].groupby(['ActorSumBand'], as_index=False).mean().sort_values(by='ActorSumBand', ascending=True)
df.loc[ df['actor_sum'] <= 24, 'actor_sum'] = 0
df.loc[(df['actor_sum'] > 24) & (df['actor_sum'] <= 46), 'actor_sum'] = 1
df.loc[(df['actor_sum'] > 46) & (df['actor_sum'] <= 67), 'actor_sum'] = 2
df.loc[(df['actor_sum'] > 67) & (df['actor_sum'] <= 89), 'actor_sum'] = 3
df.loc[ df['actor_sum'] > 89, 'actor_sum'] = 4
df.head()
"""
Explanation: Now, let's examine actors by number of films acted in. Since we have three actors listed per film, we'll need to combine these numbers. Let's fill any empty fields with the median value for that field, and sum the columns.
End of explanation
"""
df = df.drop(['actor_1_name', 'num_of_films_actor_1', 'actor_2_name', 'num_of_films_actor_2', 'actor_3_name', 'num_of_films_actor_3', 'ActorSumBand'], axis=1)
df.head()
"""
Explanation: Now we can remove actor_1_name, actor_2_name, actor_3_name and ActorSumBand
End of explanation
"""
df['color'] = df['color'].fillna("Color")
df['color'] = df['color'].map( {' Black and White': 1, 'Color': 0} ).astype(int)
df.head()
"""
Explanation: Converting a categorical feature
Now we can convert features which contain strings to numerical values. This is required by most model algorithms. Doing so will also help us in achieving the feature completing goal.
Let us start by converting Color feature to a new feature called Color where black and white=1 and color=0.
Since some values are null, let's fill them with the most common value, Color
End of explanation
"""
df['language'].value_counts()
"""
Explanation: Next, let's look at the language and country features
End of explanation
"""
df['language'] = df['language'].fillna("English")
df['language'] = df['language'].map(lambda l: 0 if l == 'English' else 1)
df.head()
"""
Explanation: The bulk of the films are in English. Let's convert this field to 1 for Non-English, and 0 for English
First, let's fill any null values with English
End of explanation
"""
df['country'].value_counts()
"""
Explanation: Next, let's explore country
End of explanation
"""
df['country'] = df['country'].fillna("USA")
df['country'] = df['country'].map(lambda c: 0 if c == 'USA' else 1)
df.head()
"""
Explanation: Again, most films are from USA. Taking the same approach, we'll fill NaNs with USA, and transfrom USA to 0, all others to 1
End of explanation
"""
df['content_rating'].value_counts()
"""
Explanation: Next up is content rating. Let's look at that
End of explanation
"""
df['content_rating'] = df['content_rating'].map({'G':0, 'PG':1, 'PG-13': 2, 'R': 3}).fillna(4).astype(int)
df.head()
"""
Explanation: The majority of the films use the standard MPAA ratings: G, PG, PG-13, and R
Let's group the rest of the films (and null values) into the 'Not Rated' category, and then transform them to integers
End of explanation
"""
df['aspect_ratio'].value_counts()
"""
Explanation: Aspect ratio may seem like a numerical feature, but it's somewhat of a categorical one. First, what values do we find in the dataset?
End of explanation
"""
df['aspect_ratio'] = df['aspect_ratio'].fillna(2.35)
df['aspect_ratio'] = df['aspect_ratio'].map(lambda ar: 1.33 if ar == 4.00 else ar)
df['aspect_ratio'] = df['aspect_ratio'].map(lambda ar: 1.78 if ar == 16.00 else ar)
df[['aspect_ratio', 'imdb_score']].groupby(pd.cut(df['aspect_ratio'], 4)).mean()
"""
Explanation: Some of these values seem to be in the wrong format, 16.00 is most likely 16:9 (1.78) and 4.00 is more likely 4:3 (1.33). Let's fix those.
End of explanation
"""
df.loc[ df['aspect_ratio'] <= 1.575, 'aspect_ratio'] = 0
df.loc[(df['aspect_ratio'] > 1.575) & (df['aspect_ratio'] <= 1.97), 'aspect_ratio'] = 1
df.loc[(df['aspect_ratio'] > 1.97) & (df['aspect_ratio'] <= 2.365), 'aspect_ratio'] = 2
df.loc[ df['aspect_ratio'] > 2.365, 'aspect_ratio'] = 3
df.head()
"""
Explanation: The above banding looks good. It separates out the two predominant aspect ratios (2.35 and 1.85), and also has bands below and above these ratios. Let's use that.
End of explanation
"""
mean = df['duration'].mean()
std = df['duration'].std()
mean, std
df['duration'] = df['duration'].map(lambda v: truncnorm.rvs(-1, 1, loc=mean, scale=std) if pd.isnull(v) else v)
"""
Explanation: Completing a numerical continuous feature
Now we should start estimating and completing features with missing or null values. We will first do this for the Duration feature.
We can consider several methods to complete a numerical continuous feature.
The easiest way is to use the median value.
Another simple way is to generate random numbers between the mean and the standard deviation.
A more accurate way of guessing missing values is to use other correlated features and take the median value within groups defined by those features (a sketch of this approach appears below).
Combine methods 1 and 2: instead of guessing a duration from the overall median, use random numbers between the mean and standard deviation within groups of other correlated features.
We will use method 2.
End of explanation
"""
df[['duration', 'imdb_score']].groupby(pd.qcut(df['duration'], 5)).mean()
"""
Explanation: Let us create Duration bands and determine correlations with IMDb score.
End of explanation
"""
df.loc[ df['duration'] <= 91, 'duration'] = 0
df.loc[(df['duration'] > 91) & (df['duration'] <= 99), 'duration'] = 1
df.loc[(df['duration'] > 99) & (df['duration'] <= 108), 'duration'] = 2
df.loc[(df['duration'] > 108) & (df['duration'] <= 122), 'duration'] = 3
df.loc[ df['duration'] > 122, 'duration'] = 4
df.head()
"""
Explanation: Let us replace Duration with ordinals based on these bands.
End of explanation
"""
mean = df['num_critic_for_reviews'].mean()
std = df['num_critic_for_reviews'].std()
mean, std
df['num_critic_for_reviews'] = df['num_critic_for_reviews'].map(lambda v: truncnorm.rvs(-1, 1, loc=mean, scale=std) if pd.isnull(v) else v)
df[['num_critic_for_reviews', 'imdb_score']].groupby(pd.qcut(df['num_critic_for_reviews'], 5)).mean()
df.loc[ df['num_critic_for_reviews'] <= 40, 'num_critic_for_reviews'] = 0
df.loc[(df['num_critic_for_reviews'] > 40) & (df['num_critic_for_reviews'] <= 84), 'num_critic_for_reviews'] = 1
df.loc[(df['num_critic_for_reviews'] > 84) & (df['num_critic_for_reviews'] <= 140), 'num_critic_for_reviews'] = 2
df.loc[(df['num_critic_for_reviews'] > 140) & (df['num_critic_for_reviews'] <= 222), 'num_critic_for_reviews'] = 3
df.loc[ df['num_critic_for_reviews'] > 222, 'num_critic_for_reviews'] = 4
df.head()
"""
Explanation: Let's apply the same techniques to the following features:
num_critic_for_reviews
director_facebook_likes
actor_1_facebook_likes
actor_2_facebook_likes
actor_3_facebook_likes
gross
facenumber_in_poster
num_user_for_reviews
budget
title_year
num_voted_users
cast_total_facebook_likes
movie_facebook_likes
num_of_films_director
num_critic_for_reviews
End of explanation
"""
mean = df['director_facebook_likes'].mean()
std = df['director_facebook_likes'].std()
mean, std
"""
Explanation: director_facebook_likes
End of explanation
"""
df['director_facebook_likes'] = df['director_facebook_likes'].map(lambda v: mean if pd.isnull(v) else v)
df[['director_facebook_likes', 'imdb_score']].groupby(pd.qcut(df['director_facebook_likes'], 5)).mean()
df.loc[ df['director_facebook_likes'] <= 3, 'director_facebook_likes'] = 0
df.loc[(df['director_facebook_likes'] > 3) & (df['director_facebook_likes'] <= 27.8), 'director_facebook_likes'] = 1
df.loc[(df['director_facebook_likes'] > 27.8) & (df['director_facebook_likes'] <= 91), 'director_facebook_likes'] = 2
df.loc[(df['director_facebook_likes'] > 91) & (df['director_facebook_likes'] <= 309), 'director_facebook_likes'] = 3
df.loc[ df['director_facebook_likes'] > 309, 'director_facebook_likes'] = 4
df.head()
"""
Explanation: Since the standard deviation for this field is ~4x the mean, we'll just stick to using the mean value for nulls
End of explanation
"""
mean = df['actor_1_facebook_likes'].mean()
std = df['actor_1_facebook_likes'].std()
mean, std
df['actor_1_facebook_likes'] = df['actor_1_facebook_likes'].map(lambda v: mean if pd.isnull(v) else v)
df['actor_1_facebook_likes'].describe()
df[['actor_1_facebook_likes', 'imdb_score']].groupby(pd.qcut(df['actor_1_facebook_likes'], 5)).mean()
df.loc[ df['actor_1_facebook_likes'] <= 523, 'actor_1_facebook_likes'] = 0
df.loc[(df['actor_1_facebook_likes'] > 523) & (df['actor_1_facebook_likes'] <= 865), 'actor_1_facebook_likes'] = 1
df.loc[(df['actor_1_facebook_likes'] > 865) & (df['actor_1_facebook_likes'] <= 2000), 'actor_1_facebook_likes'] = 2
df.loc[(df['actor_1_facebook_likes'] > 2000) & (df['actor_1_facebook_likes'] <= 13000), 'actor_1_facebook_likes'] = 3
df.loc[ df['actor_1_facebook_likes'] > 13000, 'actor_1_facebook_likes'] = 4
df.head()
"""
Explanation: actor_1_facebook_likes
End of explanation
"""
mean = df['actor_2_facebook_likes'].mean()
std = df['actor_2_facebook_likes'].std()
mean, std
df['actor_2_facebook_likes'] = df['actor_2_facebook_likes'].map(lambda v: mean if pd.isnull(v) else v)
df[['actor_2_facebook_likes', 'imdb_score']].groupby(pd.qcut(df['actor_2_facebook_likes'], 5)).mean()
df.loc[ df['actor_2_facebook_likes'] <= 218, 'actor_2_facebook_likes'] = 0
df.loc[(df['actor_2_facebook_likes'] > 218) & (df['actor_2_facebook_likes'] <= 486), 'actor_2_facebook_likes'] = 1
df.loc[(df['actor_2_facebook_likes'] > 486) & (df['actor_2_facebook_likes'] <= 726.2), 'actor_2_facebook_likes'] = 2
df.loc[(df['actor_2_facebook_likes'] > 726.2) & (df['actor_2_facebook_likes'] <= 979), 'actor_2_facebook_likes'] = 3
df.loc[ df['actor_2_facebook_likes'] > 979, 'actor_2_facebook_likes'] = 4
df.head()
"""
Explanation: actor_2_facebook_likes
End of explanation
"""
mean = df['actor_3_facebook_likes'].mean()
std = df['actor_3_facebook_likes'].std()
mean, std
df['actor_3_facebook_likes'] = df['actor_3_facebook_likes'].map(lambda v: mean if pd.isnull(v) else v)
df['actor_3_facebook_likes'].describe()
df[['actor_3_facebook_likes', 'imdb_score']].groupby(pd.qcut(df['actor_3_facebook_likes'], 5)).mean()
df.loc[ df['actor_3_facebook_likes'] <= 97, 'actor_3_facebook_likes'] = 0
df.loc[(df['actor_3_facebook_likes'] > 97) & (df['actor_3_facebook_likes'] <= 265), 'actor_3_facebook_likes'] = 1
df.loc[(df['actor_3_facebook_likes'] > 265) & (df['actor_3_facebook_likes'] <= 472), 'actor_3_facebook_likes'] = 2
df.loc[(df['actor_3_facebook_likes'] > 472) & (df['actor_3_facebook_likes'] <= 700), 'actor_3_facebook_likes'] = 3
df.loc[ df['actor_3_facebook_likes'] > 700, 'actor_3_facebook_likes'] = 4
df.head()
"""
Explanation: actor_3_facebook_likes
End of explanation
"""
mean = df['gross'].mean()
std = df['gross'].std()
mean, std
df['gross'] = df['gross'].map(lambda v: mean if pd.isnull(v) else v)
df['gross'].describe()
df[['gross', 'imdb_score']].groupby(pd.qcut(df['gross'], 5)).mean()
df.loc[ df['gross'] <= 4909758.4, 'gross'] = 0
df.loc[(df['gross'] > 4909758.4) & (df['gross'] <= 24092475.2), 'gross'] = 1
df.loc[(df['gross'] > 24092475.2) & (df['gross'] <= 48468407.527), 'gross'] = 2
df.loc[(df['gross'] > 48468407.527) & (df['gross'] <= 64212162.4), 'gross'] = 3
df.loc[ df['gross'] > 64212162.4, 'gross'] = 4
df.head()
"""
Explanation: gross
End of explanation
"""
mean = df['facenumber_in_poster'].mean()
std = df['facenumber_in_poster'].std()
mean, std
df['facenumber_in_poster'].value_counts()
df['facenumber_in_poster'].median()
df['facenumber_in_poster'] = df['facenumber_in_poster'].map(lambda v: 1 if pd.isnull(v) else v)
df['facenumber_in_poster'].describe()
df[['facenumber_in_poster', 'imdb_score']].groupby(pd.cut(df['facenumber_in_poster'], [-1,0,1,2,100])).mean()
df.loc[ df['facenumber_in_poster'] <= 0, 'facenumber_in_poster'] = 0
df.loc[(df['facenumber_in_poster'] > 0) & (df['facenumber_in_poster'] <= 1), 'facenumber_in_poster'] = 1
df.loc[(df['facenumber_in_poster'] > 1) & (df['facenumber_in_poster'] <= 2), 'facenumber_in_poster'] = 2
df.loc[ df['facenumber_in_poster'] > 2, 'facenumber_in_poster'] = 3
df.head()
"""
Explanation: facenumber_in_poster
End of explanation
"""
mean = df['num_user_for_reviews'].mean()
std = df['num_user_for_reviews'].std()
mean, std
df['num_user_for_reviews'] = df['num_user_for_reviews'].map(lambda v: mean if pd.isnull(v) else v)
df['num_user_for_reviews'].describe()
df[['num_user_for_reviews', 'imdb_score']].groupby(pd.qcut(df['num_user_for_reviews'], 5)).mean()
df.loc[ df['num_user_for_reviews'] <= 48, 'num_user_for_reviews'] = 0
df.loc[(df['num_user_for_reviews'] > 48) & (df['num_user_for_reviews'] <= 116), 'num_user_for_reviews'] = 1
df.loc[(df['num_user_for_reviews'] > 116) & (df['num_user_for_reviews'] <= 210), 'num_user_for_reviews'] = 2
df.loc[(df['num_user_for_reviews'] > 210) & (df['num_user_for_reviews'] <= 389), 'num_user_for_reviews'] = 3
df.loc[ df['num_user_for_reviews'] > 389, 'num_user_for_reviews'] = 4
df.head()
"""
Explanation: num_user_for_reviews
End of explanation
"""
mean = df['budget'].mean()
std = df['budget'].std()
mean, std
df['budget'] = df['budget'].map(lambda v: mean if pd.isnull(v) else v)
df['budget'].describe()
df[['budget', 'imdb_score']].groupby(pd.qcut(df['budget'], 3)).mean()
df.loc[ df['budget'] <= 12000000, 'budget'] = 0
df.loc[(df['budget'] > 12000000) & (df['budget'] <= 39752620.436), 'budget'] = 1
df.loc[ df['budget'] > 39752620.436, 'budget'] = 2
df.head()
"""
Explanation: budget
End of explanation
"""
mean = df['title_year'].mean()
std = df['title_year'].std()
mean, std
df['title_year'] = df['title_year'].map(lambda v: truncnorm.rvs(-1, 1, loc=mean, scale=std) if pd.isnull(v) else v)
df[['title_year', 'imdb_score']].groupby(pd.cut(df['title_year'], 5)).mean()
df.loc[ df['title_year'] <= 1936, 'title_year'] = 0
df.loc[(df['title_year'] > 1936) & (df['title_year'] <= 1956), 'title_year'] = 1
df.loc[(df['title_year'] > 1956) & (df['title_year'] <= 1976), 'title_year'] = 2
df.loc[(df['title_year'] > 1976) & (df['title_year'] <= 1996), 'title_year'] = 3
df.loc[ df['title_year'] > 1996, 'title_year'] = 4
df.head()
"""
Explanation: title_year
End of explanation
"""
mean = df['num_voted_users'].mean()
std = df['num_voted_users'].std()
mean, std
df['num_voted_users'] = df['num_voted_users'].map(lambda v: mean if pd.isnull(v) else v)
df['num_voted_users'].describe()
df[['num_voted_users', 'imdb_score']].groupby(pd.qcut(df['num_voted_users'], 5)).mean()
df.loc[ df['num_voted_users'] <= 5623.8, 'num_voted_users'] = 0
df.loc[(df['num_voted_users'] > 5623.8) & (df['num_voted_users'] <= 21478.4), 'num_voted_users'] = 1
df.loc[(df['num_voted_users'] > 21478.4) & (df['num_voted_users'] <= 53178.2), 'num_voted_users'] = 2
df.loc[(df['num_voted_users'] > 53178.2) & (df['num_voted_users'] <= 1.24e+05), 'num_voted_users'] = 3
df.loc[ df['num_voted_users'] > 1.24e+05, 'num_voted_users'] = 4
df.head()
"""
Explanation: num_voted_users
End of explanation
"""
mean = df['cast_total_facebook_likes'].mean()
std = df['cast_total_facebook_likes'].std()
mean, std
df['cast_total_facebook_likes'] = df['cast_total_facebook_likes'].map(lambda v: mean if pd.isnull(v) else v)
df['cast_total_facebook_likes'].describe()
df[['cast_total_facebook_likes', 'imdb_score']].groupby(pd.qcut(df['cast_total_facebook_likes'], 5)).mean()
df.loc[ df['cast_total_facebook_likes'] <= 1136, 'cast_total_facebook_likes'] = 0
df.loc[(df['cast_total_facebook_likes'] > 1136) & (df['cast_total_facebook_likes'] <= 2366.6), 'cast_total_facebook_likes'] = 1
df.loc[(df['cast_total_facebook_likes'] > 2366.6) & (df['cast_total_facebook_likes'] <= 4369.2), 'cast_total_facebook_likes'] = 2
df.loc[(df['cast_total_facebook_likes'] > 4369.2) & (df['cast_total_facebook_likes'] <= 16285.8), 'cast_total_facebook_likes'] = 3
df.loc[ df['cast_total_facebook_likes'] > 16285.8, 'cast_total_facebook_likes'] = 4
df.head()
"""
Explanation: cast_total_facebook_likes
End of explanation
"""
mean = df['movie_facebook_likes'].mean()
std = df['movie_facebook_likes'].std()
mean, std
df['movie_facebook_likes'] = df['movie_facebook_likes'].map(lambda v: mean if pd.isnull(v) else v)
df['movie_facebook_likes'].describe()
df[df['movie_facebook_likes'] > 0][['movie_facebook_likes', 'imdb_score']].groupby(pd.qcut(df[df['movie_facebook_likes'] > 0]['movie_facebook_likes'], 4)).mean()
df.loc[ df['movie_facebook_likes'] <= 0, 'movie_facebook_likes'] = 0
df.loc[(df['movie_facebook_likes'] > 0) & (df['movie_facebook_likes'] <= 401), 'movie_facebook_likes'] = 1
df.loc[(df['movie_facebook_likes'] > 401) & (df['movie_facebook_likes'] <= 1000), 'movie_facebook_likes'] = 2
df.loc[(df['movie_facebook_likes'] > 1000) & (df['movie_facebook_likes'] <= 17000), 'movie_facebook_likes'] = 3
df.loc[ df['movie_facebook_likes'] > 17000, 'movie_facebook_likes'] = 4
df.head()
"""
Explanation: movie_facebook_likes
End of explanation
"""
mean = df['num_of_films_director'].mean()
std = df['num_of_films_director'].std()
mean, std
df['num_of_films_director'].value_counts()
df['num_of_films_director'] = df['num_of_films_director'].map(lambda v: 1 if pd.isnull(v) else v)
df[['num_of_films_director', 'imdb_score']].groupby(pd.cut(df['num_of_films_director'], 3)).mean()
df.loc[ df['num_of_films_director'] <= 1, 'num_of_films_director'] = 0
df.loc[(df['num_of_films_director'] > 1) & (df['num_of_films_director'] <= 2), 'num_of_films_director'] = 1
df.loc[ df['num_of_films_director'] > 2, 'num_of_films_director'] = 2
df.head()
incomplete = df.columns[pd.isnull(df).any()].tolist()
df[incomplete].info()
"""
Explanation: num_of_films_director
End of explanation
"""
train_df, test_df = train_test_split(df, test_size = 0.2)
"""
Explanation: Create new feature combining existing features
Completing a categorical feature
Converting categorical feature to numeric
Quick completing and converting a numeric feature
Partition Data
Now, we randomly partition our dataset into two DataFrames. 80% of the data will be our training set, the rest will become our test set
End of explanation
"""
X_train = train_df.drop("imdb_score", axis=1)
Y_train = train_df["imdb_score"]
X_test = test_df.drop("imdb_score", axis=1)
Y_test = test_df["imdb_score"]
X_train.shape, Y_train.shape, X_test.shape, Y_test.shape
"""
Explanation: Model, predict and solve
Now we are ready to train a model and predict the required solution. There are 60+ predictive modelling algorithms to choose from. We must understand the type of problem and solution requirement to narrow down to a select few models which we can evaluate. Our problem is a classification and regression problem. We want to identify the relationship between the output (a good or bad IMDb score) and the other features (duration, budget, country, and so on). We are also performing a category of machine learning called supervised learning, as we are training our model with a labeled dataset. With these two criteria - supervised learning plus classification and regression - we can narrow down our choice of models to a few. These include:
Logistic Regression
KNN or k-Nearest Neighbors
Support Vector Machines
Naive Bayes classifier
Decision Tree
Random Forrest
Perceptron
Artificial neural network
RVM or Relevance Vector Machine
End of explanation
"""
# Logistic Regression
logreg = LogisticRegression()
logreg.fit(X_train, Y_train)
Y_pred = logreg.predict(X_test)
acc_log = round(logreg.score(X_train, Y_train) * 100, 2)
acc_log
"""
Explanation: Logistic Regression is a useful model to run early in the workflow. Logistic regression measures the relationship between the categorical dependent variable (feature) and one or more independent variables (features) by estimating probabilities using a logistic function, which is the cumulative logistic distribution. Reference Wikipedia.
Note the confidence score generated by the model based on our training dataset.
End of explanation
"""
for i, value in enumerate(train_df.columns):
    print(i, value)
coeff_df = pd.DataFrame(train_df.columns.delete(17))
coeff_df.columns = ['Feature']
coeff_df["Correlation"] = pd.Series(logreg.coef_[0])
coeff_df.sort_values(by='Correlation', ascending=False)
"""
Explanation: We can use Logistic Regression to validate our assumptions and decisions for feature creating and completing goals. This can be done by calculating the coefficient of the features in the decision function.
Positive coefficients increase the log-odds of the response (and thus increase the probability), and negative coefficients decrease the log-odds of the response (and thus decrease the probability).
Country has the highest positive coefficient, implying that as the Country value increases (USA: 0 to Foreign: 1), the probability of IMDb score = 1 increases the most.
Inversely, as Aspect Ratio increases, the probability of IMDb score = 1 decreases the most.
Director Number of Films is a good artificial feature to model, as it has a positive correlation of about 0.2 with IMDb score.
Color has the second highest positive correlation.
End of explanation
"""
# Support Vector Machines
svc = SVC()
svc.fit(X_train, Y_train)
Y_pred = svc.predict(X_test)
acc_svc = round(svc.score(X_train, Y_train) * 100, 2)
acc_svc
"""
Explanation: Next we model using Support Vector Machines which are supervised learning models with associated learning algorithms that analyze data used for classification and regression analysis. Given a set of training samples, each marked as belonging to one or the other of two categories, an SVM training algorithm builds a model that assigns new test samples to one category or the other, making it a non-probabilistic binary linear classifier. Reference Wikipedia.
Note that the model generates a confidence score which is higher than the Logistic Regression model's.
End of explanation
"""
knn = KNeighborsClassifier(n_neighbors = 3)
knn.fit(X_train, Y_train)
Y_pred = knn.predict(X_test)
acc_knn = round(knn.score(X_train, Y_train) * 100, 2)
acc_knn
"""
Explanation: In pattern recognition, the k-Nearest Neighbors algorithm (or k-NN for short) is a non-parametric method used for classification and regression. A sample is classified by a majority vote of its neighbors, with the sample being assigned to the class most common among its k nearest neighbors (k is a positive integer, typically small). If k = 1, then the object is simply assigned to the class of that single nearest neighbor. Reference Wikipedia.
The KNN confidence score is better than those of Logistic Regression and SVM.
End of explanation
"""
# Gaussian Naive Bayes
gaussian = GaussianNB()
gaussian.fit(X_train, Y_train)
Y_pred = gaussian.predict(X_test)
acc_gaussian = round(gaussian.score(X_train, Y_train) * 100, 2)
acc_gaussian
"""
Explanation: In machine learning, naive Bayes classifiers are a family of simple probabilistic classifiers based on applying Bayes' theorem with strong (naive) independence assumptions between the features. Naive Bayes classifiers are highly scalable, requiring a number of parameters linear in the number of variables (features) in a learning problem. Reference Wikipedia.
The model generated confidence score is the lowest among the models evaluated so far.
End of explanation
"""
# Perceptron
perceptron = Perceptron()
perceptron.fit(X_train, Y_train)
Y_pred = perceptron.predict(X_test)
acc_perceptron = round(perceptron.score(X_train, Y_train) * 100, 2)
acc_perceptron
# Linear SVC
linear_svc = LinearSVC()
linear_svc.fit(X_train, Y_train)
Y_pred = linear_svc.predict(X_test)
acc_linear_svc = round(linear_svc.score(X_train, Y_train) * 100, 2)
acc_linear_svc
# Stochastic Gradient Descent
sgd = SGDClassifier()
sgd.fit(X_train, Y_train)
Y_pred = sgd.predict(X_test)
acc_sgd = round(sgd.score(X_train, Y_train) * 100, 2)
acc_sgd
"""
Explanation: The perceptron is an algorithm for supervised learning of binary classifiers (functions that can decide whether an input, represented by a vector of numbers, belongs to some specific class or not). It is a type of linear classifier, i.e. a classification algorithm that makes its predictions based on a linear predictor function combining a set of weights with the feature vector. The algorithm allows for online learning, in that it processes elements in the training set one at a time. Reference Wikipedia.
End of explanation
"""
# Decision Tree
decision_tree = DecisionTreeClassifier()
decision_tree.fit(X_train, Y_train)
Y_pred = decision_tree.predict(X_test)
acc_decision_tree = round(decision_tree.score(X_train, Y_train) * 100, 2)
acc_decision_tree
"""
Explanation: This model uses a decision tree as a predictive model which maps features (tree branches) to conclusions about the target value (tree leaves). Tree models where the target variable can take a finite set of values are called classification trees; in these tree structures, leaves represent class labels and branches represent conjunctions of features that lead to those class labels. Decision trees where the target variable can take continuous values (typically real numbers) are called regression trees. Reference Wikipedia.
The model confidence score is the highest among models evaluated so far.
End of explanation
"""
# Random Forest
random_forest = RandomForestClassifier(n_estimators=100)
random_forest.fit(X_train, Y_train)
Y_pred = random_forest.predict(X_test)
random_forest.score(X_train, Y_train)
acc_random_forest = round(random_forest.score(X_train, Y_train) * 100, 2)
acc_random_forest
"""
Explanation: The next model Random Forests is one of the most popular. Random forests or random decision forests are an ensemble learning method for classification, regression and other tasks, that operate by constructing a multitude of decision trees (n_estimators=100) at training time and outputting the class that is the mode of the classes (classification) or mean prediction (regression) of the individual trees. Reference Wikipedia.
The model confidence score is the highest among models evaluated so far. We decide to use this model's output (Y_pred) for the final accuracy check against the held-out test set.
End of explanation
"""
models = pd.DataFrame({
'Model': ['Support Vector Machines', 'KNN', 'Logistic Regression',
'Random Forest', 'Naive Bayes', 'Perceptron',
'Stochastic Gradient Decent', 'Linear SVC',
'Decision Tree'],
'Score': [acc_svc, acc_knn, acc_log,
acc_random_forest, acc_gaussian, acc_perceptron,
acc_sgd, acc_linear_svc, acc_decision_tree]})
models.sort_values(by='Score', ascending=False)
print(accuracy_score(Y_test, Y_pred, normalize=False), '/', len(Y_test))
print(accuracy_score(Y_test, Y_pred))
"""
Explanation: Model evaluation
We can now rank our evaluation of all the models to choose the best one for our problem. While both Decision Tree and Random Forest score the same, we choose to use Random Forest as they correct for decision trees' habit of overfitting to their training set.
End of explanation
"""
|
GoogleCloudPlatform/vertex-ai-samples
|
notebooks/community/gapic/automl/showcase_automl_tabular_classification_online_explain.ipynb
|
apache-2.0
|
import os
import sys
# Google Cloud Notebook
if os.path.exists("/opt/deeplearning/metadata/env_version"):
USER_FLAG = "--user"
else:
USER_FLAG = ""
! pip3 install -U google-cloud-aiplatform $USER_FLAG
"""
Explanation: Vertex client library: AutoML tabular classification model for online prediction with explanation
<table align="left">
<td>
<a href="https://colab.research.google.com/github/GoogleCloudPlatform/vertex-ai-samples/blob/master/notebooks/community/gapic/automl/showcase_automl_tabular_classification_online_explain.ipynb">
<img src="https://cloud.google.com/ml-engine/images/colab-logo-32px.png" alt="Colab logo"> Run in Colab
</a>
</td>
<td>
<a href="https://github.com/GoogleCloudPlatform/vertex-ai-samples/blob/master/notebooks/community/gapic/automl/showcase_automl_tabular_classification_online_explain.ipynb">
<img src="https://cloud.google.com/ml-engine/images/github-logo-32px.png" alt="GitHub logo">
View on GitHub
</a>
</td>
</table>
<br/><br/><br/>
Overview
This tutorial demonstrates how to use the Vertex client library for Python to create tabular classification models and do online prediction with explanation using Google Cloud's AutoML.
Dataset
The dataset used for this tutorial is the Iris dataset from TensorFlow Datasets. This dataset does not require any feature engineering. The version of the dataset you will use in this tutorial is stored in a public Cloud Storage bucket. The trained model predicts the type of Iris flower species from a class of three species: setosa, virginica, or versicolor.
Objective
In this tutorial, you create an AutoML tabular classification model and deploy for online prediction with explainability from a Python script using the Vertex client library. You can alternatively create and deploy models using the gcloud command-line tool or online using the Google Cloud Console.
The steps performed include:
Create a Vertex Dataset resource.
Train the model.
View the model evaluation.
Deploy the Model resource to a serving Endpoint resource.
Make a prediction with explainability.
Undeploy the Model.
Costs
This tutorial uses billable components of Google Cloud (GCP):
Vertex AI
Cloud Storage
Learn about Vertex AI
pricing and Cloud Storage
pricing, and use the Pricing
Calculator
to generate a cost estimate based on your projected usage.
Installation
Install the latest version of Vertex client library.
End of explanation
"""
! pip3 install -U google-cloud-storage $USER_FLAG
"""
Explanation: Install the latest GA version of google-cloud-storage library as well.
End of explanation
"""
if not os.getenv("IS_TESTING"):
# Automatically restart kernel after installs
import IPython
app = IPython.Application.instance()
app.kernel.do_shutdown(True)
"""
Explanation: Restart the kernel
Once you've installed the Vertex client library and Google cloud-storage, you need to restart the notebook kernel so it can find the packages.
End of explanation
"""
PROJECT_ID = "[your-project-id]" # @param {type:"string"}
if PROJECT_ID == "" or PROJECT_ID is None or PROJECT_ID == "[your-project-id]":
# Get your GCP project id from gcloud
shell_output = !gcloud config list --format 'value(core.project)' 2>/dev/null
PROJECT_ID = shell_output[0]
print("Project ID:", PROJECT_ID)
! gcloud config set project $PROJECT_ID
"""
Explanation: Before you begin
GPU runtime
Make sure you're running this notebook in a GPU runtime if you have that option. In Colab, select Runtime > Change Runtime Type > GPU
Set up your Google Cloud project
The following steps are required, regardless of your notebook environment.
Select or create a Google Cloud project. When you first create an account, you get a $300 free credit towards your compute/storage costs.
Make sure that billing is enabled for your project.
Enable the Vertex APIs and Compute Engine APIs.
The Google Cloud SDK is already installed in Google Cloud Notebook.
Enter your project ID in the cell below. Then run the cell to make sure the
Cloud SDK uses the right project for all the commands in this notebook.
Note: Jupyter runs lines prefixed with ! as shell commands, and it interpolates Python variables prefixed with $ into these commands.
End of explanation
"""
REGION = "us-central1" # @param {type: "string"}
"""
Explanation: Region
You can also change the REGION variable, which is used for operations
throughout the rest of this notebook. Below are regions supported for Vertex. We recommend that you choose the region closest to you.
Americas: us-central1
Europe: europe-west4
Asia Pacific: asia-east1
You may not use a multi-regional bucket for training with Vertex. Not all regions provide support for all Vertex services. For the latest support per region, see the Vertex locations documentation
End of explanation
"""
from datetime import datetime
TIMESTAMP = datetime.now().strftime("%Y%m%d%H%M%S")
"""
Explanation: Timestamp
If you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append onto the name of resources which will be created in this tutorial.
End of explanation
"""
# If you are running this notebook in Colab, run this cell and follow the
# instructions to authenticate your GCP account. This provides access to your
# Cloud Storage bucket and lets you submit training jobs and prediction
# requests.
# If on Google Cloud Notebook, then don't execute this code
if not os.path.exists("/opt/deeplearning/metadata/env_version"):
if "google.colab" in sys.modules:
from google.colab import auth as google_auth
google_auth.authenticate_user()
# If you are running this notebook locally, replace the string below with the
# path to your service account key and run this cell to authenticate your GCP
# account.
elif not os.getenv("IS_TESTING"):
%env GOOGLE_APPLICATION_CREDENTIALS ''
"""
Explanation: Authenticate your Google Cloud account
If you are using Google Cloud Notebook, your environment is already authenticated. Skip this step.
If you are using Colab, run the cell below and follow the instructions when prompted to authenticate your account via oAuth.
Otherwise, follow these steps:
In the Cloud Console, go to the Create service account key page.
Click Create service account.
In the Service account name field, enter a name, and click Create.
In the Grant this service account access to project section, click the Role drop-down list. Type "Vertex" into the filter box, and select Vertex Administrator. Type "Storage Object Admin" into the filter box, and select Storage Object Admin.
Click Create. A JSON file that contains your key downloads to your local environment.
Enter the path to your service account key as the GOOGLE_APPLICATION_CREDENTIALS variable in the cell below and run the cell.
End of explanation
"""
import time
import google.cloud.aiplatform_v1beta1 as aip
from google.protobuf import json_format
from google.protobuf.json_format import MessageToJson, ParseDict
from google.protobuf.struct_pb2 import Struct, Value
"""
Explanation: Set up variables
Next, set up some variables used throughout the tutorial.
Import libraries and define constants
Import Vertex client library
Import the Vertex client library into our Python environment.
End of explanation
"""
# API service endpoint
API_ENDPOINT = "{}-aiplatform.googleapis.com".format(REGION)
# Vertex location root path for your dataset, model and endpoint resources
PARENT = "projects/" + PROJECT_ID + "/locations/" + REGION
"""
Explanation: Vertex constants
Setup up the following constants for Vertex:
API_ENDPOINT: The Vertex API service endpoint for dataset, model, job, pipeline and endpoint services.
PARENT: The Vertex location root path for dataset, model, job, pipeline and endpoint resources.
End of explanation
"""
# Tabular Dataset type
DATA_SCHEMA = "gs://google-cloud-aiplatform/schema/dataset/metadata/tables_1.0.0.yaml"
# Tabular Labeling type
LABEL_SCHEMA = (
"gs://google-cloud-aiplatform/schema/dataset/ioformat/table_io_format_1.0.0.yaml"
)
# Tabular Training task
TRAINING_SCHEMA = "gs://google-cloud-aiplatform/schema/trainingjob/definition/automl_tables_1.0.0.yaml"
"""
Explanation: AutoML constants
Set constants unique to AutoML datasets and training:
Dataset Schemas: Tells the Dataset resource service which type of dataset it is.
Data Labeling (Annotations) Schemas: Tells the Dataset resource service how the data is labeled (annotated).
Dataset Training Schemas: Tells the Pipeline resource service the task (e.g., classification) to train the model for.
End of explanation
"""
if os.getenv("IS_TESTING_DEPOLY_GPU"):
DEPLOY_GPU, DEPLOY_NGPU = (
aip.AcceleratorType.NVIDIA_TESLA_K80,
int(os.getenv("IS_TESTING_DEPOLY_GPU")),
)
else:
DEPLOY_GPU, DEPLOY_NGPU = (aip.AcceleratorType.NVIDIA_TESLA_K80, 1)
"""
Explanation: Hardware Accelerators
Set the hardware accelerators (e.g., GPU), if any, for prediction.
Set the variable DEPLOY_GPU/DEPLOY_NGPU to use a container image supporting a GPU and the number of GPUs allocated to the virtual machine (VM) instance. For example, to use a GPU container image with 4 Nvidia Telsa K80 GPUs allocated to each VM, you would specify:
(aip.AcceleratorType.NVIDIA_TESLA_K80, 4)
For GPU, available accelerators include:
- aip.AcceleratorType.NVIDIA_TESLA_K80
- aip.AcceleratorType.NVIDIA_TESLA_P100
- aip.AcceleratorType.NVIDIA_TESLA_P4
- aip.AcceleratorType.NVIDIA_TESLA_T4
- aip.AcceleratorType.NVIDIA_TESLA_V100
Otherwise specify (None, None) to use a container image to run on a CPU.
End of explanation
"""
if os.getenv("IS_TESTING_DEPLOY_MACHINE"):
MACHINE_TYPE = os.getenv("IS_TESTING_DEPLOY_MACHINE")
else:
MACHINE_TYPE = "n1-standard"
VCPU = "4"
DEPLOY_COMPUTE = MACHINE_TYPE + "-" + VCPU
print("Deploy machine type", DEPLOY_COMPUTE)
"""
Explanation: Container (Docker) image
For AutoML prediction, the container image for the serving binary is pre-determined by the Vertex prediction service. More specifically, the service will pick the appropriate container for the model depending on the hardware accelerator you selected.
Machine Type
Next, set the machine type to use for prediction.
Set the variable DEPLOY_COMPUTE to configure the compute resources for the VM you will use for prediction.
machine type
n1-standard: 3.75GB of memory per vCPU.
n1-highmem: 6.5GB of memory per vCPU
n1-highcpu: 0.9 GB of memory per vCPU
vCPUs: number of [2, 4, 8, 16, 32, 64, 96 ]
Note: You may also use n2 and e2 machine types for training and deployment, but they do not support GPUs
End of explanation
"""
# client options same for all services
client_options = {"api_endpoint": API_ENDPOINT}
def create_dataset_client():
client = aip.DatasetServiceClient(client_options=client_options)
return client
def create_model_client():
client = aip.ModelServiceClient(client_options=client_options)
return client
def create_pipeline_client():
client = aip.PipelineServiceClient(client_options=client_options)
return client
def create_endpoint_client():
client = aip.EndpointServiceClient(client_options=client_options)
return client
def create_prediction_client():
client = aip.PredictionServiceClient(client_options=client_options)
return client
clients = {}
clients["dataset"] = create_dataset_client()
clients["model"] = create_model_client()
clients["pipeline"] = create_pipeline_client()
clients["endpoint"] = create_endpoint_client()
clients["prediction"] = create_prediction_client()
for client in clients.items():
print(client)
"""
Explanation: Tutorial
Now you are ready to start creating your own AutoML tabular classification model.
Set up clients
The Vertex client library works as a client/server model. On your side (the Python script) you will create a client that sends requests and receives responses from the Vertex server.
You will use different clients in this tutorial for different steps in the workflow. So set them all up upfront.
Dataset Service for Dataset resources.
Model Service for Model resources.
Pipeline Service for training.
Endpoint Service for deployment.
Prediction Service for serving.
End of explanation
"""
IMPORT_FILE = "gs://cloud-samples-data/tables/iris_1000.csv"
"""
Explanation: Dataset
Now that your clients are ready, your first step is to create a Dataset resource instance. This step differs from Vision, Video and Language. For those products, after the Dataset resource is created, one then separately imports the data, using the import_data method.
For tabular, importing of the data is deferred until the training pipeline starts training the model. What do we do differently? Well, first you won't be calling the import_data method. Instead, when you create the dataset instance you specify the Cloud Storage location of the CSV file or BigQuery location of the data table, which contains your tabular data, as part of the Dataset resource's metadata.
Cloud Storage
metadata = {"input_config": {"gcs_source": {"uri": [gcs_uri]}}}
The format for a Cloud Storage path is:
gs://[bucket_name]/[folder(s)]/[file]
BigQuery
metadata = {"input_config": {"bigquery_source": {"uri": [gcs_uri]}}}
The format for a BigQuery path is:
bq://[collection].[dataset].[table]
Note that the uri field is a list, whereby you can input multiple CSV files or BigQuery tables when your data is split across files.
Data preparation
The Vertex Dataset resource for tabular has a couple of requirements for your tabular data.
Must be in a CSV file or a BigQuery query.
CSV
For tabular classification, the CSV file has a few requirements:
The first row must be the heading -- note how this is different from Vision, Video and Language where the requirement is no heading.
All but one column are features.
One column is the label, which you will specify when you subsequently create the training pipeline.
Location of Cloud Storage training data.
Now set the variable IMPORT_FILE to the location of the CSV index file in Cloud Storage.
End of explanation
"""
count = ! gsutil cat $IMPORT_FILE | wc -l
print("Number of Examples", int(count[0]))
print("First 10 rows")
! gsutil cat $IMPORT_FILE | head
heading = ! gsutil cat $IMPORT_FILE | head -n1
label_column = str(heading).split(",")[-1].split("'")[0]
print("Label Column Name", label_column)
if label_column is None:
raise Exception("label column missing")
"""
Explanation: Quick peek at your data
You will use a version of the Iris dataset that is stored in a public Cloud Storage bucket, using a CSV index file.
Start by doing a quick peek at the data. You count the number of examples by counting the number of rows in the CSV index file (wc -l) and then peek at the first few rows.
For training, you also need to know the heading name of the label column, which is saved as label_column. For this dataset, it is the last column in the CSV file.
End of explanation
"""
TIMEOUT = 90
def create_dataset(name, schema, src_uri=None, labels=None, timeout=TIMEOUT):
start_time = time.time()
try:
if src_uri.startswith("gs://"):
metadata = {"input_config": {"gcs_source": {"uri": [src_uri]}}}
elif src_uri.startswith("bq://"):
metadata = {"input_config": {"bigquery_source": {"uri": [src_uri]}}}
dataset = aip.Dataset(
display_name=name,
metadata_schema_uri=schema,
labels=labels,
metadata=json_format.ParseDict(metadata, Value()),
)
operation = clients["dataset"].create_dataset(parent=PARENT, dataset=dataset)
print("Long running operation:", operation.operation.name)
result = operation.result(timeout=TIMEOUT)
print("time:", time.time() - start_time)
print("response")
print(" name:", result.name)
print(" display_name:", result.display_name)
print(" metadata_schema_uri:", result.metadata_schema_uri)
print(" metadata:", dict(result.metadata))
print(" create_time:", result.create_time)
print(" update_time:", result.update_time)
print(" etag:", result.etag)
print(" labels:", dict(result.labels))
return result
except Exception as e:
print("exception:", e)
return None
result = create_dataset("iris-" + TIMESTAMP, DATA_SCHEMA, src_uri=IMPORT_FILE)
"""
Explanation: Dataset
Now that your clients are ready, your first step in training a model is to create a managed dataset instance, and then upload your labeled data to it.
Create Dataset resource instance
Use the helper function create_dataset to create the instance of a Dataset resource. This function does the following:
Uses the dataset client service.
Creates an Vertex Dataset resource (aip.Dataset), with the following parameters:
display_name: The human-readable name you choose to give it.
metadata_schema_uri: The schema for the dataset type.
metadata: The Cloud Storage or BigQuery location of the tabular data.
Calls the client dataset service method create_dataset, with the following parameters:
parent: The Vertex location root path for your Database, Model and Endpoint resources.
dataset: The Vertex dataset object instance you created.
The method returns an operation object.
An operation object is how Vertex handles asynchronous calls for long running operations. While this step usually goes fast, when you first use it in your project, there is a longer delay due to provisioning.
You can use the operation object to get status on the operation (e.g., create Dataset resource) or to cancel the operation, by invoking an operation method:
| Method | Description |
| ----------- | ----------- |
| result() | Waits for the operation to complete and returns a result object in JSON format. |
| running() | Returns True/False on whether the operation is still running. |
| done() | Returns True/False on whether the operation is completed. |
| canceled() | Returns True/False on whether the operation was canceled. |
| cancel() | Cancels the operation (this may take up to 30 seconds). |
End of explanation
"""
# The full unique ID for the dataset
dataset_id = result.name
# The short numeric ID for the dataset
dataset_short_id = dataset_id.split("/")[-1]
print(dataset_id)
"""
Explanation: Now save the unique dataset identifier for the Dataset resource instance you created.
End of explanation
"""
def create_pipeline(pipeline_name, model_name, dataset, schema, task):
dataset_id = dataset.split("/")[-1]
input_config = {
"dataset_id": dataset_id,
"fraction_split": {
"training_fraction": 0.8,
"validation_fraction": 0.1,
"test_fraction": 0.1,
},
}
training_pipeline = {
"display_name": pipeline_name,
"training_task_definition": schema,
"training_task_inputs": task,
"input_data_config": input_config,
"model_to_upload": {"display_name": model_name},
}
try:
pipeline = clients["pipeline"].create_training_pipeline(
parent=PARENT, training_pipeline=training_pipeline
)
print(pipeline)
except Exception as e:
print("exception:", e)
return None
return pipeline
"""
Explanation: Train the model
Now train an AutoML tabular classification model using your Vertex Dataset resource. To train the model, do the following steps:
Create an Vertex training pipeline for the Dataset resource.
Execute the pipeline to start the training.
Create a training pipeline
You may ask, what do we use a pipeline for? You typically use pipelines when the job (such as training) has multiple steps, generally in sequential order: do step A, do step B, etc. By putting the steps into a pipeline, we gain the benefits of:
Being reusable for subsequent training jobs.
Can be containerized and ran as a batch job.
Can be distributed.
All the steps are associated with the same pipeline job for tracking progress.
Use this helper function create_pipeline, which takes the following parameters:
pipeline_name: A human readable name for the pipeline job.
model_name: A human readable name for the model.
dataset: The Vertex fully qualified dataset identifier.
schema: The dataset labeling (annotation) training schema.
task: A dictionary describing the requirements for the training job.
The helper function calls the Pipeline client service's method create_pipeline, which takes the following parameters:
parent: The Vertex location root path for your Dataset, Model and Endpoint resources.
training_pipeline: the full specification for the pipeline training job.
Let's look now deeper into the minimal requirements for constructing a training_pipeline specification:
display_name: A human readable name for the pipeline job.
training_task_definition: The dataset labeling (annotation) training schema.
training_task_inputs: A dictionary describing the requirements for the training job.
model_to_upload: A human readable name for the model.
input_data_config: The dataset specification.
dataset_id: The Vertex dataset identifier only (non-fully qualified) -- this is the last part of the fully-qualified identifier.
fraction_split: If specified, the percentages of the dataset to use for training, test and validation. Otherwise, the percentages are automatically selected by AutoML.
End of explanation
"""
TRANSFORMATIONS = [
{"auto": {"column_name": "sepal_width"}},
{"auto": {"column_name": "sepal_length"}},
{"auto": {"column_name": "petal_length"}},
{"auto": {"column_name": "petal_width"}},
]
PIPE_NAME = "iris_pipe-" + TIMESTAMP
MODEL_NAME = "iris_model-" + TIMESTAMP
task = Value(
struct_value=Struct(
fields={
"target_column": Value(string_value=label_column),
"prediction_type": Value(string_value="classification"),
"train_budget_milli_node_hours": Value(number_value=1000),
"disable_early_stopping": Value(bool_value=False),
"transformations": json_format.ParseDict(TRANSFORMATIONS, Value()),
}
)
)
response = create_pipeline(PIPE_NAME, MODEL_NAME, dataset_id, TRAINING_SCHEMA, task)
"""
Explanation: Construct the task requirements
Next, construct the task requirements. Unlike other parameters which take a Python (JSON-like) dictionary, the task field takes a Google protobuf Struct, which is very similar to a Python dictionary. Use the json_format.ParseDict method for the conversion.
The minimal fields you need to specify are:
prediction_type: Whether we are doing "classification" or "regression".
target_column: The CSV heading column name for the column we want to predict (i.e., the label).
train_budget_milli_node_hours: The maximum time to budget (billed) for training the model, where 1000 = 1 hour.
disable_early_stopping: Whether True/False to let AutoML use its judgement to stop training early or train for the entire budget.
transformations: Specifies the feature engineering for each feature column.
For transformations, the list must have an entry for each column. The outer key field indicates the type of feature engineering for the corresponding column. In this tutorial, you set it to "auto" to tell AutoML to automatically determine it.
Finally, create the pipeline by calling the helper function create_pipeline, which returns an instance of a training pipeline object.
End of explanation
"""
# The full unique ID for the pipeline
pipeline_id = response.name
# The short numeric ID for the pipeline
pipeline_short_id = pipeline_id.split("/")[-1]
print(pipeline_id)
"""
Explanation: Now save the unique identifier of the training pipeline you created.
End of explanation
"""
def get_training_pipeline(name, silent=False):
response = clients["pipeline"].get_training_pipeline(name=name)
if silent:
return response
print("pipeline")
print(" name:", response.name)
print(" display_name:", response.display_name)
print(" state:", response.state)
print(" training_task_definition:", response.training_task_definition)
print(" training_task_inputs:", dict(response.training_task_inputs))
print(" create_time:", response.create_time)
print(" start_time:", response.start_time)
print(" end_time:", response.end_time)
print(" update_time:", response.update_time)
print(" labels:", dict(response.labels))
return response
response = get_training_pipeline(pipeline_id)
"""
Explanation: Get information on a training pipeline
Now get pipeline information for just this training pipeline instance. The helper function gets the pipeline information for just this job by calling the pipeline client service's get_training_pipeline method, with the following parameter:
name: The Vertex fully qualified pipeline identifier.
When the model is done training, the pipeline state will be PIPELINE_STATE_SUCCEEDED.
End of explanation
"""
while True:
response = get_training_pipeline(pipeline_id, True)
if response.state != aip.PipelineState.PIPELINE_STATE_SUCCEEDED:
print("Training job has not completed:", response.state)
model_to_deploy_id = None
if response.state == aip.PipelineState.PIPELINE_STATE_FAILED:
raise Exception("Training Job Failed")
else:
model_to_deploy = response.model_to_upload
model_to_deploy_id = model_to_deploy.name
print("Training Time:", response.end_time - response.start_time)
break
time.sleep(60)
print("model to deploy:", model_to_deploy_id)
"""
Explanation: Deployment
Training the above model may take upwards of 30 minutes.
Once your model is done training, you can calculate the actual time it took to train the model by subtracting end_time from start_time. For your model, you will need to know the fully qualified Vertex Model resource identifier, which the pipeline service assigned to it. You can get this from the returned pipeline instance as the field model_to_deploy.name.
End of explanation
"""
def list_model_evaluations(name):
response = clients["model"].list_model_evaluations(parent=name)
for evaluation in response:
print("model_evaluation")
print(" name:", evaluation.name)
print(" metrics_schema_uri:", evaluation.metrics_schema_uri)
metrics = json_format.MessageToDict(evaluation._pb.metrics)
for metric in metrics.keys():
print(metric)
print("logloss", metrics["logLoss"])
print("auPrc", metrics["auPrc"])
return evaluation.name
last_evaluation = list_model_evaluations(model_to_deploy_id)
"""
Explanation: Model information
Now that your model is trained, you can get some information on your model.
Evaluate the Model resource
Now find out how good the model service believes your model is. As part of training, some portion of the dataset was set aside as the test (holdout) data, which is used by the pipeline service to evaluate the model.
List evaluations for all slices
Use this helper function list_model_evaluations, which takes the following parameter:
name: The Vertex fully qualified model identifier for the Model resource.
This helper function uses the model client service's list_model_evaluations method, which takes the same parameter. The response object from the call is a list, where each element is an evaluation metric.
For each evaluation (you probably only have one), you then print all the key names for each metric in the evaluation, and for a small set (logLoss and auPrc) you print the result.
End of explanation
"""
ENDPOINT_NAME = "iris_endpoint-" + TIMESTAMP
def create_endpoint(display_name):
endpoint = {"display_name": display_name}
response = clients["endpoint"].create_endpoint(parent=PARENT, endpoint=endpoint)
print("Long running operation:", response.operation.name)
result = response.result(timeout=300)
print("result")
print(" name:", result.name)
print(" display_name:", result.display_name)
print(" description:", result.description)
print(" labels:", result.labels)
print(" create_time:", result.create_time)
print(" update_time:", result.update_time)
return result
result = create_endpoint(ENDPOINT_NAME)
"""
Explanation: Deploy the Model resource
Now deploy the trained Vertex Model resource you created with AutoML. This requires two steps:
Create an Endpoint resource for deploying the Model resource to.
Deploy the Model resource to the Endpoint resource.
Create an Endpoint resource
Use this helper function create_endpoint to create an endpoint to deploy the model to for serving predictions, with the following parameter:
display_name: A human readable name for the Endpoint resource.
The helper function uses the endpoint client service's create_endpoint method, which takes the following parameter:
display_name: A human readable name for the Endpoint resource.
Creating an Endpoint resource returns a long running operation, since it may take a few moments to provision the Endpoint resource for serving. You call response.result(), which is a synchronous call and will return when the Endpoint resource is ready. The helper function returns the Endpoint resource object; its fully qualified identifier is available in the object's name field.
End of explanation
"""
# The full unique ID for the endpoint
endpoint_id = result.name
# The short numeric ID for the endpoint
endpoint_short_id = endpoint_id.split("/")[-1]
print(endpoint_id)
"""
Explanation: Now get the unique identifier for the Endpoint resource you created.
End of explanation
"""
MIN_NODES = 1
MAX_NODES = 1
"""
Explanation: Compute instance scaling
You have several choices on scaling the compute instances for handling your online prediction requests:
Single Instance: The online prediction requests are processed on a single compute instance.
Set the minimum (MIN_NODES) and maximum (MAX_NODES) number of compute instances to one.
Manual Scaling: The online prediction requests are split across a fixed number of compute instances that you manually specified.
Set the minimum (MIN_NODES) and maximum (MAX_NODES) number of compute instances to the same number of nodes. When a model is first deployed to the instance, the fixed number of compute instances are provisioned and online prediction requests are evenly distributed across them.
Auto Scaling: The online prediction requests are split across a scaleable number of compute instances.
Set the minimum (MIN_NODES) number of compute instances to provision when a model is first deployed and to de-provision, and set the maximum (MAX_NODES) number of compute instances to provision, depending on load conditions.
The minimum number of compute instances corresponds to the field min_replica_count and the maximum number of compute instances corresponds to the field max_replica_count, in your subsequent deployment request.
End of explanation
"""
DEPLOYED_NAME = "iris_deployed-" + TIMESTAMP
def deploy_model(
model, deployed_model_display_name, endpoint, traffic_split={"0": 100}
):
if DEPLOY_GPU:
machine_spec = {
"machine_type": DEPLOY_COMPUTE,
"accelerator_type": DEPLOY_GPU,
"accelerator_count": DEPLOY_NGPU,
}
else:
machine_spec = {
"machine_type": DEPLOY_COMPUTE,
"accelerator_count": 0,
}
deployed_model = {
"model": model,
"display_name": deployed_model_display_name,
"dedicated_resources": {
"min_replica_count": MIN_NODES,
"max_replica_count": MAX_NODES,
"machine_spec": machine_spec,
},
"enable_container_logging": False,
}
response = clients["endpoint"].deploy_model(
endpoint=endpoint, deployed_model=deployed_model, traffic_split=traffic_split
)
print("Long running operation:", response.operation.name)
result = response.result()
print("result")
deployed_model = result.deployed_model
print(" deployed_model")
print(" id:", deployed_model.id)
print(" model:", deployed_model.model)
print(" display_name:", deployed_model.display_name)
print(" create_time:", deployed_model.create_time)
return deployed_model.id
deployed_model_id = deploy_model(model_to_deploy_id, DEPLOYED_NAME, endpoint_id)
"""
Explanation: Deploy Model resource to the Endpoint resource
Use this helper function deploy_model to deploy the model to the endpoint you created for serving predictions, with the following parameters:
model: The Vertex fully qualified identifier of the Model resource to upload (deploy) from the training pipeline.
deployed_model_display_name: A human readable name for the deployed model.
endpoint: The Vertex fully qualified Endpoint resource identifier to deploy the Model resource to.
The helper function calls the Endpoint client service's method deploy_model, which takes the following parameters:
endpoint: The Vertex fully qualified Endpoint resource identifier to deploy the Model resource to.
deployed_model: The requirements for deploying the model.
traffic_split: Percent of traffic at endpoint that goes to this model, which is specified as a dictionary of one or more key/value pairs.
If only one model, then specify as { "0": 100 }, where "0" refers to this model being uploaded and 100 means 100% of the traffic.
If there are existing models on the endpoint, across which the traffic will be split, then specify as { "0": percent, model_id: percent, ... }, where model_id is the model id of an existing model already deployed to the endpoint. The percents must add up to 100.
Let's now dive deeper into the deployed_model parameter. This parameter is specified as a Python dictionary with the minimum required fields:
model: The Vertex fully qualified identifier of the (upload) Model resource to deploy.
display_name: A human readable name for the deployed model.
dedicated_resources: This refers to how many compute instances (replicas) are scaled for serving prediction requests.
machine_spec: The compute instance to provision. Use the variable you set earlier DEPLOY_GPU != None to use a GPU; otherwise only a CPU is allocated.
min_replica_count: The number of compute instances to initially provision, which you set earlier as the variable MIN_NODES.
max_replica_count: The maximum number of compute instances to scale to, which you set earlier as the variable MAX_NODES.
enable_container_logging: This enables logging of container events, such as execution failures (by default, container logging is disabled). Container logging is typically enabled when debugging the deployment and then disabled when deployed for production.
Traffic Split
Let's now dive deeper into the traffic_split parameter. This parameter is specified as a Python dictionary. This might at first be a tad bit confusing. Let me explain: you can deploy more than one instance of your model to an endpoint, and then set how much (percent) goes to each instance.
Why would you do that? Perhaps you already have a previous version deployed in production -- let's call that v1. You got better model evaluation on v2, but you don't know for certain that it is really better until you deploy to production. So in the case of traffic split, you might want to deploy v2 to the same endpoint as v1, but it only gets, say, 10% of the traffic. That way, you can monitor how well it does without disrupting the majority of users -- until you make a final decision.
Response
The method returns a long running operation response. We will wait synchronously for the operation to complete by calling response.result(), which will block until the model is deployed. If this is the first time a model is deployed to the endpoint, it may take a few additional minutes to complete provisioning of resources.
End of explanation
"""
INSTANCE = {
"petal_length": "1.4",
"petal_width": "1.3",
"sepal_length": "5.1",
"sepal_width": "2.8",
}
"""
Explanation: Make a online prediction request with explainability
Now do an online prediction with explainability against your deployed model. In this method, the prediction response will include an explanation of how the features contributed to the prediction.
Make test item
You will use synthetic data as a test data item. Don't be concerned that we are using synthetic data -- we just want to demonstrate how to make a prediction.
End of explanation
"""
def explain_item(
data_items, endpoint, parameters_dict, deployed_model_id, silent=False
):
parameters = json_format.ParseDict(parameters_dict, Value())
# The format of each instance should conform to the deployed model's prediction input schema.
instances = [json_format.ParseDict(s, Value()) for s in data_items]
response = clients["prediction"].explain(
endpoint=endpoint,
instances=instances,
parameters=parameters,
deployed_model_id=deployed_model_id,
)
if silent:
return response
print("response")
print(" deployed_model_id:", response.deployed_model_id)
try:
predictions = response.predictions
print("predictions")
for prediction in predictions:
print(" prediction:", dict(prediction))
except:
pass
explanations = response.explanations
print("explanations")
for explanation in explanations:
print(explanation)
return response
response = explain_item([INSTANCE], endpoint_id, None, None)
"""
Explanation: Make a prediction with explanation
Ok, now you have a test item. Use this helper function explain_item, which takes the following parameters:
data_items: The test tabular data items.
endpoint: The Vertex fully qualified identifier for the Endpoint resource where the Model resource was deployed.
parameters_dict: Additional filtering parameters for serving prediction results -- in your case you will pass None.
This function uses the prediction client service and calls the explain method with the following parameters:
endpoint: The Vertex fully qualified identifier for the Endpoint resource where the Model resource was deployed.
instances: A list of instances (data items) to predict.
parameters: Additional filtering parameters for serving prediction results. Note, image segmentation models do not support additional parameters.
deployed_model_id: The Vertex fully qualified identifier for the deployed model, when more than one model is deployed at the endpoint. Otherwise, if only one model deployed, can be set to None.
Request
The format of each instance is a dictionary of feature name/value pairs, as in the INSTANCE item above:
{ 'petal_length': value, 'petal_width': value, 'sepal_length': value, 'sepal_width': value }
Since the explain() method can take multiple items (instances), you send your single test item as a list of one test item. As a final step, you package the instances list into Google's protobuf format -- which is what we pass to the explain() method.
Response
The response object returns a list, where each element in the list corresponds to the corresponding instance in the request. You will see in the output for each prediction -- in this case there is just one:
deployed_model_id -- The Vertex fully qualified identifier for the Model resource that did the prediction/explanation.
predictions -- The predicted class and confidence level between 0 and 1.
confidences: Confidence level in the prediction.
displayNames: The predicted label.
explanations -- How each feature contributed to the prediction.
End of explanation
"""
import numpy as np
try:
predictions = response.predictions
label = np.argmax(predictions[0]["scores"])
cls = predictions[0]["classes"][label]
print("Predicted Value:", cls, predictions[0]["scores"][label])
except:
pass
"""
Explanation: Understanding the explanations response
First, you will look at what your model predicted and compare it to the actual value.
End of explanation
"""
from tabulate import tabulate
feature_names = ["petal_length", "petal_width", "sepal_length", "sepal_width"]
attributions = response.explanations[0].attributions[0].feature_attributions
rows = []
for i, val in enumerate(feature_names):
rows.append([val, INSTANCE[val], attributions[val]])
print(tabulate(rows, headers=["Feature name", "Feature value", "Attribution value"]))
"""
Explanation: Examine feature attributions
Next you will look at the feature attributions for this particular example. Positive attribution values mean a particular feature pushed your model prediction up by that amount, and vice versa for negative attribution values.
End of explanation
"""
import random
# Prepare 10 test examples to your model for prediction using a random distribution to generate
# test instances
instances = []
for i in range(10):
pl = str(random.uniform(1.0, 2.0))
pw = str(random.uniform(1.0, 2.0))
sl = str(random.uniform(4.0, 6.0))
sw = str(random.uniform(2.0, 4.0))
instances.append(
{"petal_length": pl, "petal_width": pw, "sepal_length": sl, "sepal_width": sw}
)
response = explain_item(instances, endpoint_id, None, None, silent=True)
"""
Explanation: Check your explanations and baselines
To better make sense of the feature attributions you're getting, you should compare them with your model's baseline. In most cases, the sum of your attribution values + the baseline should be very close to your model's predicted value for each input. Also note that for regression models, the baseline_score returned from AI Explanations will be the same for each example sent to your model. For classification models, each class will have its own baseline.
In this section you'll send 10 test examples to your model for prediction in order to compare the feature attributions with the baseline. Then you'll run each test example's attributions through a sanity check in the sanity_check_explanations method.
Get explanations
End of explanation
"""
def sanity_check_explanations(
explanation, prediction, mean_tgt_value=None, variance_tgt_value=None
):
passed_test = 0
total_test = 1
# `attributions` is a dict where keys are the feature names
# and values are the feature attributions for each feature
baseline_score = explanation.attributions[0].baseline_output_value
print("baseline:", baseline_score)
# Sanity check 1
# The prediction at the input is equal to that at the baseline.
# Please use a different baseline. Some suggestions are: random input, training
# set mean.
if abs(prediction - baseline_score) <= 0.05:
print("Warning: example score and baseline score are too close.")
print("You might not get attributions.")
else:
passed_test += 1
print("Sanity Check 1: Passed")
print(passed_test, " out of ", total_test, " sanity checks passed.")
i = 0
for explanation in response.explanations:
try:
prediction = np.max(response.predictions[i]["scores"])
except TypeError:
prediction = np.max(response.predictions[i])
sanity_check_explanations(explanation, prediction)
i += 1
"""
Explanation: Sanity check
In the function below you perform a sanity check on the explanations.
End of explanation
"""
def undeploy_model(deployed_model_id, endpoint):
response = clients["endpoint"].undeploy_model(
endpoint=endpoint, deployed_model_id=deployed_model_id, traffic_split={}
)
print(response)
undeploy_model(deployed_model_id, endpoint_id)
"""
Explanation: Undeploy the Model resource
Now undeploy your Model resource from the serving Endpoint resource. Use this helper function undeploy_model, which takes the following parameters:
deployed_model_id: The model deployment identifier returned by the endpoint service when the Model resource was deployed to it.
endpoint: The Vertex fully qualified identifier for the Endpoint resource where the Model resource is deployed.
This function calls the endpoint client service's method undeploy_model, with the following parameters:
deployed_model_id: The model deployment identifier returned by the endpoint service when the Model resource was deployed.
endpoint: The Vertex fully qualified identifier for the Endpoint resource where the Model resource is deployed.
traffic_split: How to split traffic among the remaining deployed models on the Endpoint resource.
Since this is the only deployed model on the Endpoint resource, you simply can leave traffic_split empty by setting it to {}.
End of explanation
"""
delete_dataset = True
delete_pipeline = True
delete_model = True
delete_endpoint = True
delete_batchjob = True
delete_customjob = True
delete_hptjob = True
delete_bucket = True
# Delete the dataset using the Vertex fully qualified identifier for the dataset
try:
if delete_dataset and "dataset_id" in globals():
clients["dataset"].delete_dataset(name=dataset_id)
except Exception as e:
print(e)
# Delete the training pipeline using the Vertex fully qualified identifier for the pipeline
try:
if delete_pipeline and "pipeline_id" in globals():
clients["pipeline"].delete_training_pipeline(name=pipeline_id)
except Exception as e:
print(e)
# Delete the model using the Vertex fully qualified identifier for the model
try:
if delete_model and "model_to_deploy_id" in globals():
clients["model"].delete_model(name=model_to_deploy_id)
except Exception as e:
print(e)
# Delete the endpoint using the Vertex fully qualified identifier for the endpoint
try:
if delete_endpoint and "endpoint_id" in globals():
clients["endpoint"].delete_endpoint(name=endpoint_id)
except Exception as e:
print(e)
# Delete the batch job using the Vertex fully qualified identifier for the batch job
try:
if delete_batchjob and "batch_job_id" in globals():
clients["job"].delete_batch_prediction_job(name=batch_job_id)
except Exception as e:
print(e)
# Delete the custom job using the Vertex fully qualified identifier for the custom job
try:
if delete_customjob and "job_id" in globals():
clients["job"].delete_custom_job(name=job_id)
except Exception as e:
print(e)
# Delete the hyperparameter tuning job using the Vertex fully qualified identifier for the hyperparameter tuning job
try:
if delete_hptjob and "hpt_job_id" in globals():
clients["job"].delete_hyperparameter_tuning_job(name=hpt_job_id)
except Exception as e:
print(e)
if delete_bucket and "BUCKET_NAME" in globals():
! gsutil rm -r $BUCKET_NAME
"""
Explanation: Cleaning up
To clean up all GCP resources used in this project, you can delete the GCP
project you used for the tutorial.
Otherwise, you can delete the individual resources you created in this tutorial:
Dataset
Pipeline
Model
Endpoint
Batch Job
Custom Job
Hyperparameter Tuning Job
Cloud Storage Bucket
End of explanation
"""
|
khalido/nd101
|
sentiment_network/Sentiment Classification - Project 3 Solution.ipynb
|
gpl-3.0
|
def pretty_print_review_and_label(i):
print(labels[i] + "\t:\t" + reviews[i][:80] + "...")
g = open('reviews.txt','r') # What we know!
reviews = list(map(lambda x:x[:-1],g.readlines()))
g.close()
g = open('labels.txt','r') # What we WANT to know!
labels = list(map(lambda x:x[:-1].upper(),g.readlines()))
g.close()
len(reviews)
reviews[0]
labels[0]
"""
Explanation: Sentiment Classification & How To "Frame Problems" for a Neural Network
by Andrew Trask
Twitter: @iamtrask
Blog: http://iamtrask.github.io
What You Should Already Know
neural networks, forward and back-propagation
stochastic gradient descent
mean squared error
and train/test splits
Where to Get Help if You Need it
Re-watch previous Udacity Lectures
Leverage the recommended Course Reading Material - Grokking Deep Learning (40% Off: traskud17)
Shoot me a tweet @iamtrask
Tutorial Outline:
Intro: The Importance of "Framing a Problem"
Curate a Dataset
Developing a "Predictive Theory"
PROJECT 1: Quick Theory Validation
Transforming Text to Numbers
PROJECT 2: Creating the Input/Output Data
Putting it all together in a Neural Network
PROJECT 3: Building our Neural Network
Understanding Neural Noise
PROJECT 4: Making Learning Faster by Reducing Noise
Analyzing Inefficiencies in our Network
PROJECT 5: Making our Network Train and Run Faster
Further Noise Reduction
PROJECT 6: Reducing Noise by Strategically Reducing the Vocabulary
Analysis: What's going on in the weights?
Lesson: Curate a Dataset
End of explanation
"""
print("labels.txt \t : \t reviews.txt\n")
pretty_print_review_and_label(2137)
pretty_print_review_and_label(12816)
pretty_print_review_and_label(6267)
pretty_print_review_and_label(21934)
pretty_print_review_and_label(5297)
pretty_print_review_and_label(4998)
"""
Explanation: Lesson: Develop a Predictive Theory
End of explanation
"""
from collections import Counter
import numpy as np
positive_counts = Counter()
negative_counts = Counter()
total_counts = Counter()
for i in range(len(reviews)):
if(labels[i] == 'POSITIVE'):
for word in reviews[i].split(" "):
positive_counts[word] += 1
total_counts[word] += 1
else:
for word in reviews[i].split(" "):
negative_counts[word] += 1
total_counts[word] += 1
positive_counts.most_common()
pos_neg_ratios = Counter()
for term,cnt in list(total_counts.most_common()):
if(cnt > 100):
pos_neg_ratio = positive_counts[term] / float(negative_counts[term]+1)
pos_neg_ratios[term] = pos_neg_ratio
for word,ratio in pos_neg_ratios.most_common():
if(ratio > 1):
pos_neg_ratios[word] = np.log(ratio)
else:
pos_neg_ratios[word] = -np.log((1 / (ratio+0.01)))
# words most frequently seen in a review with a "POSITIVE" label
pos_neg_ratios.most_common()
# words most frequently seen in a review with a "NEGATIVE" label
list(reversed(pos_neg_ratios.most_common()))[0:30]
"""
Explanation: Project 1: Quick Theory Validation
End of explanation
"""
from IPython.display import Image
review = "This was a horrible, terrible movie."
Image(filename='sentiment_network.png')
review = "The movie was excellent"
Image(filename='sentiment_network_pos.png')
"""
Explanation: Transforming Text into Numbers
End of explanation
"""
vocab = set(total_counts.keys())
vocab_size = len(vocab)
print(vocab_size)
list(vocab)
import numpy as np
layer_0 = np.zeros((1,vocab_size))
layer_0
from IPython.display import Image
Image(filename='sentiment_network.png')
word2index = {}
for i,word in enumerate(vocab):
word2index[word] = i
word2index
def update_input_layer(review):
global layer_0
# clear out previous state, reset the layer to be all 0s
layer_0 *= 0
for word in review.split(" "):
layer_0[0][word2index[word]] += 1
update_input_layer(reviews[0])
layer_0
def get_target_for_label(label):
if(label == 'POSITIVE'):
return 1
else:
return 0
labels[0]
get_target_for_label(labels[0])
labels[1]
get_target_for_label(labels[1])
"""
Explanation: Project 2: Creating the Input/Output Data
End of explanation
"""
import time
import sys
import numpy as np
# Let's tweak our network from before to model these phenomena
class SentimentNetwork:
def __init__(self, reviews,labels,hidden_nodes = 10, learning_rate = 0.1):
# set our random number generator
np.random.seed(1)
self.pre_process_data(reviews, labels)
self.init_network(len(self.review_vocab),hidden_nodes, 1, learning_rate)
def pre_process_data(self, reviews, labels):
review_vocab = set()
for review in reviews:
for word in review.split(" "):
review_vocab.add(word)
self.review_vocab = list(review_vocab)
label_vocab = set()
for label in labels:
label_vocab.add(label)
self.label_vocab = list(label_vocab)
self.review_vocab_size = len(self.review_vocab)
self.label_vocab_size = len(self.label_vocab)
self.word2index = {}
for i, word in enumerate(self.review_vocab):
self.word2index[word] = i
self.label2index = {}
for i, label in enumerate(self.label_vocab):
self.label2index[label] = i
def init_network(self, input_nodes, hidden_nodes, output_nodes, learning_rate):
# Set number of nodes in input, hidden and output layers.
self.input_nodes = input_nodes
self.hidden_nodes = hidden_nodes
self.output_nodes = output_nodes
# Initialize weights
self.weights_0_1 = np.zeros((self.input_nodes,self.hidden_nodes))
self.weights_1_2 = np.random.normal(0.0, self.output_nodes**-0.5,
(self.hidden_nodes, self.output_nodes))
self.learning_rate = learning_rate
self.layer_0 = np.zeros((1,input_nodes))
def update_input_layer(self,review):
# clear out previous state, reset the layer to be all 0s
self.layer_0 *= 0
for word in review.split(" "):
if(word in self.word2index.keys()):
self.layer_0[0][self.word2index[word]] = 1
def get_target_for_label(self,label):
if(label == 'POSITIVE'):
return 1
else:
return 0
def sigmoid(self,x):
return 1 / (1 + np.exp(-x))
def sigmoid_output_2_derivative(self,output):
return output * (1 - output)
def train(self, training_reviews, training_labels):
assert(len(training_reviews) == len(training_labels))
correct_so_far = 0
start = time.time()
for i in range(len(training_reviews)):
review = training_reviews[i]
label = training_labels[i]
#### Implement the forward pass here ####
### Forward pass ###
# Input Layer
self.update_input_layer(review)
# Hidden layer
layer_1 = self.layer_0.dot(self.weights_0_1)
# Output layer
layer_2 = self.sigmoid(layer_1.dot(self.weights_1_2))
#### Implement the backward pass here ####
### Backward pass ###
# TODO: Output error
layer_2_error = layer_2 - self.get_target_for_label(label) # Output layer error is the difference between desired target and actual output.
layer_2_delta = layer_2_error * self.sigmoid_output_2_derivative(layer_2)
# TODO: Backpropagated error
layer_1_error = layer_2_delta.dot(self.weights_1_2.T) # errors propagated to the hidden layer
layer_1_delta = layer_1_error # hidden layer gradients - no nonlinearity so it's the same as the error
# TODO: Update the weights
self.weights_1_2 -= layer_1.T.dot(layer_2_delta) * self.learning_rate # update hidden-to-output weights with gradient descent step
self.weights_0_1 -= self.layer_0.T.dot(layer_1_delta) * self.learning_rate # update input-to-hidden weights with gradient descent step
if(np.abs(layer_2_error) < 0.5):
correct_so_far += 1
reviews_per_second = i / float(time.time() - start)
sys.stdout.write("\rProgress:" + str(100 * i/float(len(training_reviews)))[:4] + "% Speed(reviews/sec):" + str(reviews_per_second)[0:5] + " #Correct:" + str(correct_so_far) + " #Trained:" + str(i+1) + " Training Accuracy:" + str(correct_so_far * 100 / float(i+1))[:4] + "%")
if(i % 2500 == 0):
print("")
def test(self, testing_reviews, testing_labels):
correct = 0
start = time.time()
for i in range(len(testing_reviews)):
pred = self.run(testing_reviews[i])
if(pred == testing_labels[i]):
correct += 1
reviews_per_second = i / float(time.time() - start)
sys.stdout.write("\rProgress:" + str(100 * i/float(len(testing_reviews)))[:4] \
+ "% Speed(reviews/sec):" + str(reviews_per_second)[0:5] \
+ "% #Correct:" + str(correct) + " #Tested:" + str(i+1) + " Testing Accuracy:" + str(correct * 100 / float(i+1))[:4] + "%")
def run(self, review):
# Input Layer
self.update_input_layer(review.lower())
# Hidden layer
layer_1 = self.layer_0.dot(self.weights_0_1)
# Output layer
layer_2 = self.sigmoid(layer_1.dot(self.weights_1_2))
if(layer_2[0] > 0.5):
return "POSITIVE"
else:
return "NEGATIVE"
mlp = SentimentNetwork(reviews[:-1000],labels[:-1000], learning_rate=0.1)
# evaluate our model before training (just to show how horrible it is)
mlp.test(reviews[-1000:],labels[-1000:])
# train the network
mlp.train(reviews[:-1000],labels[:-1000])
mlp = SentimentNetwork(reviews[:-1000],labels[:-1000], learning_rate=0.01)
# train the network
mlp.train(reviews[:-1000],labels[:-1000])
mlp = SentimentNetwork(reviews[:-1000],labels[:-1000], learning_rate=0.001)
# train the network
mlp.train(reviews[:-1000],labels[:-1000])
"""
Explanation: Project 3: Building a Neural Network
Start with your neural network from the last chapter
3 layer neural network
no non-linearity in hidden layer
use our functions to create the training data
create a "pre_process_data" function to create vocabulary for our training data generating functions
modify "train" to train over the entire corpus
Where to Get Help if You Need it
Re-watch previous week's Udacity Lectures
Chapters 3-5 - Grokking Deep Learning - (40% Off: traskud17)
End of explanation
"""
|
xesscorp/myhdlpeek
|
examples/peeker_tables.ipynb
|
mit
|
from myhdl import *
from myhdlpeek import Peeker # Import the myhdlpeeker module.
def mux(z, a, b, sel):
"""A simple multiplexer."""
@always_comb
def mux_logic():
if sel == 1:
z.next = a # Signal a sent to mux output when sel is high.
else:
z.next = b # Signal b sent to mux output when sel is low.
return mux_logic
# Create some signals to attach to the multiplexer.
a, b, z = [Signal(0) for _ in range(3)] # Integer signals for the inputs & output.
sel = Signal(intbv(0)[1:]) # Binary signal for the selector.
# Create some Peekers to monitor the multiplexer I/Os.
Peeker.clear() # Clear any existing Peekers. (Start with a clean slate.)
Peeker(a, 'a') # Add a Peeker to the a input.
Peeker(b, 'b') # Add a Peeker to the b input.
Peeker(z, 'z') # Add a peeker to the z output.
Peeker(sel, 'select') # Add a Peeker to the select input. The Peeker label doesn't have to match the signal name.
# Instantiate mux.
mux_1 = mux(z, a, b, sel)
# Apply random patterns to the multiplexer.
from random import randrange
def test():
for _ in range(8):
a.next, b.next, sel.next = randrange(8), randrange(8), randrange(2)
yield delay(1)
# Simulate the multiplexer, testbed and the peekers.
sim = Simulation(mux_1, test(), *Peeker.instances()).run()
"""
Explanation: Table of Contents
1  Tabular Output
Tabular Output
In addition to waveforms, myhdlpeek also lets you display the captured traces as tables.
To demonstrate, I'll use our old friend the multiplexer:
End of explanation
"""
Peeker.to_html_table()
"""
Explanation: Once the simulation has run, I can display the results using a table instead of waveforms:
End of explanation
"""
Peeker.to_html_table('select a b z', start_time=3) # Select and change order of signals, and set start time.
"""
Explanation: I can use the same options for tabular output that are available for showing waveforms:
End of explanation
"""
Peeker.to_text_table('select a b z')
"""
Explanation: There's even a version for use in console mode (outside of the Jupyter environment):
End of explanation
"""
|
mikekestemont/wuerzb15
|
Chapter 2 - Stepping up with SciPy.ipynb
|
mit
|
import scipy as sp
"""
Explanation: Chapter 2 - Stepping up with SciPy
Numpy is a powerful, yet very basic library, which can be a little abstract to introduce -- and a little tedious to practice. To perform more interesting things with Numpy matrices, we now turn to a number of interesting libraries, which have been built around numpy, or which were designed to interact closely with it.
Clustering with Scipy
SciPy stands for 'Scientific Python': as its name suggests, this library extends Numpy's raw number crunching capabilities, with interesting scientific functionality, including the calculation of distances between vectors, or common statistical tests. Scipy is commonly imported under the name sp:
End of explanation
"""
import pickle
titles, authors, words, X = pickle.load(open("dummy.p", "rb"))
"""
Explanation: Loading the data
It is time to get practical! In the data directory in the repository for this course, I have included a corpus representing novels by three famous British authors from the Victorian era: Jane Austen, Charles Dickens, and William Thackeray. In the next code block, I load these texts and turn them into a vectorized matrix. You can simply execute the code block and ignore it for the time being. In the next chapter, we will dive deeper into the topic of vectorization.
End of explanation
"""
print('This dummy corpus holds:')
for title, author in zip(titles, authors):
print('\t-', title, 'by', author)
"""
Explanation: As you can see we loaded a list of titles, authors, words and a frequency table - which is named capital X. These lists are perfectly matched: the authors and titles can for instance be easily zipped together:
End of explanation
"""
print(X.shape)
"""
Explanation: The X matrix which we loaded has frequency information for these texts, concerning the 100 most frequenct words in the texts. Each column in X corresponds to the relative frequencies for a particular word:
End of explanation
"""
idx_my = words.index('my')
freqs_my = X[:, idx_my]
print(freqs_my)
"""
Explanation: The lists of words matches the names of the columns in our frequency table. To select the frequencies for the pronoun 'my' in each text, we could therefore do:
End of explanation
"""
import pandas as pd
"""
Explanation: If you are interested in getting a version of this matrix which is easier to deal with, pandas is an interesting library. Basically, it wraps a lot of functionality around numpy matrices, and makes it easier to access, for instance, columns using actual names, instead of less intuitive indices. Thus, it brings a lot of functionality to Python which you might know from e.g. R. Pandas is imported as pd conventionally:
End of explanation
"""
df = pd.DataFrame(X, columns=words, index=titles)
"""
Explanation: To turn X into a pandas DataFrame (which is the most important object in pandas), we could do this:
End of explanation
"""
df['my']
"""
Explanation: This command will construct a nice table out of our data matrix, which can be easily indexed. The example with 'my' above:
End of explanation
"""
df.to_latex()[:1000]
"""
Explanation: One very nice property of pandas is that it can be easily used to move around data in a variety of formats (which is what I mainly use it for). Creating a LaTeX representation of this matrix, for instance, is super-easy:
End of explanation
"""
from scipy.spatial.distance import pdist, squareform
"""
Explanation: Saving and writing data is also possible for a whole bunch of other formats, including Excel, csv, etc.
Clustering
One common methodology in stylometry is clustering: by drawing a tree diagram or 'dendrogram', representing the relationships between the texts in a corpus, we attempt to visualize the main stylistic structure in our data. Texts that cluster together under a similar branch in the resulting diagram can be argued to be stylistically closer to each other than texts which occupy completely different places in the tree. Texts by the same authors, for instance, will often form tight clades in the tree, because they are written in a similar style.
Clustering algorithms are based on the distances between texts: clustering algorithms typically start by calculating the distance between each pair of texts in a corpus, so that we know for each text how (dis)similar it is from any other text. Only after these distances have been fully calculated, we have the clustering algorithm start building a tree representation, in which the similar texts are joined together and merged into new nodes. To create a distance matrix, scipy offers the convenient functions pdist and squareform, which can be used to calculate the pairwise distances between all the rows in a matrix (i.e. all the texts in a corpus, in our case):
End of explanation
"""
dm = squareform(pdist(X, 'cityblock'))
print(dm.shape)
"""
Explanation: We can now run this function on our corpus. To obtain a nice and clean matrix, we apply squareform() to pdist()'s result: like that, we obtain a matrix which has a row as well as a column for each of our original texts. This representation is a bit superfluous, because matrix[i][j] will be identical to matrix[j][i]. This is because most distance metrics in stylometry are symmetric (such as the cityblock or Manhattan distance used below): the distance of document B to document A is equal to the distance from document A to document B.
End of explanation
"""
print(dm[3][3])
print(dm[8][8])
"""
Explanation: As is clear from the shape info, we have obtained a 9 by 9 matrix, which holds the distance between each pair of texts. Note that the distance from a text to itself is of course zero:
End of explanation
"""
print(dm[2][3])
print(dm[3][2])
"""
Explanation: Additionally, the distance from text A to text B, is equal to the distance from B to A:
End of explanation
"""
from scipy.cluster.hierarchy import linkage
linkage_object = linkage(dm, method='ward')
"""
Explanation: To be able to visualize a dendrogram, we must first take care of the linkages in the tree: this procedure will start by merging ('linking') the most similar texts in the corpus into a new node; only at a later stage in the tree, these nodes of very similar texts will be joined together with nodes representing other texts. We perform this - fairly abstract - step on our distance matrix as follows:
End of explanation
"""
%matplotlib inline
"""
Explanation: Here, we specify that we wish to use Ward's linkage method, which is one of the most common linkage functions in stylometry. We are now ready to draw the actual dendrogram. To make sure that our plots are properly displayed in the notebook, we must first execute this line:
End of explanation
"""
from scipy.cluster.hierarchy import dendrogram
linkage_object = linkage(dm, method='ward')
d = dendrogram(Z=linkage_object, labels=titles, orientation='right')
"""
Explanation: We can now draw our dendrogram. Note that we annotate the outer leaf nodes in our tree (i.e. the texts) using the labels argument. With the orientation argument, we make sure that our dendrogram can be easily read:
End of explanation
"""
from scipy.cluster.hierarchy import dendrogram
linkage_object = linkage(dm, method='ward')
d = dendrogram(Z=linkage_object, labels=authors, orientation='right')
"""
Explanation: Using the authors as labels is of course also a good idea:
End of explanation
"""
dm = squareform(pdist(X, 'euclidean'))
linkage_object = linkage(dm, method='ward')
d = dendrogram(Z=linkage_object, labels=authors, orientation='right')
"""
Explanation: As we can see, Jane Austen's novels form a tight and distinctive cloud; apparently Dickens and Thackeray are more difficult to tell apart. The actual distance between nodes is hinted at on the horizontal length of the branches (i.e. the values on the x-axis in this plot). The previous code blocks used the Manhattan city block distance, a very simple distance metric which is also used in the calculation of Burrows's Delta. Note that we can easily switch to, for instance, the Euclidean distance:
End of explanation
"""
import seaborn as sns
"""
Explanation: Matplotlib is still the standard plotting library for Python -- and it is in fact the one which is used to produce the dendrograms above. Nevertheless, it is not particularly aesthetically pleasing. One interesting alternative which has recently surfaced is seaborn: it is in fact not a replacement for matplotlib but rather a better styled version of Matplotlib. It is imported as sns by convention (make sure that you install it first):
End of explanation
"""
import pandas as pd
df = pd.DataFrame(dm, columns=titles, index=titles)
cm = sns.clustermap(df)
"""
Explanation: Interestingly, seaborn comes with a series of interesting visualization options. One option which I have used in the recent past is its clustermap(). When passing it a data set (such as our distance matrix dm above), it will draw a heatmap, and then annotate the axes with cluster trees. In the following code block, we show how we can use seaborn to create such a clustermap. Note that we first convert the distance matrix into a pandas DataFrame.
End of explanation
"""
test_vector = X[0]
X_train = X[1:]
print(test_vector.shape)
print(X_train.shape)
"""
Explanation: This clustermap offers an excellent visualization of the stylistic structure in our data. Apart from the cluster trees which we already obtained, the lighter areas in the heatmap intuitively point to specific text combinations that display a lot of stylistic affinity.
A simple Burrows's Delta in SciPy
SciPy is flexible enough to be useful for a variety of analyses. One interesting application which is easy to implement with SciPy is Burrows's Delta, a well-known attribution method in stylometry. We select a single text as an 'unknown' test text (test_vector), which we will try to attribute to one of the authors in our corpus. We therefore split our data:
End of explanation
"""
from scipy.spatial.distance import cityblock as manhattan
"""
Explanation: Thus, we obtain a single test vector, consisting of 30 features, and an 8 by 30 matrix of training texts: i.e. the texts of our 'known' authors. We can now calculate to which training text our unknown test text is closest, using the Manhattan distance implemented in scipy:
End of explanation
"""
print(manhattan(test_vector, X_train[3]))
"""
Explanation: This metric can be used to calculate the cityblock distance between any two vectors, as follows:
End of explanation
"""
d = 0.0
for a, b in zip(test_vector, X_train[3]):
d += abs(a-b)
print('Distance: ', d)
"""
Explanation: We could have easily coded this ourselves:
End of explanation
"""
dists = []
for v in X_train:
dists.append(manhattan(test_vector, v))
print(dists)
"""
Explanation: We now proceed to calculate the distance between our anonymous text and all texts in our training set:
End of explanation
"""
dists = [manhattan(test_vector, v) for v in X_train]
print(dists)
"""
Explanation: Or with a list comprehension:
End of explanation
"""
import numpy as np
nearest_dist = np.min(dists)
print('Smallest distance: ', nearest_dist)
"""
Explanation: As you can see, this yields a list of 8 values: the respective distances between our test_vector and all training items. Now, we can use some convenient numpy functions to find out which training text shows the minimal distance to our anonymous text:
End of explanation
"""
largest_dist = np.max(dists)
print('Largest distance: ', largest_dist)
"""
Explanation: Apparently, the smallest distance we obtain is close to 1.04. (Note how numpy automatically casts our list of distances to an array.) As to the largest distance:
End of explanation
"""
nn_idx = np.argmin(dists) # index of the nearest neighbour
print('Index of nearest neighbour: ', nn_idx)
"""
Explanation: At this point, however, we still don't know to which exact training item our test item is closest. For this purpose, we can then use the argmin() function. This will not return the actual minimal distance, but rather the index at which the smallest value can be found:
End of explanation
"""
print('Closest text:', titles[nn_idx])
print('Closest author:', authors[nn_idx])
"""
Explanation: Apparently, the test vector's nearest neighbour can be found at the first position; this is in fact a text by the author we were looking for:
End of explanation
"""
from scipy.spatial.distance import cdist
"""
Explanation: With pdist('cityblock'), we can calculate the pairwise distances between all rows in a single matrix; using scipy.spatial.distance.cityblock(), we can calculate the same Manhattan distance between two vectors. What can we do when we would like to calculate the distances between all the rows in one matrix and all the rows in another matrix? This is a valid question, since often we would like to attribute more than one anonymous text to a series of candidate authors. This is where scipy's cdist() function comes in handy:
End of explanation
"""
X_test = [test_vector, test_vector]
"""
Explanation: Let us mimic a situation where our test_vector is in fact a list of anonymous texts:
End of explanation
"""
dists = cdist(X_test, X_train, metric='cityblock')
print(dists)
print(dists.shape)
"""
Explanation: Using cdist() we can now calculate the distance between all our test items and all our train items:
End of explanation
"""
np.argmin(dists, axis=1)
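# A small follow-up sketch (an addition on top of the notebook's own objects):
# map each test item's nearest-neighbour index back to a candidate author label.
predicted_authors = [authors[i] for i in np.argmin(dists, axis=1)]
print(predicted_authors)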
"""
Explanation: As you can see, we now obtain a 2 x 8 matrix, which holds for every test text (first dimension) the distance to all training items (second dimension). To find the training indices which minimize the distance for each test item we can again use argmin(). Note that here we have to specify axis=1, because we are interested in the minima in the second dimension:
End of explanation
"""
|
pgmpy/pgmpy_notebook
|
notebooks/2. Bayesian Networks.ipynb
|
mit
|
from IPython.display import Image
"""
Explanation: Bayesian Network
End of explanation
"""
Image('../images/2/student_full_param.png')
"""
Explanation: Bayesian Models
What are Bayesian Models
Independencies in Bayesian Networks
How is Bayesian Model encoding the Joint Distribution
How we do inference from Bayesian models
Types of methods for inference
1. What are Bayesian Models
A Bayesian network, Bayes network, belief network, Bayes(ian) model or probabilistic directed acyclic graphical model is a probabilistic graphical model (a type of statistical model) that represents a set of random variables and their conditional dependencies via a directed acyclic graph (DAG). Bayesian networks are mostly used when we want to represent causal relationships between the random variables. Bayesian Networks are parameterized using Conditional Probability Distributions (CPD). Each node in the network is parameterized using $P(node | Pa(node))$ where $Pa(node)$ represents the parents of node in the network.
We can take the example of the student model:
End of explanation
"""
from pgmpy.models import BayesianModel
from pgmpy.factors.discrete import TabularCPD
# Defining the model structure. We can define the network by just passing a list of edges.
model = BayesianModel([('D', 'G'), ('I', 'G'), ('G', 'L'), ('I', 'S')])
# Defining individual CPDs.
cpd_d = TabularCPD(variable='D', variable_card=2, values=[[0.6], [0.4]])
cpd_i = TabularCPD(variable='I', variable_card=2, values=[[0.7], [0.3]])
# The representation of CPD in pgmpy is a bit different from the CPD shown in the above picture. In pgmpy the columns
# are the evidence configurations and the rows are the states of the variable. So the grade CPD is represented like this:
#
# +---------+---------+---------+---------+---------+
# | diff | intel_0 | intel_0 | intel_1 | intel_1 |
# +---------+---------+---------+---------+---------+
# | intel | diff_0 | diff_1 | diff_0 | diff_1 |
# +---------+---------+---------+---------+---------+
# | grade_0 | 0.3 | 0.05 | 0.9 | 0.5 |
# +---------+---------+---------+---------+---------+
# | grade_1 | 0.4 | 0.25 | 0.08 | 0.3 |
# +---------+---------+---------+---------+---------+
# | grade_2 | 0.3 | 0.7 | 0.02 | 0.2 |
# +---------+---------+---------+---------+---------+
cpd_g = TabularCPD(variable='G', variable_card=3,
values=[[0.3, 0.05, 0.9, 0.5],
[0.4, 0.25, 0.08, 0.3],
[0.3, 0.7, 0.02, 0.2]],
evidence=['I', 'D'],
evidence_card=[2, 2])
cpd_l = TabularCPD(variable='L', variable_card=2,
values=[[0.1, 0.4, 0.99],
[0.9, 0.6, 0.01]],
evidence=['G'],
evidence_card=[3])
cpd_s = TabularCPD(variable='S', variable_card=2,
values=[[0.95, 0.2],
[0.05, 0.8]],
evidence=['I'],
evidence_card=[2])
# Associating the CPDs with the network
model.add_cpds(cpd_d, cpd_i, cpd_g, cpd_l, cpd_s)
# check_model checks for the network structure and CPDs and verifies that the CPDs are correctly
# defined and sum to 1.
model.check_model()
# CPDs can also be defined using the state names of the variables. If the state names are not provided
# like in the previous example, pgmpy will automatically assign names as: 0, 1, 2, ....
cpd_d_sn = TabularCPD(variable='D', variable_card=2, values=[[0.6], [0.4]], state_names={'D': ['Easy', 'Hard']})
cpd_i_sn = TabularCPD(variable='I', variable_card=2, values=[[0.7], [0.3]], state_names={'I': ['Dumb', 'Intelligent']})
cpd_g_sn = TabularCPD(variable='G', variable_card=3,
values=[[0.3, 0.05, 0.9, 0.5],
[0.4, 0.25, 0.08, 0.3],
[0.3, 0.7, 0.02, 0.2]],
evidence=['I', 'D'],
evidence_card=[2, 2],
state_names={'G': ['A', 'B', 'C'],
'I': ['Dumb', 'Intelligent'],
'D': ['Easy', 'Hard']})
cpd_l_sn = TabularCPD(variable='L', variable_card=2,
values=[[0.1, 0.4, 0.99],
[0.9, 0.6, 0.01]],
evidence=['G'],
evidence_card=[3],
state_names={'L': ['Bad', 'Good'],
'G': ['A', 'B', 'C']})
cpd_s_sn = TabularCPD(variable='S', variable_card=2,
values=[[0.95, 0.2],
[0.05, 0.8]],
evidence=['I'],
evidence_card=[2],
state_names={'S': ['Bad', 'Good'],
'I': ['Dumb', 'Intelligent']})
# These defined CPDs can be added to the model. Since the model already has CPDs associated with these variables, it will
# show a warning that pgmpy is now replacing those CPDs with the new ones.
model.add_cpds(cpd_d_sn, cpd_i_sn, cpd_g_sn, cpd_l_sn, cpd_s_sn)
model.check_model()
# We can now call some methods on the BayesianModel object.
model.get_cpds()
# Printing a CPD which doesn't have state names defined.
print(cpd_g)
# Printing a CPD with it's state names defined.
print(model.get_cpds('G'))
model.get_cardinality('G')
"""
Explanation: In pgmpy we define the network structure and the CPDs separately and then associate them with the structure. Here's an example for defining the above model:
End of explanation
"""
Image('../images/2/two_nodes.png')
"""
Explanation: 2. Independencies in Bayesian Networks
Independencies implied by the network structure of a Bayesian Network can be categorized in 2 types:
Local Independencies: Any variable in the network is independent of its non-descendants given its parents. Mathematically it can be written as: $$ (X \perp NonDesc(X) \mid Pa(X)) $$
where $NonDesc(X)$ is the set of variables which are not descendants of $X$ and $Pa(X)$ is the set of variables which are parents of $X$.
Global Independencies: For discussing global independencies in Bayesian Networks we need to look at the various network structures possible.
Starting with the case of 2 nodes, there are only 2 possible ways for them to be connected:
End of explanation
"""
Image('../images/2/three_nodes.png')
"""
Explanation: In the above two cases it is fairly obvious that a change in either node will affect the other. For the first case we can take the example of $difficulty \rightarrow grade$. If we increase the difficulty of the course, the probability of getting a higher grade decreases. For the second case we can take the example of $SAT \leftarrow Intel$. Now if we increase the probability of getting a good score on the SAT, that would imply that the student is intelligent, hence increasing the probability of $i_1$. Therefore, in both cases shown above, a change in one variable leads to a change in the other variable.
Now, there are four possible ways of connection between 3 nodes:
End of explanation
"""
# Getting the local independencies of a variable.
model.local_independencies('G')
# Getting all the local independencies in the network.
model.local_independencies(['D', 'I', 'S', 'G', 'L'])
# Active trail: For any two variables A and B in a network if any change in A influences the values of B then we say
# that there is an active trail between A and B.
# In pgmpy active_trail_nodes gives a set of nodes which are affected (i.e. correlated) by any
# change in the node passed in the argument.
model.active_trail_nodes('D')
model.active_trail_nodes('D', observed='G')
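# A small extra check (a sketch, not part of the original notebook): observing L,
# a descendant of the collider G, also activates the D -> G <- I path, so I
# (and through it S) become correlated with D.
model.active_trail_nodes('D', observed='L')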
"""
Explanation: Now in the above cases we will see the flow of influence from $A$ to $C$ under various cases.
Causal: In the general case, when we make any change in the variable $A$, it will have an effect on variable $B$ (as we discussed above) and this change in $B$ will change the values in $C$. The other possible case is when $B$ is observed, i.e. we know the value of $B$. In this case any change in $A$ won't affect $B$ since we already know its value, and hence there won't be any change in $C$ as it depends only on $B$. Mathematically we can say that: $(A \perp C | B)$.
Evidential: Similarly, in this case observing $B$ also renders $C$ independent of $A$. Otherwise, when $B$ is not observed, the influence flows from $A$ to $C$. Hence $(A \perp C | B)$.
Common Evidence: This case is a bit different from the others. When $B$ is not observed any change in $A$ reflects some change in $B$ but not in $C$. Let's take the example of $D \rightarrow G \leftarrow I$. In this case if we increase the difficulty of the course the probability of getting a higher grade reduces but this has no effect on the intelligence of the student. But when $B$ is observed let's say that the student got a good grade. Now if we increase the difficulty of the course this will increase the probability of the student to be intelligent since we already know that he got a good grade. Hence in this case $(A \perp C)$ and $( A \not\perp C | B)$. This structure is also commonly known as V structure.
Common Cause: The influence flows from $A$ to $C$ when $B$ is not observed. But when $B$ is observed and change in $A$ doesn't affect $C$ since it's only dependent on $B$. Hence here also $( A \perp C | B)$.
Let's now see a few examples of finding the independencies in a network using pgmpy:
End of explanation
"""
from pgmpy.inference import VariableElimination
infer = VariableElimination(model)
g_dist = infer.query(['G'])
print(g_dist)
"""
Explanation: 3. How is this Bayesian Network representing the Joint Distribution over the variables ?
Till now we have just been assuming, without any proof, that the Bayesian Network can represent the Joint Distribution. Now let's see how to compute the Joint Distribution from the Bayesian Network.
From the chain rule of probabiliy we know that:
$P(A, B) = P(A | B) * P(B)$
Now in this case:
$P(D, I, G, L, S) = P(L| S, G, D, I) * P(S | G, D, I) * P(G | D, I) * P(D | I) * P(I)$
Applying the local independence conditions in the above equation we will get:
$P(D, I, G, L, S) = P(L|G) * P(S|I) * P(G| D, I) * P(D) * P(I)$
From the above equation we can clearly see that the Joint Distribution over all the variables is just the product of all the CPDs in the network. Hence encoding the independencies in the Joint Distribution in a graph structure helped us in reducing the number of parameters that we need to store.
4. Inference in Bayesian Models
Till now we have only discussed how to represent Bayesian Networks. Now let's see how we can do inference in a Bayesian Model and use it to predict values over new data points for machine learning tasks. In this section we will assume that we already have our model. We will talk about constructing the models from data in later parts of this tutorial.
In inference we try to answer probability queries over the network given some other variables. For example, we might want to know the probable grade of an intelligent student in a difficult class given that he scored well on the SAT. To compute these values from a Joint Distribution we would have to reduce over the given variables, that is $I = 1$, $D = 1$, $S = 1$, and then marginalize over the other variable, that is $L$, to get $P(G | I=1, D=1, S=1)$.
But carrying out the marginalize and reduce operations on the complete Joint Distribution is computationally expensive, since we need to iterate over the whole table for each operation and the table is exponential in size in the number of variables. In Graphical Models we exploit the independencies to break these operations into smaller parts, making them much faster.
One of the very basic methods of inference in Graphical Models is Variable Elimination.
Variable Elimination
We know that:
$P(D, I, G, L, S) = P(L|G) * P(S|I) * P(G|D, I) * P(D) * P(I)$
Now let's say we just want to compute the probability of G. For that we will need to marginalize over all the other variables.
$P(G) = \sum_{D, I, L, S} P(D, I, G, L, S)$
$P(G) = \sum_{D, I, L, S} P(L|G) * P(S|I) * P(G|D, I) * P(D) * P(I)$
$P(G) = \sum_D \sum_I \sum_L \sum_S P(L|G) * P(S|I) * P(G|D, I) * P(D) * P(I)$
Now since not all the conditional distributions depend on all the variables we can push the summations inside:
$P(G) = \sum_D \sum_I \sum_L \sum_S P(L|G) * P(S|I) * P(G|D, I) * P(D) * P(I)$
$P(G) = \sum_D P(D) \sum_I P(G|D, I) * P(I) \sum_S P(S|I) \sum_L P(L|G)$
So, by pushing the summations inside we have saved a lot of computation because we have to now iterate over much smaller tables.
Let's take an example for inference using Variable Elimination in pgmpy:
End of explanation
"""
print(infer.query(['G'], evidence={'D': 'Easy', 'I': 'Intelligent'}))
"""
Explanation: There can be cases in which we want to compute the conditional distribution let's say $P(G | D=0, I=1)$. In such cases we need to modify our equations a bit:
$P(G | D=0, I=1) = \sum_L \sum_S P(L|G) * P(S| I=1) * P(G| D=0, I=1) * P(D=0) * P(I=1)$
$P(G | D=0, I=1) = P(D=0) * P(I=1) * P(G | D=0, I=1) * \sum_L P(L | G) * \sum_S P(S | I=1)$
In pgmpy we will just need to pass an extra argument in the case of conditional distributions:
End of explanation
"""
infer.map_query(['G'])
infer.map_query(['G'], evidence={'D': 'Easy', 'I': 'Intelligent'})
infer.map_query(['G'], evidence={'D': 'Easy', 'I': 'Intelligent', 'L': 'Good', 'S': 'Good'})
"""
Explanation: Predicting values from new data points
Predicting values from new data points is quite similar to computing the conditional probabilities. We need to query for the variable that we need to predict given all the other features. The only difference is that rather than getting the probability distribution we are interested in getting the most probable state of the variable.
In pgmpy this is known as MAP query. Here's an example:
End of explanation
"""
|
bloomberg/bqplot
|
examples/Tutorials/Object Model.ipynb
|
apache-2.0
|
from bqplot import (
LinearScale,
Axis,
Figure,
OrdinalScale,
LinearScale,
Bars,
Lines,
Scatter,
)
# first, let's create two vectors x and y to plot using a Lines mark
import numpy as np
x = np.linspace(-10, 10, 100)
y = np.sin(x)
# 1. Create the scales
xs = LinearScale()
ys = LinearScale()
# 2. Create the axes for x and y
xax = Axis(scale=xs, label="X")
yax = Axis(scale=ys, orientation="vertical", label="Y")
# 3. Create a Lines mark by passing in the scales
# note that Lines object is stored in `line` which can be used later to update the plot
line = Lines(x=x, y=y, scales={"x": xs, "y": ys})
# 4. Create a Figure object by assembling marks and axes
fig = Figure(marks=[line], axes=[xax, yax], title="Simple Line Chart")
# 5. Render the figure using display or just as is
fig
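# A small follow-up sketch (in a live notebook this would sit in a separate cell
# after the figure has been displayed): marks and figures are widgets, so
# updating their trait attributes redraws the rendered chart in place.
line.y = np.cos(x)                 # swap the data on the existing Lines mark
fig.title = "Simple Cosine Chart"  # figure attributes update the same way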
"""
Explanation: Object Model
bqplot is based on the Grammar of Graphics paradigm. The Object Model in bqplot gives the user the full flexibility to build custom plots. This means the API is verbose but fully customizable.
The following are the steps to build a Figure in bqplot using the Object Model:
Build the scales for x and y quantities using the Scale classes (Scales map the data into pixels in the figure)
Build the marks using the Mark classes. Marks represent the core plotting objects (lines, scatter, bars, pies etc.). Marks take the scale objects created in step 1 as arguments
Build the axes for x and y scales
Finally create a figure using Figure class. Figure takes marks and axes as inputs. Figure object is a widget (it inherits from DOMWidget) and can be rendered like any other jupyter widget
Let's look at a simple example to understand these concepts:
End of explanation
"""
# first, let's create two vectors x and y to plot a bar chart
x = list("ABCDE")
y = np.random.rand(5)
# 1. Create the scales
xs = OrdinalScale() # note the use of ordinal scale to represent categorical data
ys = LinearScale()
# 2. Create the axes for x and y
xax = Axis(scale=xs, label="X", grid_lines="none") # no grid lines needed for x
yax = Axis(
scale=ys, orientation="vertical", label="Y", tick_format=".0%"
) # note the use of tick_format to format ticks
# 3. Create a Bars mark by passing in the scales
# note that Bars object is stored in `bar` object which can be used later to update the plot
bar = Bars(x=x, y=y, scales={"x": xs, "y": ys}, padding=0.2)
# 4. Create a Figure object by assembling marks and axes
Figure(marks=[bar], axes=[xax, yax], title="Simple Bar Chart")
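# A small follow-up sketch (conceptually a separate cell): the stored `bar` mark
# can be mutated later and the rendered chart updates in place.
bar.y = np.random.rand(5)
bar.colors = ["orangered"]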
"""
Explanation: For creating other marks (like scatter, pie, bars, etc.), only step 3 needs to be changed. Let's look at a simple example that creates a bar chart:
End of explanation
"""
# first, let's create two vectors x and y
import numpy as np
x = np.linspace(-10, 10, 25)
y = 3 * x + 5
y_noise = y + 10 * np.random.randn(25) # add some random noise to y
# 1. Create the scales
xs = LinearScale()
ys = LinearScale()
# 2. Create the axes for x and y
xax = Axis(scale=xs, label="X")
yax = Axis(scale=ys, orientation="vertical", label="Y")
# 3. Create a Lines and Scatter marks by passing in the scales
# additional attributes (stroke_width, colors etc.) can be passed as attributes to the mark objects as needed
line = Lines(x=x, y=y, scales={"x": xs, "y": ys}, colors=["green"], stroke_width=3)
scatter = Scatter(
x=x, y=y_noise, scales={"x": xs, "y": ys}, colors=["red"], stroke="black"
)
# 4. Create a Figure object by assembling marks and axes
# pass both the marks (line and scatter) as a list to the marks attribute
Figure(marks=[line, scatter], axes=[xax, yax], title="Scatter and Line")
"""
Explanation: Multiple marks can be rendered in a figure. It's as easy as passing a list of marks when constructing the Figure object.
End of explanation
"""
|
mne-tools/mne-tools.github.io
|
0.19/_downloads/defd59f40a19378fba659a70b6f1ec76/plot_sensors_decoding.ipynb
|
bsd-3-clause
|
import numpy as np
import matplotlib.pyplot as plt
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
import mne
from mne.datasets import sample
from mne.decoding import (SlidingEstimator, GeneralizingEstimator, Scaler,
cross_val_multiscore, LinearModel, get_coef,
Vectorizer, CSP)
data_path = sample.data_path()
raw_fname = data_path + '/MEG/sample/sample_audvis_raw.fif'
tmin, tmax = -0.200, 0.500
event_id = {'Auditory/Left': 1, 'Visual/Left': 3} # just use two
raw = mne.io.read_raw_fif(raw_fname, preload=True)
# The subsequent decoding analyses only capture evoked responses, so we can
# low-pass the MEG data. Usually a value more like 40 Hz would be used,
# but here low-pass at 20 so we can more heavily decimate, and allow
# the example to run faster. The 2 Hz high-pass helps improve CSP.
raw.filter(2, 20)
events = mne.find_events(raw, 'STI 014')
# Set up pick list: EEG + MEG - bad channels (modify to your needs)
raw.info['bads'] += ['MEG 2443', 'EEG 053'] # bads + 2 more
# Read epochs
epochs = mne.Epochs(raw, events, event_id, tmin, tmax, proj=True,
picks=('grad', 'eog'), baseline=(None, 0.), preload=True,
reject=dict(grad=4000e-13, eog=150e-6), decim=10)
epochs.pick_types(meg=True, exclude='bads') # remove stim and EOG
X = epochs.get_data() # MEG signals: n_epochs, n_meg_channels, n_times
y = epochs.events[:, 2]  # target: auditory left vs. visual left
"""
Explanation: ===============
Decoding (MVPA)
===============
:depth: 3
.. include:: ../../links.inc
Design philosophy
Decoding (a.k.a. MVPA) in MNE largely follows the machine
learning API of the scikit-learn package.
Each estimator implements fit, transform, fit_transform, and
(optionally) inverse_transform methods. For more details on this design,
visit scikit-learn_. For additional theoretical insights into the decoding
framework in MNE, see [1]_.
For ease of comprehension, we will denote instantiations of the class using
the same name as the class but in small caps instead of camel cases.
Let's start by loading data for a simple two-class problem:
End of explanation
"""
# Uses all MEG sensors and time points as separate classification
# features, so the resulting filters used are spatio-temporal
clf = make_pipeline(Scaler(epochs.info),
Vectorizer(),
LogisticRegression(solver='lbfgs'))
scores = cross_val_multiscore(clf, X, y, cv=5, n_jobs=1)
# Mean scores across cross-validation splits
score = np.mean(scores, axis=0)
print('Spatio-temporal: %0.1f%%' % (100 * score,))
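# A standalone sketch (an aside, not part of the original pipeline): Scaler can
# also be used on its own; with scalings='mean' each channel is standardized
# using statistics pooled over epochs and time points, and the transform can be
# inverted.
scaler = Scaler(scalings='mean')
X_scaled = scaler.fit_transform(X, y)
print(X_scaled.shape, np.allclose(scaler.inverse_transform(X_scaled), X))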
"""
Explanation: Transformation classes
Scaler
^^^^^^
The :class:mne.decoding.Scaler will standardize the data based on channel
scales. In the simplest modes scalings=None or scalings=dict(...),
each data channel type (e.g., mag, grad, eeg) is treated separately and
scaled by a constant. This is the approach used by e.g.,
:func:mne.compute_covariance to standardize channel scales.
If scalings='mean' or scalings='median', each channel is scaled using
empirical measures. Each channel is scaled independently by the mean and
standard deviation, or median and interquartile range, respectively, across
all epochs and time points during :class:~mne.decoding.Scaler.fit
(during training). The :meth:~mne.decoding.Scaler.transform method is
called to transform data (training or test set) by scaling all time points
and epochs on a channel-by-channel basis. To perform both the fit and
transform operations in a single call, the
:meth:~mne.decoding.Scaler.fit_transform method may be used. To invert the
transform, :meth:~mne.decoding.Scaler.inverse_transform can be used. For
scalings='median', scikit-learn_ version 0.17+ is required.
<div class="alert alert-info"><h4>Note</h4><p>Using this class is different from directly applying
:class:`sklearn.preprocessing.StandardScaler` or
:class:`sklearn.preprocessing.RobustScaler` offered by
scikit-learn_. These scale each *classification feature*, e.g.
each time point for each channel, with mean and standard
deviation computed across epochs, whereas
:class:`mne.decoding.Scaler` scales each *channel* using mean and
standard deviation computed across all of its time points
and epochs.</p></div>
Vectorizer
^^^^^^^^^^
Scikit-learn API provides functionality to chain transformers and estimators
by using :class:sklearn.pipeline.Pipeline. We can construct decoding
pipelines and perform cross-validation and grid-search. However scikit-learn
transformers and estimators generally expect 2D data
(n_samples * n_features), whereas MNE transformers typically output data
with a higher dimensionality
(e.g. n_samples * n_channels * n_frequencies * n_times). A Vectorizer
therefore needs to be applied between the MNE and the scikit-learn steps
like:
End of explanation
"""
csp = CSP(n_components=3, norm_trace=False)
clf = make_pipeline(csp, LogisticRegression(solver='lbfgs'))
scores = cross_val_multiscore(clf, X, y, cv=5, n_jobs=1)
print('CSP: %0.1f%%' % (100 * scores.mean(),))
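# A small aside (a sketch using the CSP object defined above): with the default
# settings, CSP's transform returns the log of the average power of each
# spatial component for every epoch.
X_csp = csp.fit_transform(X, y)
print(X_csp.shape)  # (n_epochs, n_components)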
"""
Explanation: PSDEstimator
^^^^^^^^^^^^
The :class:mne.decoding.PSDEstimator
computes the power spectral density (PSD) using the multitaper
method. It takes a 3D array as input, converts it into 2D and computes the
PSD.
FilterEstimator
^^^^^^^^^^^^^^^
The :class:mne.decoding.FilterEstimator filters the 3D epochs data.
Spatial filters
Just like temporal filters, spatial filters provide weights to modify the
data along the sensor dimension. They are popular in the BCI community
because of their simplicity and ability to distinguish spatially-separated
neural activity.
Common spatial pattern
^^^^^^^^^^^^^^^^^^^^^^
:class:mne.decoding.CSP is a technique to analyze multichannel data based
on recordings from two classes [2]_ (see also
https://en.wikipedia.org/wiki/Common_spatial_pattern).
Let $X \in R^{C\times T}$ be a segment of data with
$C$ channels and $T$ time points. The data at a single time point
is denoted by $x(t)$ such that $X=[x(t), x(t+1), ..., x(t+T-1)]$.
Common spatial pattern (CSP) finds a decomposition that projects the signal
in the original sensor space to CSP space using the following transformation:
\begin{align}x_{CSP}(t) = W^{T}x(t)
:label: csp\end{align}
where each column of $W \in R^{C\times C}$ is a spatial filter and each
row of $x_{CSP}$ is a CSP component. The matrix $W$ is also
called the de-mixing matrix in other contexts. Let
$\Sigma^{+} \in R^{C\times C}$ and $\Sigma^{-} \in R^{C\times C}$
be the estimates of the covariance matrices of the two conditions.
CSP analysis is given by the simultaneous diagonalization of the two
covariance matrices
\begin{align}W^{T}\Sigma^{+}W = \lambda^{+}
:label: diagonalize_p\end{align}
\begin{align}W^{T}\Sigma^{-}W = \lambda^{-}
:label: diagonalize_n\end{align}
where $\lambda^{C}$ is a diagonal matrix whose entries are the
eigenvalues of the following generalized eigenvalue problem
\begin{align}\Sigma^{+}w = \lambda \Sigma^{-}w
:label: eigen_problem\end{align}
Large entries in the diagonal matrix corresponds to a spatial filter which
gives high variance in one class but low variance in the other. Thus, the
filter facilitates discrimination between the two classes.
.. topic:: Examples
* `sphx_glr_auto_examples_decoding_plot_decoding_csp_eeg.py`
* `sphx_glr_auto_examples_decoding_plot_decoding_csp_timefreq.py`
<div class="alert alert-info"><h4>Note</h4><p>The winning entry of the Grasp-and-lift EEG competition in Kaggle used
the :class:`~mne.decoding.CSP` implementation in MNE and was featured as
a `script of the week <sotw_>`_.</p></div>
We can use CSP with these data with:
End of explanation
"""
# Fit CSP on full data and plot
csp.fit(X, y)
csp.plot_patterns(epochs.info)
csp.plot_filters(epochs.info, scalings=1e-9)
"""
Explanation: Source power comodulation (SPoC)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Source Power Comodulation (:class:mne.decoding.SPoC) [3]_
identifies the composition of
orthogonal spatial filters that maximally correlate with a continuous target.
SPoC can be seen as an extension of the CSP where the target is driven by a
continuous variable rather than a discrete variable. Typical applications
include extraction of motor patterns using EMG power or audio patterns using
sound envelope.
.. topic:: Examples
* `sphx_glr_auto_examples_decoding_plot_decoding_spoc_CMC.py`
xDAWN
^^^^^
:class:mne.preprocessing.Xdawn is a spatial filtering method designed to
improve the signal to signal + noise ratio (SSNR) of the ERP responses [4]_.
Xdawn was originally
designed for P300 evoked potential by enhancing the target response with
respect to the non-target response. The implementation in MNE-Python is a
generalization to any type of ERP.
.. topic:: Examples
* `sphx_glr_auto_examples_preprocessing_plot_xdawn_denoising.py`
* `sphx_glr_auto_examples_decoding_plot_decoding_xdawn_eeg.py`
Effect-matched spatial filtering
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
The result of :class:mne.decoding.EMS is a spatial filter at each time
point and a corresponding time course [5]_.
Intuitively, the result gives the similarity between the filter at
each time point and the data vector (sensors) at that time point.
.. topic:: Examples
* `sphx_glr_auto_examples_decoding_plot_ems_filtering.py`
Patterns vs. filters
^^^^^^^^^^^^^^^^^^^^
When interpreting the components of the CSP (or spatial filters in general),
it is often more intuitive to think about how $x(t)$ is composed of
the different CSP components $x_{CSP}(t)$. In other words, we can
rewrite Equation :eq:csp as follows:
\begin{align}x(t) = (W^{-1})^{T}x_{CSP}(t)
:label: patterns\end{align}
The columns of the matrix $(W^{-1})^T$ are called spatial patterns.
This is also called the mixing matrix. The example
sphx_glr_auto_examples_decoding_plot_linear_model_patterns.py
discusses the difference between patterns and filters.
These can be plotted with:
End of explanation
"""
# We will train the classifier on all left visual vs auditory trials on MEG
clf = make_pipeline(StandardScaler(), LogisticRegression(solver='lbfgs'))
time_decod = SlidingEstimator(clf, n_jobs=1, scoring='roc_auc', verbose=True)
scores = cross_val_multiscore(time_decod, X, y, cv=5, n_jobs=1)
# Mean scores across cross-validation splits
scores = np.mean(scores, axis=0)
# Plot
fig, ax = plt.subplots()
ax.plot(epochs.times, scores, label='score')
ax.axhline(.5, color='k', linestyle='--', label='chance')
ax.set_xlabel('Times')
ax.set_ylabel('AUC') # Area Under the Curve
ax.legend()
ax.axvline(.0, color='k', linestyle='-')
ax.set_title('Sensor space decoding')
"""
Explanation: Decoding over time
This strategy consists in fitting a multivariate predictive model on each
time instant and evaluating its performance at the same instant on new
epochs. The :class:mne.decoding.SlidingEstimator will take as input a
pair of features $X$ and targets $y$, where $X$ has
more than 2 dimensions. For decoding over time the data $X$
is the epochs data of shape n_epochs x n_channels x n_times. As the
last dimension of $X$ is the time, an estimator will be fit
on every time instant.
This approach is analogous to SlidingEstimator-based approaches in fMRI,
except that here we are interested in when one can discriminate experimental
conditions, and therefore in figuring out when the effect of interest happens.
When working with linear models as estimators, this approach boils
down to estimating a discriminative spatial filter for each time instant.
Temporal decoding
^^^^^^^^^^^^^^^^^
We'll use logistic regression for binary classification as the machine
learning model.
End of explanation
"""
clf = make_pipeline(StandardScaler(),
LinearModel(LogisticRegression(solver='lbfgs')))
time_decod = SlidingEstimator(clf, n_jobs=1, scoring='roc_auc', verbose=True)
time_decod.fit(X, y)
coef = get_coef(time_decod, 'patterns_', inverse_transform=True)
evoked = mne.EvokedArray(coef, epochs.info, tmin=epochs.times[0])
joint_kwargs = dict(ts_args=dict(time_unit='s'),
topomap_args=dict(time_unit='s'))
evoked.plot_joint(times=np.arange(0., .500, .100), title='patterns',
**joint_kwargs)
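# A small variant (a sketch, not part of the original tutorial): the spatial
# *filters* can be retrieved and visualized the same way by asking for
# 'filters_' instead of 'patterns_'.
filters = get_coef(time_decod, 'filters_', inverse_transform=True)
evoked_filters = mne.EvokedArray(filters, epochs.info, tmin=epochs.times[0])
evoked_filters.plot_joint(times=np.arange(0., .500, .100), title='filters',
                          **joint_kwargs)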
"""
Explanation: You can retrieve the spatial filters and spatial patterns if you explicitly
use a LinearModel
End of explanation
"""
# define the Temporal generalization object
time_gen = GeneralizingEstimator(clf, n_jobs=1, scoring='roc_auc',
verbose=True)
scores = cross_val_multiscore(time_gen, X, y, cv=5, n_jobs=1)
# Mean scores across cross-validation splits
scores = np.mean(scores, axis=0)
# Plot the diagonal (it's exactly the same as the time-by-time decoding above)
fig, ax = plt.subplots()
ax.plot(epochs.times, np.diag(scores), label='score')
ax.axhline(.5, color='k', linestyle='--', label='chance')
ax.set_xlabel('Times')
ax.set_ylabel('AUC')
ax.legend()
ax.axvline(.0, color='k', linestyle='-')
ax.set_title('Decoding MEG sensors over time')
"""
Explanation: Temporal generalization
^^^^^^^^^^^^^^^^^^^^^^^
Temporal generalization is an extension of the decoding over time approach.
It consists in evaluating whether the model estimated at a particular
time instant accurately predicts any other time instant. It is analogous to
transferring a trained model to a distinct learning problem, where the
problems correspond to decoding the patterns of brain activity recorded at
distinct time instants.
The object for temporal generalization is
:class:mne.decoding.GeneralizingEstimator. It expects as input $X$
and $y$ (similarly to :class:~mne.decoding.SlidingEstimator) but
generates predictions from each model for all time instants. The class
:class:~mne.decoding.GeneralizingEstimator is generic and will treat the
last dimension as the one to be used for generalization testing. For
convenience, here, we refer to it as different tasks. If $X$
corresponds to epochs data then the last dimension is time.
This runs the analysis used in [6] and further detailed in [7]:
End of explanation
"""
fig, ax = plt.subplots(1, 1)
im = ax.imshow(scores, interpolation='lanczos', origin='lower', cmap='RdBu_r',
extent=epochs.times[[0, -1, 0, -1]], vmin=0., vmax=1.)
ax.set_xlabel('Testing Time (s)')
ax.set_ylabel('Training Time (s)')
ax.set_title('Temporal generalization')
ax.axvline(0, color='k')
ax.axhline(0, color='k')
plt.colorbar(im, ax=ax)
"""
Explanation: Plot the full (generalization) matrix:
End of explanation
"""
|
spacecowboy/article-annriskgroups-source
|
AnnGroups.ipynb
|
gpl-3.0
|
# import stuffs
%matplotlib inline
import numpy as np
import pandas as pd
from pyplotthemes import get_savefig, classictheme as plt
plt.latex = True
"""
Explanation: AnnGroups
This is just a test script to verify that the ANN code works as expected. It also serves as an example
for the usage.
It is NOT used for results reported in the article.
End of explanation
"""
from datasets import get_pbc
d = get_pbc(prints=True, norm_in=True, norm_out=False)
durcol = d.columns[0]
eventcol = d.columns[1]
if np.any(d[durcol] < 0):
raise ValueError("Negative times encountered")
# Sort the data before training - handled by ensemble
#d.sort(d.columns[0], inplace=True)
# Example: d.iloc[:, :2] for times, events
d
"""
Explanation: Load some data
End of explanation
"""
import ann
from classensemble import ClassEnsemble
mingroup = int(0.25 * d.shape[0])
def get_net(func=ann.geneticnetwork.FITNESS_SURV_KAPLAN_MIN):
hidden_count = 10
outcount = 2
l = (d.shape[1] - 2) + hidden_count + outcount + 1
net = ann.geneticnetwork((d.shape[1] - 2), hidden_count, outcount)
net.fitness_function = func
net.mingroup = mingroup
# Be explicit here even though I changed the defaults
net.connection_mutation_chance = 0.0
net.activation_mutation_chance = 0
# Some other values
net.crossover_method = net.CROSSOVER_UNIFORM
net.selection_method = net.SELECTION_TOURNAMENT
net.population_size = 100
net.generations = 1000
net.weight_mutation_chance = 0.15
net.dropout_hidden_probability = 0.5
net.dropout_input_probability = 0.8
ann.utils.connect_feedforward(net, [5, 5], hidden_act=net.TANH, out_act=net.SOFTMAX)
#c = net.connections.reshape((l, l))
#c[-outcount:, :((d.shape[1] - 2) + hidden_count)] = 1
#net.connections = c.ravel()
return net
net = get_net()
l = (d.shape[1] - 2) + net.hidden_count + 2 + 1
print(net.connections.reshape((l, l)))
hnets = []
lnets = []
netcount = 2
for i in range(netcount):
if i % 2:
n = get_net(ann.geneticnetwork.FITNESS_SURV_KAPLAN_MIN)
hnets.append(n)
else:
n = get_net(ann.geneticnetwork.FITNESS_SURV_KAPLAN_MAX)
lnets.append(n)
e = ClassEnsemble(hnets, lnets)
"""
Explanation: Create an ANN model
With all correct parameters, ensemble settings and such.
End of explanation
"""
e.fit(d, durcol, eventcol)
# grouplabels = e.predict_classes
grouplabels, mems = e.label_data(d)
for l, m in mems.items():
print("Group", l, "has", len(m), "members")
"""
Explanation: Train the ANNs
And print groupings on training data.
End of explanation
"""
from lifelines.plotting import add_at_risk_counts
from lifelines.estimation import KaplanMeierFitter
from lifelines.estimation import median_survival_times
plt.figure()
fitters = []
for g in ['high', 'mid', 'low']:
kmf = KaplanMeierFitter()
fitters.append(kmf)
members = grouplabels == g
kmf.fit(d.loc[members, durcol],
d.loc[members, eventcol],
label='{}'.format(g))
kmf.plot(ax=plt.gca())#, color=plt.colors[mi])
print("End survival rate for", g, ":",kmf.survival_function_.iloc[-1, 0])
if kmf.survival_function_.iloc[-1, 0] <= 0.5:
print("Median survival for", g, ":",
median_survival_times(kmf.survival_function_))
plt.legend(loc='best', framealpha=0.1)
plt.ylim((0, 1))
add_at_risk_counts(*fitters)
"""
Explanation: Plot grouping
End of explanation
"""
|
tensorflow/docs-l10n
|
site/en-snapshot/io/tutorials/bigtable.ipynb
|
apache-2.0
|
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Explanation: Copyright 2020 The TensorFlow Authors.
End of explanation
"""
!pip install tensorflow-io
"""
Explanation: Bigtable
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/io/tutorials/bigtable"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/io/blob/master/docs/tutorials/bigtable.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/io/blob/master/docs/tutorials/bigtable.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/io/docs/tutorials/bigtable.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>
Overview
This notebook demonstrates the basic usage and features of the tensorflow_io.bigtable module. Make sure you are familiar with these topics before continuing:
Creating a GCP project.
Installing the Cloud SDK for Bigtable
cbt tool overview
Using the emulator
Note: Jupyter runs lines prefixed with ! as shell commands, and it interpolates Python variables prefixed with $ into these commands.
Setup
End of explanation
"""
!mkdir /tools/google-cloud-sdk/.install
!gcloud --quiet components install beta cbt bigtable
!gcloud init
"""
Explanation: Note: When executing the cell below, you will be asked to log in to google cloud.
End of explanation
"""
import os
import subprocess
_emulator = subprocess.Popen(['/tools/google-cloud-sdk/bin/gcloud', 'beta', 'emulators', 'bigtable', 'start', '--host-port=127.0.0.1:8086'],
stdout=subprocess.DEVNULL,
stderr=subprocess.DEVNULL, bufsize=0)
"""
Explanation: For the sake of this example, the bigtable emulator is used. If you have your bigtable instance set up and populated with values, skip these steps and go straight to the Quickstart section.
Start the emulator in the background.
End of explanation
"""
%env BIGTABLE_EMULATOR_HOST=127.0.0.1:8086
!cbt -project "test-project" -instance "test-instance" createtable t1 families=cf1 splits=row-a,row-h,row-p,row-z
!cbt -project "test-project" -instance "test-instance" ls
"""
Explanation: Create a table
End of explanation
"""
!cbt -project "test-project" -instance "test-instance" set t1 row-a cf1:c1=A
!cbt -project "test-project" -instance "test-instance" set t1 row-b cf1:c1=B
!cbt -project "test-project" -instance "test-instance" set t1 row-c cf1:c1=C
!cbt -project "test-project" -instance "test-instance" set t1 row-d cf1:c1=D
!cbt -project "test-project" -instance "test-instance" set t1 row-e cf1:c1=E
!cbt -project "test-project" -instance "test-instance" set t1 row-f cf1:c1=F
!cbt -project "test-project" -instance "test-instance" set t1 row-g cf1:c1=G
!cbt -project "test-project" -instance "test-instance" set t1 row-h cf1:c1=H
!cbt -project "test-project" -instance "test-instance" set t1 row-i cf1:c1=I
!cbt -project "test-project" -instance "test-instance" set t1 row-j cf1:c1=J
!cbt -project "test-project" -instance "test-instance" set t1 row-k cf1:c1=K
!cbt -project "test-project" -instance "test-instance" set t1 row-l cf1:c1=L
!cbt -project "test-project" -instance "test-instance" set t1 row-m cf1:c1=M
!cbt -project "test-project" -instance "test-instance" set t1 row-n cf1:c1=N
!cbt -project "test-project" -instance "test-instance" set t1 row-o cf1:c1=O
!cbt -project "test-project" -instance "test-instance" set t1 row-p cf1:c1=P
!cbt -project "test-project" -instance "test-instance" set t1 row-q cf1:c1=Q
!cbt -project "test-project" -instance "test-instance" set t1 row-r cf1:c1=R
!cbt -project "test-project" -instance "test-instance" set t1 row-s cf1:c1=S
!cbt -project "test-project" -instance "test-instance" set t1 row-t cf1:c1=T
!cbt -project "test-project" -instance "test-instance" set t1 row-u cf1:c1=U
!cbt -project "test-project" -instance "test-instance" set t1 row-v cf1:c1=V
!cbt -project "test-project" -instance "test-instance" set t1 row-w cf1:c1=W
!cbt -project "test-project" -instance "test-instance" set t1 row-x cf1:c1=X
!cbt -project "test-project" -instance "test-instance" set t1 row-y cf1:c1=Y
!cbt -project "test-project" -instance "test-instance" set t1 row-z cf1:c1=Z
import tensorflow as tf
import numpy as np
import tensorflow_io as tfio
import random
random.seed(10)
"""
Explanation: Populate table with values
End of explanation
"""
# If using your bigtable instance replace the project_id, instance_id
# and the name of the table with suitable values.
client = tfio.bigtable.BigtableClient(project_id="test-project", instance_id="test-instance")
train_table = client.get_table("t1")
"""
Explanation: Quickstart
First you need to create a client and a table you would like to read from.
End of explanation
"""
row_set = tfio.bigtable.row_set.from_rows_or_ranges(tfio.bigtable.row_range.infinite())
train_dataset = train_table.read_rows(["cf1:c1"],row_set, output_type=tf.string)
for tensor in train_dataset:
print(tensor)
"""
Explanation: Great! Now you can create a tensorflow dataset that will read the data from our
table.
To do that, you have to provide the type of the data you wish to read,
a list of column names in the format column_family:column_name, and a row_set that
you would like to read.
To create a row_set, use the utility methods provided in the tfio.bigtable.row_set and tfio.bigtable.row_range modules. Here a row_set containing all rows is created.
Keep in mind that Bigtable reads values in lexicographical order, not the order they were put in. The rows were given random row-keys so they will be shuffled.
End of explanation
"""
for tensor in train_table.parallel_read_rows(["cf1:c1"],row_set=row_set, num_parallel_calls=2):
print(tensor)
"""
Explanation: That's it! Congrats!
Parallel read
Our dataset supports reading in parallel from Bigtable. To do that, use the parallel_read_rows method and specify num_parallel_calls as an argument. When this method is called, work is first split between workers based on SampleRowKeys.
Note: Keep in mind that when reading in parallel, the rows are not
going to be read in any particular order.
End of explanation
"""
row_range_below_300 = tfio.bigtable.row_range.right_open("row000", "row300")
my_row_set = tfio.bigtable.row_set.from_rows_or_ranges(row_range_below_300, "row585", "row832")
print(my_row_set)
"""
Explanation: Reading specific row_keys
To read the data from Bigtable, you can specify a set of rows or a range or a
combination of those.
The read_rows method expects you to provide a
RowSet. You can construct a RowSet from specific row keys or RowRanges as follows:
End of explanation
"""
my_truncated_row_set = tfio.bigtable.row_set.intersect(my_row_set,
tfio.bigtable.row_range.right_open("row200", "row700"))
print(my_truncated_row_set)
"""
Explanation: Such a row_set would contain the range of rows [row000, row300) as well as the rows row585 and row832.
You can also create a row_set from an infinite range, an empty range or a prefix (a prefix-based row_set is sketched below).
You can also intersect it with a row_range.
End of explanation
"""
from datetime import datetime
start = datetime(2020, 10, 10, 12, 0, 0)
end = datetime(2100, 10, 10, 13, 0, 0)
from_datetime = tfio.bigtable.filters.timestamp_range(start, end)
from_posix_timestamp = tfio.bigtable.filters.timestamp_range(int(start.timestamp()), int(end.timestamp()))
print("from_datetime:", from_datetime)
print("from_posix_timestamp:", from_posix_timestamp)
"""
Explanation: Specifying a version of a value
Bigtable lets you keep many values in one cell with different timestamps. You
can specify which version you want to pick using version filters. However, you
can only retrieve a two dimensional vector using tensorflow.bigtable connector, so latest filter is always appended to the user specified version filter.
Meaning, if more than one value for one cell goes through the provided filter,
the newer shall be used.
You can either use the latest filter passing the newest value, or you can
specify a time range. The time range can be provided either as python datetime
objects or a number representing seconds or microseconds since epoch.
End of explanation
"""
|
GoogleCloudPlatform/vertex-ai-samples
|
notebooks/community/migration/UJ6 legacy AutoML Natural Language Text Classification.ipynb
|
apache-2.0
|
! pip3 install google-cloud-automl
"""
Explanation: AutoML natural language text classification model
Installation
Install the latest version of AutoML SDK.
End of explanation
"""
! pip3 install google-cloud-storage
"""
Explanation: Install the Google cloud-storage library as well.
End of explanation
"""
import os
if not os.getenv("AUTORUN"):
# Automatically restart kernel after installs
import IPython
app = IPython.Application.instance()
app.kernel.do_shutdown(True)
"""
Explanation: Restart the Kernel
Once you've installed the AutoML SDK and Google cloud-storage, you need to restart the notebook kernel so it can find the packages.
End of explanation
"""
PROJECT_ID = "[your-project-id]" # @param {type:"string"}
if PROJECT_ID == "" or PROJECT_ID is None or PROJECT_ID == "[your-project-id]":
# Get your GCP project id from gcloud
shell_output = !gcloud config list --format 'value(core.project)' 2>/dev/null
PROJECT_ID = shell_output[0]
print("Project ID:", PROJECT_ID)
! gcloud config set project $PROJECT_ID
"""
Explanation: Before you begin
GPU run-time
Make sure you're running this notebook in a GPU runtime if you have that option. In Colab, select Runtime > Change Runtime Type > GPU
Set up your GCP project
The following steps are required, regardless of your notebook environment.
Select or create a GCP project. When you first create an account, you get a $300 free credit towards your compute/storage costs.
Make sure that billing is enabled for your project.
Enable the AutoML APIs and Compute Engine APIs.
Google Cloud SDK is already installed in AutoML Notebooks.
Enter your project ID in the cell below. Then run the cell to make sure the
Cloud SDK uses the right project for all the commands in this notebook.
Note: Jupyter runs lines prefixed with ! as shell commands, and it interpolates Python variables prefixed with $ into these commands.
Project ID
If you don't know your project ID, try to get your project ID using gcloud command by executing the second cell below.
End of explanation
"""
REGION = "us-central1" # @param {type: "string"}
"""
Explanation: Region
You can also change the REGION variable, which is used for operations
throughout the rest of this notebook. Below are the regions supported for AutoML. We recommend choosing, when possible, the region closest to you.
Americas: us-central1
Europe: europe-west4
Asia Pacific: asia-east1
You cannot use a Multi-Regional Storage bucket for training with AutoML. Not all regions provide support for all AutoML services. For the latest support per region, see Region support for AutoML services
End of explanation
"""
from datetime import datetime
TIMESTAMP = datetime.now().strftime("%Y%m%d%H%M%S")
"""
Explanation: Timestamp
If you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append onto the name of resources which will be created in this tutorial.
End of explanation
"""
import os
import sys
# If you are running this notebook in Colab, run this cell and follow the
# instructions to authenticate your Google Cloud account. This provides access
# to your Cloud Storage bucket and lets you submit training jobs and prediction
# requests.
# If on Vertex, then don't execute this code
if not os.path.exists("/opt/deeplearning/metadata/env_version"):
if "google.colab" in sys.modules:
from google.colab import auth as google_auth
google_auth.authenticate_user()
# If you are running this tutorial in a notebook locally, replace the string
# below with the path to your service account key and run this cell to
# authenticate your Google Cloud account.
else:
%env GOOGLE_APPLICATION_CREDENTIALS your_path_to_credentials.json
# Log in to your account on Google Cloud
! gcloud auth login
"""
Explanation: Authenticate your GCP account
If you are using AutoML Notebooks, your environment is already
authenticated. Skip this step.
Note: If you are on an AutoML notebook and run the cell, the cell knows to skip executing the authentication steps.
End of explanation
"""
BUCKET_NAME = "[your-bucket-name]" # @param {type:"string"}
if BUCKET_NAME == "" or BUCKET_NAME is None or BUCKET_NAME == "[your-bucket-name]":
BUCKET_NAME = PROJECT_ID + "aip-" + TIMESTAMP
"""
Explanation: Create a Cloud Storage bucket
The following steps are required, regardless of your notebook environment.
This tutorial is designed to use training data that is in a public Cloud Storage bucket and a local Cloud Storage bucket for your batch predictions. You may alternatively use your own training data that you have stored in a local Cloud Storage bucket.
Set the name of your Cloud Storage bucket below. It must be unique across all Cloud Storage buckets.
End of explanation
"""
! gsutil mb -l $REGION gs://$BUCKET_NAME
"""
Explanation: Only if your bucket doesn't already exist: Run the following cell to create your Cloud Storage bucket.
End of explanation
"""
! gsutil ls -al gs://$BUCKET_NAME
"""
Explanation: Finally, validate access to your Cloud Storage bucket by examining its contents:
End of explanation
"""
import json
import time
from google.cloud import automl
from google.protobuf.json_format import MessageToJson
"""
Explanation: Set up variables
Next, set up some variables used throughout the tutorial.
Import libraries and define constants
Import AutoML SDK
Import the AutoM SDK into our Python environment.
End of explanation
"""
# AutoML location root path for your dataset, model and endpoint resources
PARENT = "projects/" + PROJECT_ID + "/locations/" + REGION
"""
Explanation: AutoML constants
Set up the following constants for AutoML:
PARENT: The AutoML location root path for dataset, model and endpoint resources.
End of explanation
"""
def automl_client():
return automl.AutoMlClient()
def prediction_client():
return automl.PredictionServiceClient()
def operations_client():
return automl.AutoMlClient()._transport.operations_client
clients = {}
clients["automl"] = automl_client()
clients["prediction"] = prediction_client()
clients["operations"] = operations_client()
for client in clients.items():
print(client)
IMPORT_FILE = "gs://cloud-ml-data/NL-classification/happiness.csv"
! gsutil cat $IMPORT_FILE | head -n 10
"""
Explanation: Clients
The AutoML SDK works as a client/server model. On your side (the Python script) you will create a client that sends requests and receives responses from the server (AutoML).
You will use several clients in this tutorial, so set them all up upfront.
End of explanation
"""
dataset = {
"display_name": "happiness_" + TIMESTAMP,
"text_classification_dataset_metadata": {"classification_type": "MULTICLASS"},
}
print(
MessageToJson(
automl.CreateDatasetRequest(parent=PARENT, dataset=dataset).__dict__["_pb"]
)
)
"""
Explanation: Example output:
I went on a successful date with someone I felt sympathy and connection with.,affection
I was happy when my son got 90% marks in his examination,affection
I went to the gym this morning and did yoga.,exercise
We had a serious talk with some friends of ours who have been flaky lately. They understood and we had a good evening hanging out.,bonding
I went with grandchildren to butterfly display at Crohn Conservatory,affection
I meditated last night.,leisure
"I made a new recipe for peasant bread, and it came out spectacular!",achievement
I got gift from my elder brother which was really surprising me,affection
YESTERDAY MY MOMS BIRTHDAY SO I ENJOYED,enjoy_the_moment
Watching cupcake wars with my three teen children,affection
Create a dataset
projects.locations.datasets.create
Request
End of explanation
"""
request = clients["automl"].create_dataset(parent=PARENT, dataset=dataset)
"""
Explanation: Example output:
{
"parent": "projects/migration-ucaip-training/locations/us-central1",
"dataset": {
"displayName": "happiness_20210228224317",
"textClassificationDatasetMetadata": {
"classificationType": "MULTICLASS"
}
}
}
Call
End of explanation
"""
result = request.result()
print(MessageToJson(result.__dict__["_pb"]))
"""
Explanation: Response
End of explanation
"""
# The full unique ID for the dataset
dataset_id = result.name
# The short numeric ID for the dataset
dataset_short_id = dataset_id.split("/")[-1]
print(dataset_id)
"""
Explanation: Example output:
{
"name": "projects/116273516712/locations/us-central1/datasets/TCN2705019056410329088"
}
End of explanation
"""
input_config = {"gcs_source": {"input_uris": [IMPORT_FILE]}}
print(
MessageToJson(
automl.ImportDataRequest(name=dataset_id, input_config=input_config).__dict__[
"_pb"
]
)
)
"""
Explanation: projects.locations.datasets.importData
Request
End of explanation
"""
request = clients["automl"].import_data(name=dataset_id, input_config=input_config)
"""
Explanation: Example output:
{
"name": "projects/116273516712/locations/us-central1/datasets/TCN2705019056410329088",
"inputConfig": {
"gcsSource": {
"inputUris": [
"gs://cloud-ml-data/NL-classification/happiness.csv"
]
}
}
}
Call
End of explanation
"""
result = request.result()
print(MessageToJson(result))
"""
Explanation: Response
End of explanation
"""
model = automl.Model(
display_name="happiness_" + TIMESTAMP,
dataset_id=dataset_short_id,
text_classification_model_metadata=automl.TextClassificationModelMetadata(),
)
print(
MessageToJson(automl.CreateModelRequest(parent=PARENT, model=model).__dict__["_pb"])
)
"""
Explanation: Example output:
{}
Train a model
projects.locations.models.create
Request
End of explanation
"""
request = clients["automl"].create_model(parent=PARENT, model=model)
"""
Explanation: Example output:
{
"parent": "projects/migration-ucaip-training/locations/us-central1",
"model": {
"displayName": "happiness_20210228224317",
"datasetId": "TCN2705019056410329088",
"textClassificationModelMetadata": {}
}
}
Call
End of explanation
"""
result = request.result()
print(MessageToJson(result.__dict__["_pb"]))
"""
Explanation: Response
End of explanation
"""
# The full unique ID for the training pipeline
model_id = result.name
# The short numeric ID for the training pipeline
model_short_id = model_id.split("/")[-1]
print(model_short_id)
"""
Explanation: Example output:
{
"name": "projects/116273516712/locations/us-central1/models/TCN5333697920992542720"
}
End of explanation
"""
request = clients["automl"].list_model_evaluations(parent=model_id, filter="")
"""
Explanation: Evaluate the model
projects.locations.models.modelEvaluations.list
Call
End of explanation
"""
evaluations_list = [
json.loads(MessageToJson(me.__dict__["_pb"])) for me in request.model_evaluation
]
print(json.dumps(evaluations_list, indent=2))
# The evaluation slice
evaluation_slice = request.model_evaluation[0].name
"""
Explanation: Response
End of explanation
"""
request = clients["automl"].get_model_evaluation(name=evaluation_slice)
"""
Explanation: Example output:
```
[
{
"name": "projects/116273516712/locations/us-central1/models/TCN5333697920992542720/modelEvaluations/1436745357261371663",
"annotationSpecId": "3130761503557287936",
"createTime": "2021-03-01T02:56:28.878044Z",
"evaluatedExampleCount": 1193,
"classificationEvaluationMetrics": {
"auPrc": 0.99065405,
"confidenceMetricsEntry": [
{
"recall": 1.0,
"precision": 0.01424979,
"f1Score": 0.028099174
},
{
"confidenceThreshold": 0.05,
"recall": 1.0,
"precision": 0.5862069,
"f1Score": 0.73913044
},
{
"confidenceThreshold": 0.94,
"recall": 0.64705884,
"precision": 1.0,
"f1Score": 0.7857143
},
# REMOVED FOR BREVITY
{
"confidenceThreshold": 0.999,
"recall": 0.21372032,
"precision": 1.0,
"f1Score": 0.35217392
},
{
"confidenceThreshold": 1.0,
"recall": 0.0026385225,
"precision": 1.0,
"f1Score": 0.005263158
}
],
"logLoss": 0.14686257
},
"displayName": "achievement"
}
]
```
projects.locations.models.modelEvaluations.get
Call
End of explanation
"""
print(MessageToJson(request.__dict__["_pb"]))
"""
Explanation: Response
End of explanation
"""
test_item = ! gsutil cat $IMPORT_FILE | head -n1
test_item, test_label = str(test_item[0]).split(",")
print(test_item, test_label)
import json
import tensorflow as tf
test_item_uri = "gs://" + BUCKET_NAME + "/test.txt"
with tf.io.gfile.GFile(test_item_uri, "w") as f:
f.write(test_item + "\n")
gcs_input_uri = "gs://" + BUCKET_NAME + "/batch.csv"
with tf.io.gfile.GFile(gcs_input_uri, "w") as f:
f.write(test_item_uri + "\n")
! gsutil cat $gcs_input_uri
! gsutil cat $test_item_uri
"""
Explanation: Example output:
```
{
"name": "projects/116273516712/locations/us-central1/models/TCN5333697920992542720/modelEvaluations/1436745357261371663",
"annotationSpecId": "3130761503557287936",
"createTime": "2021-03-01T02:56:28.878044Z",
"evaluatedExampleCount": 1193,
"classificationEvaluationMetrics": {
"auPrc": 0.99065405,
"confidenceMetricsEntry": [
{
"recall": 1.0,
"precision": 0.01424979,
"f1Score": 0.028099174
},
{
"confidenceThreshold": 0.05,
"recall": 1.0,
"precision": 0.5862069,
"f1Score": 0.73913044
},
# REMOVED FOR BREVITY
{
"confidenceThreshold": 0.999,
"recall": 0.23529412,
"precision": 1.0,
"f1Score": 0.3809524
},
{
"confidenceThreshold": 1.0,
"precision": 1.0
}
],
"logLoss": 0.005436425
},
"displayName": "exercise"
}
```
Make batch predictions
Prepare files for batch prediction
End of explanation
"""
input_config = {"gcs_source": {"input_uris": [gcs_input_uri]}}
output_config = {
"gcs_destination": {"output_uri_prefix": "gs://" + f"{BUCKET_NAME}/batch_output/"}
}
print(
MessageToJson(
automl.BatchPredictRequest(
name=model_id, input_config=input_config, output_config=output_config
).__dict__["_pb"]
)
)
"""
Explanation: Example output:
gs://migration-ucaip-trainingaip-20210228224317/test.txt
I went on a successful date with someone I felt sympathy and connection with.
projects.locations.models.batchPredict
Request
End of explanation
"""
request = clients["prediction"].batch_predict(
name=model_id, input_config=input_config, output_config=output_config
)
"""
Explanation: Example output:
{
"name": "projects/116273516712/locations/us-central1/models/TCN5333697920992542720",
"inputConfig": {
"gcsSource": {
"inputUris": [
"gs://migration-ucaip-trainingaip-20210228224317/batch.csv"
]
}
},
"outputConfig": {
"gcsDestination": {
"outputUriPrefix": "gs://migration-ucaip-trainingaip-20210228224317/batch_output/"
}
}
}
Call
End of explanation
"""
result = request.result()
print(MessageToJson(result.__dict__["_pb"]))
"""
Explanation: Response
End of explanation
"""
destination_uri = output_config["gcs_destination"]["output_uri_prefix"][:-1]
! gsutil ls $destination_uri/*
! gsutil cat $destination_uri/prediction*/*.jsonl
"""
Explanation: Example output:
{}
End of explanation
"""
request = clients["automl"].deploy_model(name=model_id)
"""
Explanation: Example output:
gs://migration-ucaip-trainingaip-20210228224317/batch_output/prediction-happiness_20210228224317-2021-03-01T02:57:02.004934Z/text_classification_1.jsonl
gs://migration-ucaip-trainingaip-20210228224317/batch_output/prediction-happiness_20210228224317-2021-03-01T02:57:02.004934Z/text_classification_2.jsonl
{"textSnippet":{"contentUri":"gs://migration-ucaip-trainingaip-20210228224317/test.txt"},"annotations":[{"annotationSpecId":"5436604512770981888","classification":{"score":0.93047273},"displayName":"affection"},{"annotationSpecId":"3707222255860711424","classification":{"score":0.002518793},"displayName":"achievement"},{"annotationSpecId":"7742447521984675840","classification":{"score":1.3182563E-4},"displayName":"enjoy_the_moment"},{"annotationSpecId":"824918494343593984","classification":{"score":0.06613126},"displayName":"bonding"},{"annotationSpecId":"1977839998950440960","classification":{"score":1.5267624E-5},"displayName":"leisure"},{"annotationSpecId":"8318908274288099328","classification":{"score":8.887557E-6},"displayName":"nature"},{"annotationSpecId":"3130761503557287936","classification":{"score":7.2130124E-4},"displayName":"exercise"}]}
Make online predictions
projects.locations.models.deploy
Call
End of explanation
"""
result = request.result()
print(MessageToJson(result))
"""
Explanation: Response
End of explanation
"""
test_item = ! gsutil cat $IMPORT_FILE | head -n1
test_item, test_label = str(test_item[0]).split(",")
"""
Explanation: Example output:
{}
projects.locations.models.predict
Prepare data item for online prediction
End of explanation
"""
payload = {"text_snippet": {"content": test_item, "mime_type": "text/plain"}}
request = automl.PredictRequest(name=model_id, payload=payload)
print(MessageToJson(request.__dict__["_pb"]))
"""
Explanation: Request
End of explanation
"""
request = clients["prediction"].predict(request=request)
"""
Explanation: Example output:
{
"name": "projects/116273516712/locations/us-central1/models/TCN5333697920992542720",
"payload": {
"textSnippet": {
"content": "I went on a successful date with someone I felt sympathy and connection with.",
"mimeType": "text/plain"
}
}
}
Call
End of explanation
"""
print(MessageToJson(request.__dict__["_pb"]))
"""
Explanation: Response
End of explanation
"""
delete_dataset = True
delete_model = True
delete_bucket = True
# Delete the dataset using the AutoML fully qualified identifier for the dataset
try:
if delete_dataset:
clients["automl"].delete_dataset(name=dataset_id)
except Exception as e:
print(e)
# Delete the model using the AutoML fully qualified identifier for the model
try:
if delete_model:
clients["automl"].delete_model(name=model_id)
except Exception as e:
print(e)
if delete_bucket and "BUCKET_NAME" in globals():
! gsutil rm -r gs://$BUCKET_NAME
"""
Explanation: Example output:
{
"payload": [
{
"annotationSpecId": "5436604512770981888",
"classification": {
"score": 0.9272586
},
"displayName": "affection"
},
{
"annotationSpecId": "824918494343593984",
"classification": {
"score": 0.068884976
},
"displayName": "bonding"
},
{
"annotationSpecId": "3707222255860711424",
"classification": {
"score": 0.0028119811
},
"displayName": "achievement"
},
{
"annotationSpecId": "3130761503557287936",
"classification": {
"score": 0.0008869726
},
"displayName": "exercise"
},
{
"annotationSpecId": "7742447521984675840",
"classification": {
"score": 0.00013229548
},
"displayName": "enjoy_the_moment"
},
{
"annotationSpecId": "1977839998950440960",
"classification": {
"score": 1.5584701e-05
},
"displayName": "leisure"
},
{
"annotationSpecId": "8318908274288099328",
"classification": {
"score": 9.5975e-06
},
"displayName": "nature"
}
]
}
Cleaning up
To clean up all GCP resources used in this project, you can delete the GCP
project you used for the tutorial.
Otherwise, you can delete the individual resources you created in this tutorial.
End of explanation
"""
|
erdewit/ib_insync
|
notebooks/basics.ipynb
|
bsd-2-clause
|
import ib_insync
print(ib_insync.__all__)
"""
Explanation: Basics
Let's first take a look at what's inside the ib_insync package:
End of explanation
"""
from ib_insync import *
util.startLoop()
"""
Explanation: Importing
The following two lines are used at the top of all notebooks. The first line imports everything and the second
starts an event loop to keep the notebook live updated:
End of explanation
"""
ib = IB()
ib.connect('127.0.0.1', 7497, clientId=10)
"""
Explanation: Note that startLoop() only works in notebooks, not in regular Python programs.
Connecting
The main player of the whole package is the "IB" class. Let's create an IB instance and connect to a running TWS/IBG application:
End of explanation
"""
ib.positions()
"""
Explanation: If the connection failed, then verify that the application has the API port enabled and double-check the hostname and port. For IB Gateway the default port is 4002. Make sure the clientId is not already in use.
If the connection succeeded, then ib will be synchronized with TWS/IBG. The "current state" is now available via methods such as ib.positions(), ib.trades(), ib.openTrades(), ib.accountValues() or ib.tickers(). Let's list the current positions:
End of explanation
"""
[v for v in ib.accountValues() if v.tag == 'NetLiquidationByCurrency' and v.currency == 'BASE']
"""
Explanation: Or filter the account values to get the liquidation value:
End of explanation
"""
Contract(conId=270639)
Stock('AMD', 'SMART', 'USD')
Stock('INTC', 'SMART', 'USD', primaryExchange='NASDAQ')
Forex('EURUSD')
CFD('IBUS30')
Future('ES', '20180921', 'GLOBEX')
Option('SPY', '20170721', 240, 'C', 'SMART')
Bond(secIdType='ISIN', secId='US03076KAA60');
"""
Explanation: The "current state" will automatically be kept in sync with TWS/IBG. So an order fill will be added as soon as it is reported, or account values will be updated as soon as they change in TWS.
Contracts
Contracts can be specified in different ways:
* The ibapi way, by creating an empty Contract object and setting its attributes one by one;
* By using Contract and giving the attributes as keyword argument;
* By using the specialized Stock, Option, Future, Forex, Index, CFD, Commodity,
Bond, FuturesOption, MutualFund or Warrant contracts.
Some examples:
End of explanation
"""
contract = Stock('TSLA', 'SMART', 'USD')
ib.reqContractDetails(contract)
"""
Explanation: Sending a request
The IB class has nearly all request methods that the IB API offers. The methods that return a result will block until finished and then return the result. Take for example reqContractDetails:
End of explanation
"""
%time l = ib.positions()
%time l = ib.reqPositions()
"""
Explanation: Current state vs request
Doing a request involves network traffic going up and down and can take considerable time. The current state on the other hand is always immediately available. So it is preferable to use the current state methods over requests. For example, use ib.openOrders() in preference over ib.reqOpenOrders(), or ib.positions() over ib.reqPositions(), etc:
End of explanation
"""
util.logToConsole()
"""
Explanation: Logging
The following will put log messages of INFO and higher level under the current active cell:
End of explanation
"""
import logging
util.logToConsole(logging.DEBUG)
"""
Explanation: To see all debug messages (including network traffic):
End of explanation
"""
ib.disconnect()
"""
Explanation: Disconnecting
The following will disconnect ib and clear all its state:
End of explanation
"""
|
mne-tools/mne-tools.github.io
|
0.20/_downloads/a0b8e095ddf79d437494061b36af56fb/plot_resolution_metrics.ipynb
|
bsd-3-clause
|
# Author: Olaf Hauk <olaf.hauk@mrc-cbu.cam.ac.uk>
#
# License: BSD (3-clause)
import mne
from mne.datasets import sample
from mne.minimum_norm import make_inverse_resolution_matrix
from mne.minimum_norm import resolution_metrics
print(__doc__)
data_path = sample.data_path()
subjects_dir = data_path + '/subjects/'
fname_fwd = data_path + '/MEG/sample/sample_audvis-meg-eeg-oct-6-fwd.fif'
fname_cov = data_path + '/MEG/sample/sample_audvis-cov.fif'
fname_evo = data_path + '/MEG/sample/sample_audvis-ave.fif'
# read forward solution
forward = mne.read_forward_solution(fname_fwd)
# forward operator with fixed source orientations
mne.convert_forward_solution(forward, surf_ori=True,
force_fixed=True, copy=False)
# noise covariance matrix
noise_cov = mne.read_cov(fname_cov)
# evoked data for info
evoked = mne.read_evokeds(fname_evo, 0)
# make inverse operator from forward solution
# free source orientation
inverse_operator = mne.minimum_norm.make_inverse_operator(
info=evoked.info, forward=forward, noise_cov=noise_cov, loose=0.,
depth=None)
# regularisation parameter
snr = 3.0
lambda2 = 1.0 / snr ** 2
"""
Explanation: Compute spatial resolution metrics in source space
Compute peak localisation error and spatial deviation for the point-spread
functions of dSPM and MNE. Plot their distributions and difference
distributions.
This example mimics some results from [1]_, namely Figure 3 (peak localisation
error for PSFs, L2-MNE vs dSPM) and Figure 4 (spatial deviation for PSFs,
L2-MNE vs dSPM).
End of explanation
"""
rm_mne = make_inverse_resolution_matrix(forward, inverse_operator,
method='MNE', lambda2=lambda2)
ple_mne_psf = resolution_metrics(rm_mne, inverse_operator['src'],
function='psf', metric='peak_err')
sd_mne_psf = resolution_metrics(rm_mne, inverse_operator['src'],
function='psf', metric='sd_ext')
del rm_mne
"""
Explanation: MNE
Compute resolution matrices, peak localisation error (PLE) for point spread
functions (PSFs), spatial deviation (SD) for PSFs:
End of explanation
"""
rm_dspm = make_inverse_resolution_matrix(forward, inverse_operator,
method='dSPM', lambda2=lambda2)
ple_dspm_psf = resolution_metrics(rm_dspm, inverse_operator['src'],
function='psf', metric='peak_err')
sd_dspm_psf = resolution_metrics(rm_dspm, inverse_operator['src'],
function='psf', metric='sd_ext')
del rm_dspm
"""
Explanation: dSPM
Do the same for dSPM:
End of explanation
"""
brain_ple_mne = ple_mne_psf.plot('sample', 'inflated', 'lh',
subjects_dir=subjects_dir, figure=1,
clim=dict(kind='value', lims=(0, 2, 4)))
brain_ple_mne.add_text(0.1, 0.9, 'PLE MNE', 'title', font_size=16)
brain_ple_dspm = ple_dspm_psf.plot('sample', 'inflated', 'lh',
subjects_dir=subjects_dir, figure=2,
clim=dict(kind='value', lims=(0, 2, 4)))
brain_ple_dspm.add_text(0.1, 0.9, 'PLE dSPM', 'title', font_size=16)
# Subtract the two distributions and plot this difference
diff_ple = ple_mne_psf - ple_dspm_psf
brain_ple_diff = diff_ple.plot('sample', 'inflated', 'lh',
subjects_dir=subjects_dir, figure=3,
clim=dict(kind='value', pos_lims=(0., 1., 2.)))
brain_ple_diff.add_text(0.1, 0.9, 'PLE MNE-dSPM', 'title', font_size=16)
"""
Explanation: Visualize results
Visualise peak localisation error (PLE) across the whole cortex for PSF
End of explanation
"""
brain_sd_mne = sd_mne_psf.plot('sample', 'inflated', 'lh',
subjects_dir=subjects_dir, figure=4,
clim=dict(kind='value', lims=(0, 2, 4)))
brain_sd_mne.add_text(0.1, 0.9, 'SD MNE', 'title', font_size=16)
brain_sd_dspm = sd_dspm_psf.plot('sample', 'inflated', 'lh',
subjects_dir=subjects_dir, figure=5,
clim=dict(kind='value', lims=(0, 2, 4)))
brain_sd_dspm.add_text(0.1, 0.9, 'SD dSPM', 'title', font_size=16)
# Subtract the two distributions and plot this difference
diff_sd = sd_mne_psf - sd_dspm_psf
brain_sd_diff = diff_sd.plot('sample', 'inflated', 'lh',
subjects_dir=subjects_dir, figure=6,
clim=dict(kind='value', pos_lims=(0., 1., 2.)))
brain_sd_diff.add_text(0.1, 0.9, 'SD MNE-dSPM', 'title', font_size=16)
"""
Explanation: These plots show that dSPM has generally lower peak localization error (red
color) than MNE in deeper brain areas, but higher error (blue color) in more
superficial areas.
Next we'll visualise spatial deviation (SD) across the whole cortex for PSF:
End of explanation
"""
|
DS-100/sp17-materials
|
sp17/labs/lab03/lab03.ipynb
|
gpl-3.0
|
import pandas as pd
import numpy as np
import seaborn as sns
%matplotlib inline
import matplotlib.pyplot as plt
# These lines load the tests.
!pip install -U okpy
from client.api.notebook import Notebook
ok = Notebook('lab03.ok')
"""
Explanation: Lab 3: Intro to Visualizations
Authors: Sam Lau, Deb Nolan
Due 11:59pm 02/03/2017 (Completion-based)
Today, we'll learn the basics of plotting using the Python libraries
matplotlib and seaborn! You should walk out of lab today understanding:
The functionality that matplotlib provides
Why we use seaborn for plotting
How to make and customize basic plots, including bar charts, box plots,
histograms, and scatterplots.
As usual, to submit this lab you must scroll down the bottom and set the
i_definitely_finished variable to True before running the submit cell.
Please work in pairs on this lab assignment. You will discuss the results with your partner instead of having to write them up in the notebook.
End of explanation
"""
# Set up (x, y) pairs from 0 to 2*pi
xs = np.linspace(0, 2 * np.pi, 300)
ys = np.cos(xs)
# plt.plot takes in x-values and y-values and plots them as a line
plt.plot(xs, ys)
"""
Explanation: matplotlib
matplotlib is the most widely used plotting library available for Python.
It comes with a good amount of out-of-the-box functionality and is highly
customizable. Most other plotting libraries in Python provide simpler ways to generate
complicated matplotlib plots, including seaborn, so it's worth learning a bit about
matplotlib now.
Notice how all of our notebooks have lines that look like:
%matplotlib inline
import matplotlib.pyplot as plt
The %matplotlib inline magic command tells matplotlib to render the plots
directly onto the notebook (by default it will open a new window with the plot).
Then, the import line lets us call matplotlib functions using plt.<func>
Here's a graph of cos(x) from 0 to 2 * pi (you've made this in homework 1
already).
End of explanation
"""
plt.plot(xs, ys)
plt.plot(xs, np.sin(xs))
"""
Explanation: matplotlib also conveniently has the ability to plot multiple things on the
same plot. Just call plt.plot multiple times in the same cell:
End of explanation
"""
# Here's the starting code from last time. Edit / Add code to create the plot above.
plt.plot(xs, ys)
plt.plot(xs, np.sin(xs))
"""
Explanation: Question 0:
That plot looks pretty nice but isn't publication-ready. Luckily, matplotlib
has a wide array of plot customizations.
Skim through the first part of the tutorial at
https://www.labri.fr/perso/nrougier/teaching/matplotlib
to create the plot below. There is a lot of extra information there which we suggest
you read on your own time. For now, just look for what you need to make the plot.
Specifically, you'll have to change the x and y limits, add a title, and add
a legend.
End of explanation
"""
bike_trips = pd.read_csv('bikeshare.csv')
# Here we'll do some pandas datetime parsing so that the dteday column
# contains datetime objects.
bike_trips['dteday'] += ':' + bike_trips['hr'].astype(str)
bike_trips['dteday'] = pd.to_datetime(bike_trips['dteday'], format="%Y-%m-%d:%H")
bike_trips = bike_trips.drop(['yr', 'mnth', 'hr'], axis=1)
bike_trips.head()
"""
Explanation: Dataset: Bikeshare trips
Today, we'll be performing some basic EDA (exploratory data analysis) on
bikeshare data in Washington D.C.
The variables in this data frame are defined as:
instant: record index
dteday : date
season : season (1:spring, 2:summer, 3:fall, 4:winter)
yr : year (0: 2011, 1:2012)
mnth : month ( 1 to 12)
hr : hour (0 to 23)
holiday : whether day is holiday or not
weekday : day of the week
workingday : if day is neither weekend nor holiday
weathersit :
1: Clear or partly cloudy
2: Mist + clouds
3: Light Snow or Rain
4: Heavy Rain or Snow
temp : Normalized temperature in Celsius (divided by 41)
atemp: Normalized feeling temperature in Celsius (divided by 50)
hum: Normalized percent humidity (divided by 100)
windspeed: Normalized wind speed (divided by 67)
casual: count of casual users
registered: count of registered users
cnt: count of total rental bikes including casual and registered
End of explanation
"""
# This plot shows the temperature at each data point
bike_trips.plot.line(x='dteday', y='temp')
# Stop here! Discuss why this plot is shaped like this with your partner.
"""
Explanation: Question 1: Discuss the data with your partner. What is its granularity?
What time range is represented here? Perform your exploration in the cell below.
Using pandas to plot
pandas provides useful methods on dataframes. For simple plots, we prefer to
just use those methods instead of the matplotlib methods since we're often
working with dataframes anyway. The syntax is:
dataframe.plot.<plotfunc>
Where the plotfunc is one of the functions listed here: http://pandas.pydata.org/pandas-docs/version/0.18.1/visualization.html#other-plots
End of explanation
"""
...
"""
Explanation: seaborn
Now, we'll learn how to use the seaborn Python library. seaborn
is built on top of matplotlib and provides many helpful functions
for statistical plotting that matplotlib and pandas don't have.
Generally speaking, we'll use seaborn for more complex statistical plots,
pandas for simple plots (eg. line / scatter plots), and
matplotlib for plot customization.
Nearly all seaborn functions are designed to operate on pandas
dataframes. Most of these functions assume that the dataframe is in
a specific format called long-form, where each column of the dataframe
is a particular feature and each row of the dataframe a single datapoint.
For example, this dataframe is long-form:
country year avgtemp
1 Sweden 1994 6
2 Denmark 1994 6
3 Norway 1994 3
4 Sweden 1995 5
5 Denmark 1995 8
6 Norway 1995 11
7 Sweden 1996 7
8 Denmark 1996 8
9 Norway 1996 7
But this dataframe of the same data is not:
country avgtemp.1994 avgtemp.1995 avgtemp.1996
1 Sweden 6 5 7
2 Denmark 6 8 8
3 Norway 3 11 7
Note that the bike_trips dataframe is long-form.
For more about long-form data, see https://stanford.edu/~ejdemyr/r-tutorials/wide-and-long.
For now, just remember that we typically prefer long-form data and it makes plotting using
seaborn easy as well.
Question 2:
Use seaborn's barplot function to make a bar chart showing the average
number of registered riders on each day of the week over the
entire bike_trips dataset.
Here's a link to the seaborn API: http://seaborn.pydata.org/api.html
See if you can figure it out by reading the docs and talking with your partner.
Once you have the plot, discuss it with your partner. What trends do you
notice? What do you suppose causes these trends?
Notice that barplot draws error bars for each category. It uses bootstrapping
to make those.
End of explanation
"""
...
"""
Explanation: Question 3: Now for a fancier plot that seaborn makes really easy to produce.
Use the distplot function to plot a histogram of all the total rider counts in the
bike_trips dataset.
End of explanation
"""
...
"""
Explanation: Notice that seaborn will fit a curve to the histogram of the data. Fancy!
Question 4: Discuss this plot with your partner. What shape does the distribution
have? What does that imply about the rider counts?
Question 5:
Use seaborn to make side-by-side boxplots of the number of casual riders (just
checked out a bike for that day) and registered riders (have a bikeshare membership).
The boxplot function will plot all the columns of the dataframe you pass in.
Once you make the plot, you'll notice that there are many outliers that make
the plot hard to see. To mitigate this, change the y-scale to be logarithmic.
That's a plot customization so you'll use matplotlib. The boxplot function returns
a matplotlib Axes object which represents a single plot and
has a set_yscale function.
The result should look like:
End of explanation
"""
...
"""
Explanation: Question 6: Discuss with your partner what the plot tells you about the
distribution of casual vs. the distribution of registered riders.
Question 7: Let's take a closer look at the number of registered vs. casual riders.
Use the lmplot function to make a scatterplot. Put the number of casual
riders on the x-axis and the number of registered riders on the y-axis.
Each point should correspond to a single row in your bike_trips dataframe.
End of explanation
"""
# In your plot, you'll notice that your points are larger than ours. That's
# fine. If you'd like them to be smaller, you can add scatter_kws={'s': 6}
# to your lmplot call. That tells the underlying matplotlib scatter function
# to change the size of the points.
...
# Note that the legend for workingday isn't super helpful. 0 in this case
# means "not a working day" and 1 means "working day". Try fixing the legend
# to be more descriptive.
"""
Explanation: Question 8: What do you notice about that plot? Discuss with
your partner. Notice that seaborn automatically fits a line of best
fit to the plot. Does that line seem to be relevant?
You should note that lmplot allows you to pass in fit_reg=False to
avoid plotting lines of best fit when you feel they are unnecessary
or misleading.
Question 9: There seem to be two main groups in the scatterplot. Let's
see if we can separate them out.
Use lmplot to make the scatterplot again. This time, use the hue parameter
to color points for weekday trips differently from weekend trips. You should
get something that looks like:
End of explanation
"""
...
"""
Explanation: Question 10: Discuss the plot with your partner. Was splitting the data
by working day informative? One of the best-fit lines looks valid but the other
doesn't. Why do you suppose that is?
Question 11 (bonus): Eventually, you'll want to be able to pose a
question yourself and answer it using a visualization. Here's a question
you can think about:
How do the numbers of casual and registered riders change throughout the day,
on average?
See if you can make a plot to answer this.
End of explanation
"""
i_definitely_finished = False
_ = ok.grade('qcompleted')
_ = ok.backup()
_ = ok.submit()
"""
Explanation: Want to learn more?
We recommend checking out the seaborn tutorials on your own time. http://seaborn.pydata.org/tutorial.html
The matplotlib tutorial we linked in Question 1 is also a great refresher on common matplotlib functions: https://www.labri.fr/perso/nrougier/teaching/matplotlib/
Here's a great blog post about the differences between Python's visualization libraries:
https://dansaber.wordpress.com/2016/10/02/a-dramatic-tour-through-pythons-data-visualization-landscape-including-ggplot-and-altair/
Submission
Change i_definitely_finished to True and run the cells below to submit the lab. You may resubmit as many times as you want. We will be grading you on effort/completion.
End of explanation
"""
|
letsgoexploring/teaching
|
winter2017/econ129/python/Econ129_Winter2017_Homework2.ipynb
|
mit
|
# Question 1
"""
Explanation: Homework 2 (DUE: Thursday February 16)
Instructions: Complete the instructions in this notebook. You may work together with other students in the class and you may take full advantage of any internet resources available. You must provide thorough comments in your code so that it's clear that you understand what your code is doing and so that your code is readable.
Submit the assignment by saving your notebook as an html file (File -> Download as -> HTML) and uploading it to the appropriate Dropbox folder on EEE.
Question 1
For each of the following first-difference processes, compute the values of $y$ from $t=0$ through $t = 20$. For each, assume that $y_0 = 0$, $w_1 = 1$, and $w_2 = w_3 = \cdots = w_T = 0$.
$y_t = 0.99y_{t-1} + w_t$
$y_t = y_{t-1} + w_t$
$y_t = 1.01y_{t-1} + w_t$
Plot the simulated values for each process on the same axes and be sure to include a legend.
End of explanation
"""
# Question 2
"""
Explanation: Question 2
For each of the following first-difference processes, compute the values of $y$ from $t=0$ through $t = 12$. For each, assume that $y_0 = 0$.
$y_t = 1 + 0.5y_{t-1}$
$y_t = 0.5y_{t-1}$
$y_t = -1 + 0.5y_{t-1}$
Plot the simulated values for each process on the same axes and be sure to include a legend. Set the $y$-axis limits to $[-3,3]$.
End of explanation
"""
# Question 3.1
# Question 3.2
# Question 3.3
"""
Explanation: Question 3
Download a file called Econ129_US_Production_A_Data.csv from the link "Production data for the US" under the "Data" section on the course website. The file contains annual production data for the US economy including output, consumption, investment, and labor hours, among others. The capital stock of the US is only given for 1948. Import the data into a Pandas DataFrame and do the following:
Suppose that the depreciation rate for the US is $\delta = 0.0375$. Use the capital accumulation equation $K_{t+1} = I_t + (1-\delta)K_t$ to fill in the missing values for the capital column. Construct a plot of the computed capital stock.
Add columns to your DataFrame equal to capital per worker and output per worker by dividing the capital and output columns by the labor column. Print the first five rows of the DataFrame.
Print the average annual growth rates of capital per worker and output per worker for the US.
Recall that the average annual growth rate of a quantity $y$ from date $0$ to date $T$ is:
\begin{align}
g & = \left(\frac{y_T}{y_0}\right)^{\frac{1}{T}}-1
\end{align}
End of explanation
"""
# Initialize parameters for the simulation (A, s, T, delta, alpha, g, n, K0, A0, L0)
# Initialize a variable called tfp as a (T+1)x1 array of zeros and set first value to A0
# Compute all subsequent tfp values by iterating over t from 0 through T
# Plot the simulated tfp series
# Initialize a variable called labor as a (T+1)x1 array of zeros and set first value to L0
# Compute all subsequent labor values by iterating over t from 0 through T
# Plot the simulated labor series
# Initialize a variable called capital as a (T+1)x1 array of zeros and set first value to K0
# Compute all subsequent capital values by iterating over t from 0 through T
# Plot the simulated capital series
# Store the simulated capital, labor, and tfp data in a pandas DataFrame called data
# Print the first 5 rows of the DataFrame
# Create columns in the DataFrame to store computed values of the other endogenous variables: Y, C, and I
# Print the first five rows of the DataFrame
# Create columns in the DataFrame to store capital per worker, output per worker, consumption per worker, and investment per worker
# Print the first five rows of the DataFrame
# Create a 2x2 grid of plots of capital, output, consumption, and investment
# Create a 2x2 grid of plots of capital per worker, output per worker, consumption per worker, and investment per worker
"""
Explanation: Question 4: The Solow model with exogenous population and TFP growth
Suppose that the aggregate production function is given by:
\begin{align}
Y_t & = A_tK_t^{\alpha} L_t^{1-\alpha}, \tag{1}
\end{align}
where $Y_t$ denotes output, $K_t$ denotes the capital stock, $L_t$ denotes the labor supply, and $A_t$ denotes total factor productivity $TFP$. $\alpha$ is a constant.
The supply of labor grows at an exogenously determined rate $n$ and so its value is determined recursively by a first-order difference equation:
\begin{align}
L_{t+1} & = (1+n) L_t. \tag{2}
\end{align}
Likewise, TFP grows at an exogenously determined rate $g$:
\begin{align}
A_{t+1} & = (1+g) A_t. \tag{3}
\end{align}
The rest of the economy is characterized by the same equations as before:
\begin{align}
C_t & = (1-s)Y_t \tag{4}\\
Y_t & = C_t + I_t \tag{5}\\
K_{t+1} & = I_t + ( 1- \delta)K_t. \tag{6}
\end{align}
Equation (4) is the consumption function where $s$ denotes the exogenously given saving rate. Equation (5) is the aggregate market clearing condition. Finally, Equation (6) is the capital evolution equation specifying that capital in year $t+1$ is the sum of newly created capital $I_t$ and the capital stock from year $t$ that has not depreciated $(1-\delta)K_t$.
Combine Equations (1) and (4) through (6) to eliminate $C_t$, $I_t$, and $Y_t$ and obtain a recurrence relation specifying $K_{t+1}$ as a function of $K_t$, $A_t$, and $L_t$:
\begin{align}
K_{t+1} & = sA_tK_t^{\alpha}L_t^{1-\alpha} + ( 1- \delta)K_t \tag{7}
\end{align}
Given initial values for capital and labor, Equations (2), (3), and (7) can be iterated on to compute the values of the capital stock and labor supply at some future date $T$. Furthermore, the values of consumption, output, and investment at date $T$ can also be computed using Equations (1), (4), (5), and (6).
Simulation
Simulate the Solow growth model with exogenous labor growth for $t=0\ldots 100$. For the simulation, assume the following values of the parameters:
\begin{align}
A & = 10\\
\alpha & = 0.35\\
s & = 0.15\\
\delta & = 0.1\\
g & = 0.015 \\
n & = 0.01
\end{align}
Furthermore, suppose that the initial values of capital and labor are:
\begin{align}
K_0 & = 2\\
A_0 & = 1\\
L_0 & = 1
\end{align}
End of explanation
"""
# Question 5.1
# Question 5.2
"""
Explanation: Question 5
Recall the Solow growth model with exogenous growth in labor and TFP:
\begin{align}
Y_t & = A_tK_t^{\alpha} L_t^{1-\alpha}, \tag{1}\\
C_t & = (1-s)Y_t \tag{2}\\
Y_t & = C_t + I_t \tag{3}\\
K_{t+1} & = I_t + ( 1- \delta)K_t \tag{4}\\
L_{t+1} & = (1+n) L_t \tag{5} \\
A_{t+1} & = (1+g) A_t. \tag{6}
\end{align}
Suppose that two countries called Westeros and Essos are identical except that TFP in Westeros grows faster than in Essos. Specifically:
\begin{align}
g_{Westeros} & = 0.03\\
g_{Essos} & = 0.01
\end{align}
Otherwise, the parameters for each economy are the same including the initial values of capital, labor, and TFP:
\begin{align}
\alpha & = 0.35\\
s & = 0.15\\
\delta & = 0.1\\
n & = 0.01\\
K_0 & = 20\\
A_0 & = 10\\
L_0 & = 1
\end{align}
Do the following:
Find the date (value for $t$) at which output per worker in Westeros becomes at least twice as large as output per worker in Essos. Print the value for t and the values of ouput per worker for each country.
On a single set of axes, plot simulated values of output per worker for each country for t = $1, 2, \ldots 100$.
Hint: Copy into this notebook the function that simulates the Solow model with exogenous labor growth from the end of the Notebook from Class 9. Modify the function to fit this problem.
End of explanation
"""
|
cathalmccabe/PYNQ
|
boards/Pynq-Z1/base/notebooks/video/hdmi_video_pipeline.ipynb
|
bsd-3-clause
|
from pynq.overlays.base import BaseOverlay
from pynq.lib.video import *
base = BaseOverlay("base.bit")
"""
Explanation: Video Pipeline Details
This notebook goes into detail about the stages of the video pipeline in the base overlay and is written for people who want to create and integrate their own video IP. For most regular input and output use cases the high level wrappers of HDMIIn and HDMIOut should be used.
Both the input and output pipelines in the base overlay consist of four stages, an HDMI frontend, a colorspace converter, a pixel format converter, and the video DMA. For the input the stages are arranged Frontend -> Colorspace Converter -> Pixel Format -> VDMA with the order reversed for the output side. The aim of this notebook is to give you enough information to use each stage separately and be able to modify the pipeline for your own ends.
Before exploring the pipeline we'll import the entire pynq.lib.video module where all classes relating to the pipelines live. We'll also load the base overlay to serve as an example.
The following table shows the IP responsible for each stage in the base overlay which will be referenced throughout the rest of the notebook
|Stage | Input IP | Output IP |
|------------------|:---------------------------------------|:-----------------------------------|
|Frontend (Timing) |video/hdmi_in/frontend/vtc_in |video/hdmi_out/frontend/vtc_out |
|Frontend (Other) |video/hdmi_in/frontend/axi_gpio_hdmiin|video/hdmi_out/frontend/axi_dynclk|
|Colour Space |video/hdmi_in/color_convert |video/hdmi_out/color_convert |
|Pixel Format |video/hdmi_in/pixel_pack |video/hdmi_out/pixel_unpack |
|VDMA |video/axi_vdma |video/axi_vdma |
End of explanation
"""
hdmiin_frontend = base.video.hdmi_in.frontend
"""
Explanation: HDMI Frontend
The HDMI frontend modules wrap all of the clock and timing logic. The HDMI input frontend can be used independently from the rest of the pipeline by accessing its driver from the base overlay.
End of explanation
"""
hdmiin_frontend.start()
hdmiin_frontend.mode
"""
Explanation: Creating the device will signal to the computer that a monitor is connected. Starting the frontend will attempt to detect the video mode, blocking until a lock can be achieved. Once the frontend is started, the video mode will be available.
End of explanation
"""
hdmiout_frontend = base.video.hdmi_out.frontend
"""
Explanation: The HDMI output frontend can be accessed in a similar way.
End of explanation
"""
hdmiout_frontend.mode = hdmiin_frontend.mode
hdmiout_frontend.start()
"""
Explanation: and the mode must be set prior to starting the output. In this case we are just going to use the same mode as the input.
End of explanation
"""
colorspace_in = base.video.hdmi_in.color_convert
colorspace_out = base.video.hdmi_out.color_convert
bgr2rgb = [0, 0, 1,
0, 1, 0,
1, 0, 0,
0, 0, 0]
colorspace_in.colorspace = bgr2rgb
colorspace_out.colorspace = bgr2rgb
colorspace_in.colorspace
"""
Explanation: Note that nothing will be displayed on the screen as no video data is currently being sent.
Colorspace conversion
The colorspace converter operates on each pixel independently, using a 3x4 matrix to transform the pixels. The converter is programmed with a list of twelve coefficients in the following order:
| |in1 |in2 |in3 | 1 |
|-----|----|----|----|----|
|out1 |c1 |c2 |c3 |c10 |
|out2 |c4 |c5 |c6 |c11 |
|out3 |c7 |c8 |c9 |c12 |
Each coefficient should be a floating point number between -2 and +2.
The pixels to and from the HDMI frontends are in BGR order so a list of coefficients to convert from the input format to RGB would be:
[0, 0, 1,
0, 1, 0,
1, 0, 0,
0, 0, 0]
reversing the order of the pixels and not adding any bias.
The driver for the colorspace converters has a single property that contains the list of coefficients.
End of explanation
"""
pixel_in = base.video.hdmi_in.pixel_pack
pixel_out = base.video.hdmi_out.pixel_unpack
pixel_in.bits_per_pixel = 8
pixel_out.bits_per_pixel = 8
pixel_in.bits_per_pixel
"""
Explanation: Pixel format conversion
The pixel format converters convert the 24-bit signal used by the HDMI frontends and the colorspace converters to or from an 8, 24, or 32-bit signal. 24-bit mode passes the input straight through, 32-bit mode pads the extra byte with 0, and 8-bit mode selects the first channel in the pixel. This is exposed by a single property to set or get the number of bits.
End of explanation
"""
inputmode = hdmiin_frontend.mode
framemode = VideoMode(inputmode.width, inputmode.height, 8)
vdma = base.video.axi_vdma
vdma.readchannel.mode = framemode
vdma.readchannel.start()
vdma.writechannel.mode = framemode
vdma.writechannel.start()
frame = vdma.readchannel.readframe()
vdma.writechannel.writeframe(frame)
"""
Explanation: Video DMA
The final element in the pipeline is the video DMA, which transfers video frames to and from memory. The VDMA consists of two channels, one for each direction, which operate completely independently. To use a channel, its mode must be set before start is called. After the DMA is started, readframe and writeframe transfer frames; each call transfers a single frame, blocking if necessary. asyncio coroutines are available as readframe_async and writeframe_async, which yield instead of blocking. A frame of the size of the output can be retrieved from the VDMA by calling writechannel.newframe(). This frame is not guaranteed to be initialised to blank, so it should be completely written before being handed back.
End of explanation
"""
vdma.readchannel.tie(vdma.writechannel)
"""
Explanation: In this case, because we are only using 8 bits per pixel, only the red channel is read and displayed.
The two channels can be tied together which will ensure that the input is always mirrored to the output
End of explanation
"""
vdma.readchannel.stop()
vdma.writechannel.stop()
"""
Explanation: Frame Ownership
The VDMA driver has a strict method of frame ownership. Any frames returned by readframe or newframe are owned by the user and should be destroyed by the user when no longer needed by calling frame.freebuffer(). Frames handed back to the VDMA with writeframe are no longer owned by the user and should not be touched - the data may disappear at any time.
Cleaning up
It is vital to stop the VDMA before reprogramming the bitstream; otherwise the memory system of the chip can be placed into an undefined state. If the monitor does not power on when starting the VDMA, this is the likely cause.
End of explanation
"""
|
ljvmiranda921/pyswarms
|
docs/examples/usecases/train_neural_network.ipynb
|
mit
|
# Import modules
import numpy as np
import matplotlib.pyplot as plt
from sklearn.datasets import load_iris
# Import PySwarms
import pyswarms as ps
# Some more magic so that the notebook will reload external python modules;
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
"""
Explanation: Training a Neural Network
In this example, we'll be training a neural network using particle swarm optimization. For this we'll be using the standard global-best PSO, pyswarms.single.GlobalBestPSO, to optimize the network's weights and biases. This aims to demonstrate how the API is capable of handling custom-defined functions.
For this example, we'll try to classify the three iris species in the Iris Dataset.
End of explanation
"""
data = load_iris()
# Store the features as X and the labels as y
X = data.data
y = data.target
"""
Explanation: First, we'll load the dataset from scikit-learn. The Iris Dataset contains 3 classes for each of the iris species (iris setosa, iris virginica, and iris versicolor). It has 50 samples per class with 150 samples in total, making it a very balanced dataset. Each sample is characterized by four features (or dimensions): sepal length, sepal width, petal length, petal width.
Load the iris dataset
End of explanation
"""
n_inputs = 4
n_hidden = 20
n_classes = 3
num_samples = 150
def logits_function(p):
""" Calculate roll-back the weights and biases
Inputs
------
p: np.ndarray
The dimensions should include an unrolled version of the
weights and biases.
Returns
-------
numpy.ndarray of logits for layer 2
"""
# Roll-back the weights and biases
W1 = p[0:80].reshape((n_inputs,n_hidden))
b1 = p[80:100].reshape((n_hidden,))
W2 = p[100:160].reshape((n_hidden,n_classes))
b2 = p[160:163].reshape((n_classes,))
# Perform forward propagation
z1 = X.dot(W1) + b1 # Pre-activation in Layer 1
a1 = np.tanh(z1) # Activation in Layer 1
logits = a1.dot(W2) + b2 # Pre-activation in Layer 2
return logits # Logits for Layer 2
# Forward propagation
def forward_prop(params):
"""Forward propagation as objective function
This computes for the forward propagation of the neural network, as
well as the loss.
Inputs
------
params: np.ndarray
The dimensions should include an unrolled version of the
weights and biases.
Returns
-------
float
The computed negative log-likelihood loss given the parameters
"""
logits = logits_function(params)
# Compute for the softmax of the logits
exp_scores = np.exp(logits)
probs = exp_scores / np.sum(exp_scores, axis=1, keepdims=True)
# Compute for the negative log likelihood
corect_logprobs = -np.log(probs[range(num_samples), y])
loss = np.sum(corect_logprobs) / num_samples
return loss
"""
Explanation: Constructing a custom objective function
Recall that neural networks can simply be seen as a mapping function from one space to another. For now, we'll build a simple neural network with the following characteristics:
* Input layer size: 4
* Hidden layer size: 20 (activation: $\tanh(x)$)
* Output layer size: 3 (activation: $softmax(x)$)
Things we'll do:
1. Create a forward_prop method that will do forward propagation for one particle.
2. Create an overhead objective function f() that will compute forward_prop() for the whole swarm.
What we'll be doing then is to create a swarm with a number of dimensions equal to the weights and biases. We will unroll these parameters into an n-dimensional array, and have each particle take on different values. Thus, each particle represents a candidate neural network with its own weights and bias. When feeding back to the network, we will reconstruct the learned weights and biases.
When rolling-back the parameters into weights and biases, it is useful to recall the shape and bias matrices:
* Shape of input-to-hidden weight matrix: (4, 20)
* Shape of input-to-hidden bias array: (20, )
* Shape of hidden-to-output weight matrix: (20, 3)
* Shape of hidden-to-output bias array: (3, )
By unrolling them together, we have $(4 * 20) + (20 * 3) + 20 + 3 = 163$ parameters, or 163 dimensions for each particle in the swarm.
The negative log-likelihood will be used to compute the error between the ground-truth values and the predictions. Also, because PSO doesn't rely on gradients, we won't be performing backpropagation (which may be a good or a bad thing under some circumstances).
Now, let's write the forward propagation procedure as our objective function. Let $X$ be the input, $z_l$ the pre-activation at layer $l$, and $a_l$ the activation for layer $l$:
Neural network architecture
End of explanation
"""
def f(x):
"""Higher-level method to do forward_prop in the
whole swarm.
Inputs
------
x: numpy.ndarray of shape (n_particles, dimensions)
The swarm that will perform the search
Returns
-------
numpy.ndarray of shape (n_particles, )
The computed loss for each particle
"""
n_particles = x.shape[0]
j = [forward_prop(x[i]) for i in range(n_particles)]
return np.array(j)
"""
Explanation: Now that we have a method to do forward propagation for one particle (or for one set of dimensions), we can then create a higher-level method that applies forward_prop() to the whole swarm:
End of explanation
"""
%%time
# Initialize swarm
options = {'c1': 0.5, 'c2': 0.3, 'w':0.9}
# Call instance of PSO
dimensions = (n_inputs * n_hidden) + (n_hidden * n_classes) + n_hidden + n_classes
optimizer = ps.single.GlobalBestPSO(n_particles=100, dimensions=dimensions, options=options)
# Perform optimization
cost, pos = optimizer.optimize(f, iters=1000)
"""
Explanation: Performing PSO on the custom-function
Now that everything has been set-up, we just call our global-best PSO and run the optimizer as usual. For now, we'll just set the PSO parameters arbitrarily.
End of explanation
"""
def predict(pos):
"""
Use the trained weights to perform class predictions.
Inputs
------
pos: numpy.ndarray
Position matrix found by the swarm. Will be rolled
into weights and biases.
"""
logits = logits_function(pos)
y_pred = np.argmax(logits, axis=1)
return y_pred
"""
Explanation: Checking the accuracy
We can then check the accuracy by performing forward propagation once again to create a set of predictions. Then it's only a simple matter of matching which one's correct or not. For the logits, we take the argmax. Recall that the softmax function returns probabilities where the whole vector sums to 1. We just take the one with the highest probability then treat it as the network's prediction.
Moreover, we let the best position vector found by the swarm be the weight and bias parameters of the network.
End of explanation
"""
(predict(pos) == y).mean()
"""
Explanation: And from this we can just compute the accuracy. We perform predictions, compare them to the ground-truth values y, and take the mean.
End of explanation
"""
|
quantumfx/binary-lens
|
oldcode/lens_simulation_gauss_pulse.ipynb
|
gpl-3.0
|
res = 10000 # 1 sample is 1/res*1.6ms, 1e7 to resolve 311Mhz
n = 40 * res # grid size, total time
n = n-1 # To get the wave periodicity edge effects to work out
#freq = 311.25 # MHz, observed band
freq = 0.5 # Test, easier on computation
p_spin = 1.6 # ms, spin period
freq *= 1e6 #MHz to Hz
p_spin *= 1e-3 #ms to s
p_phase = 1./freq # s, E field oscillation period
R = 6.273 # s, pulsar-companion distance
gradient = 0.00133
#gradient = 60.0e-5/0.12
intime = p_spin/res # sample -> time in s conversion factor
insample = res/p_spin # time in s -> samples conversion factor
ipulsar = 18*n/32 # Pulsar position for the geometric delay, will be scanned through
t = np.linspace(0., n*intime, n) #time array
# Gaussian for fitting
def gaussian(x,a,b,c):
return a * np.exp(-(x - c)**2/(2 * b**2))
# x in the following functions should be in s
# The small features in DM change are about 7.5ms wide
def tau_str(x):
# return np.piecewise(x, [x < (n/2)*intime, x >= (n/2)*intime], [1.0e-5, 1.0e-5])
return np.piecewise(x, [x < (n/2)*intime, x >= (n/2)*intime], [1.0e-5, lambda x: 1.0e-5+gradient*(x-(n/2)*intime)])
def tau_geom(x, xpulsar):
return (xpulsar-x)**2/(2.*R)
def tau_phase(x, xpulsar):
return -tau_str(x) + tau_geom(x, xpulsar)
def tau_group(x, xpulsar):
return tau_str(x) + tau_geom(x, xpulsar)
#def tau_str(x):
# return Piecewise((1.0e-5, x < (n/2)*intime), (1.0e-5+gradient*(x-(n/2)*intime), x >= (n/2)*intime))
#plt.figure(figsize=(14,4))
plt.plot(t,tau_str(t), label="Dispersive delay")
plt.plot(t,tau_geom(t, ipulsar*intime), label="Geometric delay")
plt.plot(t,tau_phase(t, ipulsar*intime), label="Phase delay")
plt.plot(t,tau_group(t, ipulsar*intime), label="Group delay")
plt.xlim(0,n*intime)
plt.ticklabel_format(style='sci', axis='y', scilimits=(0,0))
plt.legend(loc="best")
plt.xlabel('sample', fontsize=16)
plt.ylabel(r't (s)', fontsize=16)
plt.savefig('images/delays.png')
"""
Explanation: Function definitions and some other parameters
End of explanation
"""
# The grid size of mean_profile.npy is 1000
mean_profile = np.load('mean_profile.npy')
meanprof_Jy = (mean_profile / np.median(mean_profile) - 1) * 12.
xarr = np.arange(1000)
popt, pcov = curve_fit(gaussian, xarr, meanprof_Jy, bounds=([-np.inf,-np.inf,800.],[np.inf,np.inf,890.]))
plt.plot(meanprof_Jy, label="Mean profile")
plt.plot(xarr, gaussian(xarr, *popt), label="Gaussian fitted to main pulse")
plt.legend(loc="best", fontsize=8)
plt.xlabel('sample', fontsize=16)
plt.ylabel(r'$F_{\nu}$ (Jy)', fontsize=16)
plt.xlim(0,1000)
plt.ylim(-.1, 0.8)
plt.savefig('images/gaussfit_MP.png')
"""
Explanation: Load a mean pulse profile for B1957+20, and fit a gaussian to the main pulse
End of explanation
"""
xarr = np.arange(n)
pulse_params = popt
pulse_params[1] *= res/1000 # resizing the width to current gridsize
pulse_params[2] = n/2 # centering
pulse_ref = gaussian(xarr, *pulse_params)
envel_ref = np.sqrt(pulse_ref)
plt.plot(envel_ref, label="sqrt of the fitted gaussian")
plt.legend(loc="upper right", fontsize=8)
plt.xlabel('sample', fontsize=16)
plt.ylabel(r'$\sqrt{F_{\nu}}$ (rt Jy)', fontsize=16)
plt.xlim(0,n)
plt.ylim(-.1, 0.8)
plt.savefig('images/gaussfit_center.png')
"""
Explanation: Isolate the main pulse, sqrt it, and center
End of explanation
"""
angular_freq = 2*np.pi*freq
phase_ref = np.exp(1j*angular_freq*t)
print phase_ref[0], phase_ref[-1]
#phase_ref = np.roll(phase_ref,3)
plt.plot(t, phase_ref.imag, label="Imaginary part")
plt.plot(t, phase_ref.real, label="Real part")
plt.xlim(0, 1*1./freq)
plt.ticklabel_format(style='sci', axis='x', scilimits=(0,0))
plt.legend(loc="upper right")
plt.xlabel('t (s)', fontsize=16)
plt.ylabel(r'Amplitude', fontsize=16)
plt.savefig('images/oscillations.png')
"""
Explanation: Invent some sinusoidal wave at frequency of observed band. Assuming waves of the form e^(i \omega t)
End of explanation
"""
a = 109797 # rt(kg)*m/s^2/A; a = sqrt(2*16MHz/(c*n*epsilon_0)), conversion factor between
# sqrt(Jansky) and E field strength assuming n=1 and a 16MHz bandwidth
b = 1e-13 # rt(kg)/s; a*b = 1.1e-8 V/m
E_field_ref = a*b*envel_ref*phase_ref
E_field_ref = np.roll(E_field_ref, (int)(1e-5*insample))
plt.figure(figsize=(14,4))
#plt.plot(t, np.abs(E_field_ref), label="\'Theoretical\' $E$ field")
plt.plot(t, np.real(E_field_ref), label="im \'Theoretical\' $E$ field")
plt.plot(t, np.imag(E_field_ref), label="real \'Theoretical\' $E$ field")
plt.ticklabel_format(style='sci', axis='both', scilimits=(0,0))
plt.xlim((n/2-7*pulse_params[1])*intime,(n/2+7*pulse_params[1])*intime)
plt.legend(loc="best")
plt.xlabel('t (s)', fontsize=16)
plt.ylabel(r'$E$ (V/m)', fontsize=16)
plt.savefig('pulse_E_field.png')
"""
Explanation: Compute an electric field
End of explanation
"""
k_int = 30 # Integration range = +- period*k_int
lim = (int)(np.sqrt(k_int*p_phase*2*R)*insample) # How far does tau_geom have to go to get to k_int periods of E oscillation
# The last argument should be >> 2*k_int (m) to resolve oscillations of E
#int_domain = np.linspace(ipulsar-lim, ipulsar+lim, 3000*np.sqrt(k_int))
int_domain = np.linspace(0,0.064*insample,20000)
#int_domain = (np.random.random(10000)*2*lim)-lim
#print lim, (int)(tau_geom(np.amax(int_domain)*intime, n/2*intime)*insample)
def delay_field(i):
phase = np.roll(phase_ref, (int)(tau_phase(i*intime, ipulsar*intime)*insample))
padsize = (int)(tau_group(i*intime, ipulsar*intime)*insample)
envel = np.pad(envel_ref, (padsize,0), mode='constant')[:-padsize]
#envel = np.roll(envel_ref, (int)(tau_group(i*intime, ipulsar*intime)*insample))
E_field = a*b*phase*envel
return E_field
def delay_field_flat(i):
phase = np.roll(phase_ref, (int)((-1.0e-5+tau_geom(i*intime, ipulsar*intime))*insample))
padsize = (int)(( 1.0e-5+tau_geom(i*intime, ipulsar*intime))*insample)
envel = np.pad(envel_ref, (padsize,0), mode='constant')[:-padsize]
#envel = np.roll(envel_ref, (int)(( 1.0e-5+tau_geom(i*intime, ipulsar*intime))*insample))
E_field = a*b*phase*envel
return E_field
#This was used to find the optimal k_int, it seems to be around 30 for this particular frequency
#heightprev = 10
#error = 10
#k = 31
#heights = []
#while error > 0.0001 and k < 100:
# E_tot = np.zeros(n, dtype=np.complex128)
# lim = (int)(np.sqrt(k*p_phase*2*R)*insample)
# int_domain = np.linspace(-lim, lim, 10000*np.sqrt(k))
# for i in int_domain:
# E_tot += delay_field(i)
#
# popt3, pcov3 = curve_fit(gaussian, xarr, np.abs(E_tot), bounds=([-np.inf,-np.inf,n/2-n/15],[np.inf,np.inf,n/2+n/15]))
# heights.append(popt3[0])
# error = np.abs(popt3[0]-heightprev)/heightprev
# print k, popt3[0], popt3[1], popt3[2]
# heightprev = popt3[0]
# k += 1
#print "Stopped with error = ", error
#plt.plot(heights)
E_tot_flat = np.zeros(n, dtype=np.complex128)
E_tot = np.zeros(n, dtype=np.complex128)
print 'Integrating with', int_domain.size, 'samples'
print 'Total size', n
print 'Integration range', ipulsar, "+-", lim
print 'Integration range', ipulsar*intime*1e3, "+-", lim*intime*1e3, 'ms'
print 'Integration range in y +-', tau_geom(lim*intime, ipulsar*intime)*1e6, 'us'
tick = time()
for i in int_domain:
E_tot += delay_field(i)
E_tot_flat += delay_field_flat(i)
print 'Time elapsed:', time() - tick, 's'
print np.amax(np.abs(E_field_ref)), np.amax(np.abs(E_tot))
# This parallelizes really terribly for some reason.... Maybe take a look later.
# Possibly because this is in a notebook?
# The if is required apparently so we don't get an infinite loop (in Windows in particular)
#import multiprocessing
#E_tot = np.zeros(n, dtype=np.complex128)
#if __name__ == '__main__':
# pool = multiprocessing.Pool()
# print multiprocessing.cpu_count()
# currenttime = time()
# E_tot = sum(pool.imap_unordered(delay_field, ((i) for i in int_domain), chunksize=int_domain.size/16))
# print 'Time elapsed:', time() - currenttime
# pool.close()
# pool.join()
plt.figure(figsize=(14,4))
#plt.plot(t.reshape(-1,1e3).mean(axis=1), E_tot.reshape(-1,1e3).mean(axis=1)) # downsampling before plotting.
plt.plot(t, np.abs(E_tot_flat), label="Flat lens $E$ field")
plt.plot(t, np.abs(E_tot), label="Kinked lens $E$ field")
plt.xlim((n/2-10*pulse_params[1])*intime,(n/2+50*pulse_params[1])*intime)
plt.ticklabel_format(style='sci', axis='both', scilimits=(0,0))
plt.legend(loc="best")
plt.xlabel('t (s)', fontsize=16)
plt.ylabel(r'$E$ (V/m)', fontsize=16)
plt.savefig('images/lensed_pulse.png')
"""
Explanation: Now we have to delay the phase and the pulse (group) and integrate. We will parallelize it so we don't have to wait too long.
End of explanation
"""
#E_tot *= np.amax(np.abs(E_field_ref))/np.amax(np.abs(E_tot)) # this scaling is ad-hoc, really should do it for the pre-lensed E field
#pulse_tot = E_tot * np.conjugate(E_tot) / (a*b)**2
#if np.imag(pulse_tot).all() < 1e-50:
# pulse_tot = np.real(pulse_tot)
#else:
# print 'Not purely real, what\'s going on?'
# Check if it's still gaussian by fitting a gaussian
#pulse_tot = pulse_tot.reshape(-1,n/1000).mean(axis=1)
#popt2, pcov2 = curve_fit(gaussian, x, pulse_tot, bounds=([-np.inf,-np.inf,400.],[np.inf,np.inf,600.]))#
#plt.plot(np.roll(meanprof_Jy,-1000/3-22), label="Mean profile main pulse") # This fitting is also ad-hoc, should do least square?
#plt.plot(xarr, gaussian(xarr, popt[0], popt[1], popt2[2]), label="Gaussian fitted to main pulse")
#plt.plot(pulse_tot, label="Integrated refitted curve")
#plt.plot(xarr, gaussian(xarr, *popt2), label="Gaussian fitted to integrated curve (overlaps)") # Looks like it's still gaussian
#plt.legend(loc="best", fontsize=7)
#plt.xlabel('sample', fontsize=16)
#plt.ylabel(r'$F_{\nu}$ (Jy)', fontsize=16)
#plt.ticklabel_format(style='sci', axis='both', scilimits=(0,0))
#plt.xlim(450,580)
#plt.ylim(-.1, .8)
"""
Explanation: Now we have to fit the integrated pulse back onto the observational data
End of explanation
"""
|
machlearn/ipython-notebooks
|
ML Process - Preprocessing.ipynb
|
mit
|
from sklearn import preprocessing
import numpy as np
X = np.array([[1.,-1.,2.],
[2., 0.,0.],
[0., 1.,-1.]])
X_scaled = preprocessing.scale(X)
X_scaled
X_scaled.mean(axis = 0)
X_scaled.std(axis=0)
"""
Explanation: Package: sklearn.preprocessing
change raw feature vectors into a representation that is more suitable for the downstream estimators
Standardization, or mean removal and variance scaling
Standardization of datasets is a common requirement for many machine learning estimators implemented in scikit-learn. They might behave badly if the individual features do not more or less look like standard normally distributed data: Gaussian with zero mean and unit variance.
The function scale provides a quick and easy way to perform this operation on a single array-like dataset:
End of explanation
"""
scaler = preprocessing.StandardScaler().fit(X)
scaler
scaler.mean_
scaler.scale_
scaler.transform(X)
scaler.transform([[-1.,1.,0.]])
"""
Explanation: Another utility class, StandardScaler, implements the Transformer API to compute the mean and standard deviation on a training dataset so that the same transformation can later be reapplied to the testing set.
End of explanation
"""
X_train = np.array([[1., -1., 2.],
[2., 0., 0.],
[0., 1.,-1.]])
min_max_scaler = preprocessing.MinMaxScaler()
X_train_minmax = min_max_scaler.fit_transform(X_train)
X_train_minmax
X_test = np.array([[-3., -1., 4.]])
X_test_minmax = min_max_scaler.transform(X_test)
X_test_minmax
min_max_scaler.scale_
min_max_scaler.min_
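# A minimal sketch of MaxAbsScaler (mentioned in the explanation below): it divides each
# feature by its maximum absolute value, so the training data ends up within [-1, 1].
max_abs_scaler = preprocessing.MaxAbsScaler()
X_train_maxabs = max_abs_scaler.fit_transform(X_train)
X_train_maxabs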
"""
Explanation: Scaling features to a range
Example: scaling features to lie between a given minimum and maximum value, often between zero and one, or scaling the maximum absolute value of each feature to unit size.
Use MinMaxScaler or MaxAbsScaler.
End of explanation
"""
X = [[ 1., -1., 2.],
[ 2., 0., 0.],
[ 0., 1.,-1.]]
X_normalized = preprocessing.normalize(X,norm='l2')
X_normalized
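# A minimal sketch of RobustScaler (discussed below for data with outliers): it centers and
# scales each feature with the median and interquartile range instead of mean and variance.
robust_scaler = preprocessing.RobustScaler()
robust_scaler.fit_transform(X)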
"""
Explanation: MaxAbsScaler works in a very similar fashion, but scales in a way that the training data lies within the range [-1,1] by dividing through the largest maximum value in each feature. It is used for data that is already centered at zero or sparse data.
Scaling sparse data
MaxAbsScaler and maxabs_scale were specifically designed for scaling sparse data, and are the recommended way to go about this.
...
Scaling data with outliers
If your data contains many outliers, scaling using the mean and variance of the data is likely to not work very well. You can use robust_scale and RobustScaler as drop-in replacements instead.
...
Centering kernel matrices
...
Normalization
Normalization is the process of scaling individual samples to have unit norm. This process can be useful if you plan to use a quadratic form such as the dot-product or any other kernel to quantify the similarity of any pair of samples.
The function normalize provides a quick and easy way to perform this operation on a single array-like dataset, either using the l1 or l2 norms:
End of explanation
"""
X = [[ 1., -1., 2.],
[ 2., 0., 0.],
[ 0., 1.,-1.]]
binarizer = preprocessing.Binarizer().fit(X)  # fit does nothing
binarizer
binarizer.transform(X)
binarizer = preprocessing.Binarizer(threshold=1.1)
binarizer.transform(X)
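# A minimal sketch (assuming sklearn.pipeline is available) of using Binarizer as an early
# step of a Pipeline, as mentioned in the explanation below.
from sklearn.pipeline import Pipeline
pipe = Pipeline([("scale", preprocessing.StandardScaler()),
                 ("binarize", preprocessing.Binarizer(threshold=0.0))])
pipe.fit_transform(X)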
"""
Explanation: Sparse input
normalize and Normalizer accept both dense array-like and sparse matrices from scipy.sparse as input.
Binarization
Feature binarization is the process of thresholding numerical features to get boolean values.
...
As for the Normalizer, the utility class Binarizer is meant to be used in the early stages of sklearn.pipeline.Pipeline.
End of explanation
"""
enc = preprocessing.OneHotEncoder()
enc.fit([[0,0,3],[1,1,0],[0,2,1],[1,0,2]])
enc.transform([[0,1,3]]).toarray()
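# A minimal sketch of the companion function binarize mentioned below: it thresholds an
# array directly, without instantiating a Binarizer.
preprocessing.binarize([[1., -1., 2.], [2., 0., 0.], [0., 1., -1.]], threshold=1.1)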
"""
Explanation: The preprocessing module provides a companion function binarize to be used when the transformer API is not necessary.
binarize and Binarizer accept both dense array-like and sparse matrices from scipy.sparse as input.
Encoding categorical features
Integer representation cannot be used directly with scikit-learn estimators, as these expect continuous input, and would interpret the categories as being ordered, which is often not desired.
One possibility to convert categorical features to features that can be used with scikit-learn estimators is to use a one-of-K or one-hot encoding, which is implemented in OneHotEncoder. This estimator transforms each categorical feature with m possible values into m binary features, with only one active.
End of explanation
"""
|
gabrielelanaro/chemview
|
notebooks/QuickStart.ipynb
|
lgpl-2.1
|
from chemview import MolecularViewer
"""
Explanation: To import chemview you can write and execute the following code in a cell:
End of explanation
"""
import numpy as np
coordinates = np.array([[0.00, 0.13, 0.00], [0.12, 0.07, 0.00], [0.12,-0.07, 0.00],
[0.00,-0.14, 0.00], [-0.12,-0.07, 0.00],[-0.12, 0.07, 0.00],
[ 0.00, 0.24, 0.00], [ 0.21, 0.12, 0.00], [ 0.21,-0.12, 0.00],
[ 0.00,-0.24, 0.00], [-0.21,-0.12, 0.00],[-0.21, 0.12, 0.00]])
atomic_types = ['C', 'C', 'C', 'C', 'C', 'C', 'H', 'H', 'H', 'H', 'H', 'H']
bonds = [(0, 6), (1, 7), (2, 8), (3, 9), (4, 10), (5, 11),
(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (5, 0)]
"""
Explanation: To display a benzene molecule we need the following pieces of information:
The atomic types
The atomic coordinates
The bonds between atoms
For the scope of this tutorial, this information was extracted from here. You can use a chemistry package (like mdtraj or chemlab) to read the coordinates of your molecules.
We define the coordinates as a numpy array, the atomic types as a list of strings and the bonds as a list of (start, end) tuples.
End of explanation
"""
mv = MolecularViewer(coordinates, topology={'atom_types': atomic_types,
'bonds': bonds})
mv.ball_and_sticks()
mv
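# Other representations are available on the same viewer object; for example, the wireframe
# rendering mentioned in the text can be obtained by calling (instead of ball_and_sticks above):
# mv.lines()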
"""
Explanation: We can pass those to the MolecularViewer class and call the method ball_and_sticks to render the molecule in a ball-and-stick representation (the lines method would render it as a wireframe instead):
End of explanation
"""
|
vitojph/2016progpln
|
notebooks/9-tweepy.ipynb
|
mit
|
import tweepy
# add your twitter application credentials as strings
CONSUMER_KEY = 'CAMBIA ESTO'
CONSUMER_SECRET = 'CAMBIA ESTO'
ACCESS_TOKEN = 'CAMBIA ESTO'
ACCESS_TOKEN_SECRET = 'CAMBIA ESTO'
# authenticate the credentials
auth = tweepy.OAuthHandler(CONSUMER_KEY, CONSUMER_SECRET)
auth.set_access_token(ACCESS_TOKEN, ACCESS_TOKEN_SECRET)
# create a twitter client
t = tweepy.API(auth)
# post a greeting
t.update_status('¡Hola! Soy un bot, no me hagas caso.')
"""
Explanation: Interacting with Twitter
Introduction
In this notebook we are going to create a twitter bot (a program that interacts semi-automatically with twitter) that shows off some language-processing capabilities.
Next we will go over some snippets, or code examples, showing the minimal instructions a bot can include in order to post messages. I will also try to show how to access other, somewhat more complex, features.
Twitter developer account
To create a bot and, in general, any application that interacts with twitter, you need an account with developer permissions. Follow these steps:
The first thing you have to do, if you do not have a twitter account yet, is to sign up. If you already have an account on this network, I recommend that you use it and do not create a new one. Twitter's sign-up process requires validating a mobile phone number that is unique to each account. So, unless you have more than one mobile number and want headaches, use your everyday account.
Once registered, log in with your account and go to the twitter apps management page, where you can request permissions for a new app. Click on Create New App.
Fill in the required fields of the form that is shown:
Name: it has to be a unique name, so be original. You will probably have to try several names until you find one that is free. If you do not want to fail, use something like progplnbot-yourname.
Description: briefly describe the purpose of your bot.
Website: for now, any URL will do. If you do not have a website of your own and do not know what to enter, use the URL of the Linguistics website: http://www.ucm.es/linguistica
Read (cough, cough) the text of the agreement, select Yes, I agree and click the button at the bottom of the page that says Create your Twitter application.
If everything went well (basically, if you chose a name for your app that was free), a configuration page will be shown. You will have to change some settings.
In the Application Settings section, under Access level, make sure you have read and write permissions selected: Read and Write. If not, click on modify app permissions and grant read and write permissions. Once changed, click on Update Settings.
In the Keys and Access Tokens tab, click the Create my access token button to create the access tokens. These names probably do not mean anything to you, but from this page you have access to the four credentials your bot needs to authenticate itself and post messages. Take note of them, you will need them later:
Consumer key
Consumer secret
Access token
Access token secret
You now have everything you need to create your bot.
Minimal bot example with Python and tweepy.
The code below contains the minimal lines needed to create a twitter client and post a message on twitter.
Tip: Do not run it more than once or you will risk having your account banned.
End of explanation
"""
# add this library at the top of your code
import time
# some memorable Yogi Berra quotes (https://es.wikipedia.org/wiki/Yogi_Berra)
citas = '''The future ain't what it used to be.|
You can observe a lot by watching.|
It ain't over till it's over.|
It ain't the heat, it's the humility.|
We made too many wrong mistakes.|
I never said half the things I said.'''.split('|\n')
# iterate over the quotes and post them one by one
for cita in citas:
t.update_status(cita + ' #yogiberra')
    time.sleep(30)  # send a tweet every 30 seconds
"""
Explanation: You have to be careful when sending automated messages. If we want to publish more than one message and control the time that passes between one message and the next, we can use the time library as follows:
End of explanation
"""
# search for messages containing the expression "gaticos y monetes"
busqueda = t.search(q='gaticos y monetes')
# iterate over these messages
for mensaje in busqueda:
    # grab the user name
usuario = mensaje.user.screen_name
    # compose the reply message
miRespuesta = '@%s ¡monetes!' % (usuario)
    # send the reply
mensaje = t.update_status(miRespuesta, mensaje.id)
"""
Explanation: Another typical feature of twitter bots is sending messages to other users automatically. If we want our application to search through all the messages and reply to the author, we can do the following:
End of explanation
"""
# retrieve the last 5 mentions of my user
menciones = t.mentions_timeline(count=5)
# keep in mind that:
# if your twitter account is new and has no mentions, this will not work
# if you are using your everyday account, as I do,
# you will send messages to those people (or little robots)
for mencion in menciones:
    # grab the user name of whoever sent me the message
usuario = mencion.user.screen_name
    # compose the reply message
miRespuesta = '¡Hola, @%s! Soy un robotito. Este es un mensaje automático, no le hagas caso' % (usuario)
    # send the reply
    mensaje = t.update_status(miRespuesta, mencion.id)
"""
Explanation: If we want to reply to mentions, that is, if we want the bot to reply automatically to messages addressed to it, we can use the following code:
End of explanation
"""
# search for messages containing the expression "viejóvenes"
busqueda = t.search(q='viejóvenes')
# to keep things under control, I only want to retweet the last three messages
if len(busqueda) >= 3:
for mensaje in busqueda[:3]:
        # to retweet a message, call the retweet method
        # passing the unique identifier of the message in question
t.retweet(mensaje.id)
else:
for mensaje in busqueda:
t.retweet(mensaje.id)
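# A minimal sketch (not part of the original tutorial) of paginating over more search results
# with tweepy.Cursor; the query string and the 10-item limit are arbitrary examples.
for mensaje in tweepy.Cursor(t.search, q='viejóvenes').items(10):
    print(mensaje.user.screen_name, mensaje.text)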
"""
Explanation: Si queremos retuitear mensajes, podemos ejecutar lo siguiente:
End of explanation
"""
|
ES-DOC/esdoc-jupyterhub
|
notebooks/niwa/cmip6/models/ukesm1-0-ll/ocnbgchem.ipynb
|
gpl-3.0
|
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'niwa', 'ukesm1-0-ll', 'ocnbgchem')
"""
Explanation: ES-DOC CMIP6 Model Properties - Ocnbgchem
MIP Era: CMIP6
Institute: NIWA
Source ID: UKESM1-0-LL
Topic: Ocnbgchem
Sub-Topics: Tracers.
Properties: 65 (37 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:30
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
"""
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
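# Example with purely hypothetical placeholder values (replace with the real document authors):
# DOC.set_author("Jane Doe", "jane.doe@example.org")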
"""
Explanation: Document Authors
Set document authors
End of explanation
"""
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Contributors
Specify document contributors
End of explanation
"""
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
"""
Explanation: Document Publication
Specify document publication status
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Time Stepping Framework --> Passive Tracers Transport
3. Key Properties --> Time Stepping Framework --> Biology Sources Sinks
4. Key Properties --> Transport Scheme
5. Key Properties --> Boundary Forcing
6. Key Properties --> Gas Exchange
7. Key Properties --> Carbon Chemistry
8. Tracers
9. Tracers --> Ecosystem
10. Tracers --> Ecosystem --> Phytoplankton
11. Tracers --> Ecosystem --> Zooplankton
12. Tracers --> Disolved Organic Matter
13. Tracers --> Particules
14. Tracers --> Dic Alkalinity
1. Key Properties
Ocean Biogeochemistry key properties
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of ocean biogeochemistry model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of ocean biogeochemistry model code (PISCES 2.0,...)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.model_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Geochemical"
# "NPZD"
# "PFT"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.3. Model Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of ocean biogeochemistry model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.elemental_stoichiometry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Fixed"
# "Variable"
# "Mix of both"
# TODO - please enter value(s)
"""
Explanation: 1.4. Elemental Stoichiometry
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe elemental stoichiometry (fixed, variable, mix of the two)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.elemental_stoichiometry_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.5. Elemental Stoichiometry Details
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe which elements have fixed/variable stoichiometry
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.6. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.N
List of all prognostic tracer variables in the ocean biogeochemistry component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.diagnostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.7. Diagnostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.N
List of all diagnostic tracer variables in the ocean biogeochemistry component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.damping')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.8. Damping
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe any tracer damping used (such as artificial correction or relaxation to climatology,...)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.passive_tracers_transport.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "use ocean model transport time step"
# "use specific time step"
# TODO - please enter value(s)
"""
Explanation: 2. Key Properties --> Time Stepping Framework --> Passive Tracers Transport
Time stepping method for passive tracers transport in ocean biogeochemistry
2.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time stepping framework for passive tracers
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.passive_tracers_transport.timestep_if_not_from_ocean')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 2.2. Timestep If Not From Ocean
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Time step for passive tracers (if different from ocean)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.biology_sources_sinks.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "use ocean model transport time step"
# "use specific time step"
# TODO - please enter value(s)
"""
Explanation: 3. Key Properties --> Time Stepping Framework --> Biology Sources Sinks
Time stepping framework for biology sources and sinks in ocean biogeochemistry
3.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time stepping framework for biology sources and sinks
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.biology_sources_sinks.timestep_if_not_from_ocean')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 3.2. Timestep If Not From Ocean
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Time step for biology sources and sinks (if different from ocean)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Offline"
# "Online"
# TODO - please enter value(s)
"""
Explanation: 4. Key Properties --> Transport Scheme
Transport scheme in ocean biogeochemistry
4.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of transport scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Use that of ocean model"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 4.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Transport scheme used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.use_different_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4.3. Use Different Scheme
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe transport scheme if different from that of the ocean model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.atmospheric_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "from file (climatology)"
# "from file (interannual variations)"
# "from Atmospheric Chemistry model"
# TODO - please enter value(s)
"""
Explanation: 5. Key Properties --> Boundary Forcing
Properties of biogeochemistry boundary forcing
5.1. Atmospheric Deposition
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe how atmospheric deposition is modeled
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.river_input')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "from file (climatology)"
# "from file (interannual variations)"
# "from Land Surface model"
# TODO - please enter value(s)
"""
Explanation: 5.2. River Input
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe how river input is modeled
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.sediments_from_boundary_conditions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5.3. Sediments From Boundary Conditions
Is Required: FALSE Type: STRING Cardinality: 0.1
List which sediments are specified from boundary condition
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.sediments_from_explicit_model')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5.4. Sediments From Explicit Model
Is Required: FALSE Type: STRING Cardinality: 0.1
List which sediments are specified from explicit sediment model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CO2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 6. Key Properties --> Gas Exchange
*Properties of gas exchange in ocean biogeochemistry *
6.1. CO2 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is CO2 gas exchange modeled ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CO2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OMIP protocol"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 6.2. CO2 Exchange Type
Is Required: FALSE Type: ENUM Cardinality: 0.1
Describe CO2 gas exchange
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.O2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 6.3. O2 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is O2 gas exchange modeled ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.O2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OMIP protocol"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 6.4. O2 Exchange Type
Is Required: FALSE Type: ENUM Cardinality: 0.1
Describe O2 gas exchange
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.DMS_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 6.5. DMS Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is DMS gas exchange modeled ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.DMS_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.6. DMS Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify DMS gas exchange scheme type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 6.7. N2 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is N2 gas exchange modeled ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.8. N2 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify N2 gas exchange scheme type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2O_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 6.9. N2O Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is N2O gas exchange modeled ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2O_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.10. N2O Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify N2O gas exchange scheme type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC11_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 6.11. CFC11 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is CFC11 gas exchange modeled ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC11_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.12. CFC11 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify CFC11 gas exchange scheme type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC12_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 6.13. CFC12 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is CFC12 gas exchange modeled ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC12_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.14. CFC12 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify CFC12 gas exchange scheme type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.SF6_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 6.15. SF6 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is SF6 gas exchange modeled ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.SF6_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.16. SF6 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify SF6 gas exchange scheme type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.13CO2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 6.17. 13CO2 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is 13CO2 gas exchange modeled ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.13CO2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.18. 13CO2 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify 13CO2 gas exchange scheme type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.14CO2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 6.19. 14CO2 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is 14CO2 gas exchange modeled ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.14CO2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.20. 14CO2 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify 14CO2 gas exchange scheme type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.other_gases')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.21. Other Gases
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify any other gas exchange
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OMIP protocol"
# "Other protocol"
# TODO - please enter value(s)
"""
Explanation: 7. Key Properties --> Carbon Chemistry
Properties of carbon chemistry biogeochemistry
7.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe how carbon chemistry is modeled
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.pH_scale')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sea water"
# "Free"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 7.2. PH Scale
Is Required: FALSE Type: ENUM Cardinality: 0.1
If NOT OMIP protocol, describe pH scale.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.constants_if_not_OMIP')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7.3. Constants If Not OMIP
Is Required: FALSE Type: STRING Cardinality: 0.1
If NOT OMIP protocol, list carbon chemistry constants.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8. Tracers
Ocean biogeochemistry tracers
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of tracers in ocean biogeochemistry
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.sulfur_cycle_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 8.2. Sulfur Cycle Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is sulfur cycle modeled ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.nutrients_present')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Nitrogen (N)"
# "Phosphorous (P)"
# "Silicium (S)"
# "Iron (Fe)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 8.3. Nutrients Present
Is Required: TRUE Type: ENUM Cardinality: 1.N
List nutrient species present in ocean biogeochemistry model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.nitrous_species_if_N')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Nitrates (NO3)"
# "Amonium (NH4)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 8.4. Nitrous Species If N
Is Required: FALSE Type: ENUM Cardinality: 0.N
If nitrogen present, list nitrous species.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.nitrous_processes_if_N')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Dentrification"
# "N fixation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 8.5. Nitrous Processes If N
Is Required: FALSE Type: ENUM Cardinality: 0.N
If nitrogen present, list nitrous processes.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.upper_trophic_levels_definition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9. Tracers --> Ecosystem
Ecosystem properties in ocean biogeochemistry
9.1. Upper Trophic Levels Definition
Is Required: TRUE Type: STRING Cardinality: 1.1
Definition of upper trophic level (e.g. based on size) ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.upper_trophic_levels_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.2. Upper Trophic Levels Treatment
Is Required: TRUE Type: STRING Cardinality: 1.1
Define how upper trophic level are treated
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Generic"
# "PFT including size based (specify both below)"
# "Size based only (specify below)"
# "PFT only (specify below)"
# TODO - please enter value(s)
"""
Explanation: 10. Tracers --> Ecosystem --> Phytoplankton
Phytoplankton properties in ocean biogeochemistry
10.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of phytoplankton
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.pft')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Diatoms"
# "Nfixers"
# "Calcifiers"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 10.2. Pft
Is Required: FALSE Type: ENUM Cardinality: 0.N
Phytoplankton functional types (PFT) (if applicable)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.size_classes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Microphytoplankton"
# "Nanophytoplankton"
# "Picophytoplankton"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 10.3. Size Classes
Is Required: FALSE Type: ENUM Cardinality: 0.N
Phytoplankton size classes (if applicable)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.zooplankton.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Generic"
# "Size based (specify below)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 11. Tracers --> Ecosystem --> Zooplankton
Zooplankton properties in ocean biogeochemistry
11.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of zooplankton
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.zooplankton.size_classes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Microzooplankton"
# "Mesozooplankton"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 11.2. Size Classes
Is Required: FALSE Type: ENUM Cardinality: 0.N
Zooplankton size classes (if applicable)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.disolved_organic_matter.bacteria_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 12. Tracers --> Disolved Organic Matter
Disolved organic matter properties in ocean biogeochemistry
12.1. Bacteria Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there bacteria representation ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.disolved_organic_matter.lability')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Labile"
# "Semi-labile"
# "Refractory"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 12.2. Lability
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe treatment of lability in dissolved organic matter
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Diagnostic"
# "Diagnostic (Martin profile)"
# "Diagnostic (Balast)"
# "Prognostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13. Tracers --> Particules
Particulate carbon properties in ocean biogeochemistry
13.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is particulate carbon represented in ocean biogeochemistry?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.types_if_prognostic')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "POC"
# "PIC (calcite)"
# "PIC (aragonite"
# "BSi"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.2. Types If Prognostic
Is Required: FALSE Type: ENUM Cardinality: 0.N
If prognostic, type(s) of particulate matter taken into account
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.size_if_prognostic')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "No size spectrum used"
# "Full size spectrum"
# "Discrete size classes (specify which below)"
# TODO - please enter value(s)
"""
Explanation: 13.3. Size If Prognostic
Is Required: FALSE Type: ENUM Cardinality: 0.1
If prognostic, describe if a particule size spectrum is used to represent distribution of particules in water volume
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.size_if_discrete')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 13.4. Size If Discrete
Is Required: FALSE Type: STRING Cardinality: 0.1
If prognostic and discrete size, describe which size classes are used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.sinking_speed_if_prognostic')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Function of particule size"
# "Function of particule type (balast)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.5. Sinking Speed If Prognostic
Is Required: FALSE Type: ENUM Cardinality: 0.1
If prognostic, method for calculation of sinking speed of particules
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.carbon_isotopes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "C13"
# "C14)"
# TODO - please enter value(s)
"""
Explanation: 14. Tracers --> Dic Alkalinity
DIC and alkalinity properties in ocean biogeochemistry
14.1. Carbon Isotopes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Which carbon isotopes are modelled (C13, C14)?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.abiotic_carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 14.2. Abiotic Carbon
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is abiotic carbon modelled ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.alkalinity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Prognostic"
# "Diagnostic)"
# TODO - please enter value(s)
"""
Explanation: 14.3. Alkalinity
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is alkalinity modelled ?
End of explanation
"""
|
GoogleCloudPlatform/training-data-analyst
|
courses/machine_learning/deepdive2/structured/solutions/1b_prepare_data_babyweight.ipynb
|
apache-2.0
|
!sudo chown -R jupyter:jupyter /home/jupyter/training-data-analyst
!pip install --user google-cloud-bigquery==1.25.0
"""
Explanation: LAB 1b: Prepare babyweight dataset.
Learning Objectives
Setup up the environment
Preprocess natality dataset
Augment natality dataset
Create the train and eval tables in BigQuery
Export data from BigQuery to GCS in CSV format
Introduction
In this notebook, we will prepare the babyweight dataset for model development and training to predict the weight of a baby before it is born. We will use BigQuery to perform data augmentation and preprocessing which will be used for AutoML Tables, BigQuery ML, and Keras models trained on Cloud AI Platform.
In this lab, we will set up the environment, create the project dataset, preprocess and augment natality dataset, create the train and eval tables in BigQuery, and export data from BigQuery to GCS in CSV format.
Each learning objective will correspond to a #TODO in this student lab notebook -- try to complete this notebook first and then review the solution notebook.
Set up environment variables and load necessary libraries
Check that the Google BigQuery library is installed and if not, install it.
End of explanation
"""
import os
from google.cloud import bigquery
"""
Explanation: Note: Restart your kernel to use updated packages.
Kindly ignore the deprecation warnings and incompatibility errors related to google-cloud-storage.
Import necessary libraries.
End of explanation
"""
%%bash
export PROJECT=$(gcloud config list project --format "value(core.project)")
echo "Your current GCP Project Name is: "$PROJECT
# TODO: Change environment variables
PROJECT = "cloud-training-demos" # REPLACE WITH YOUR PROJECT NAME
BUCKET = "BUCKET" # REPLACE WITH YOUR PROJECT NAME, DEFAULT BUCKET WILL BE PROJECT ID
REGION = "us-central1" # REPLACE WITH YOUR BUCKET REGION e.g. us-central1
# Do not change these
os.environ["BUCKET"] = PROJECT if BUCKET == "BUCKET" else BUCKET # DEFAULT BUCKET WILL BE PROJECT ID
os.environ["REGION"] = REGION
if PROJECT == "cloud-training-demos":
print("Don't forget to update your PROJECT name! Currently:", PROJECT)
"""
Explanation: Set environment variables so that we can use them throughout the entire lab. We will be using our project name for our bucket, so you only need to change your project and region.
End of explanation
"""
%%bash
# Create a BigQuery dataset for babyweight if it doesn't exist
datasetexists=$(bq ls -d | grep -w babyweight)
if [ -n "$datasetexists" ]; then
echo -e "BigQuery dataset already exists, let's not recreate it."
else
echo "Creating BigQuery dataset titled: babyweight"
bq --location=US mk --dataset \
--description "Babyweight" \
$PROJECT:babyweight
echo "Here are your current datasets:"
bq ls
fi
## Create GCS bucket if it doesn't exist already...
exists=$(gsutil ls -d | grep -w gs://${BUCKET}/)
if [ -n "$exists" ]; then
echo -e "Bucket exists, let's not recreate it."
else
echo "Creating a new GCS bucket."
gsutil mb -l ${REGION} gs://${BUCKET}
echo "Here are your current buckets:"
gsutil ls
fi
"""
Explanation: The source dataset
Our dataset is hosted in BigQuery. The CDC's Natality data has details on US births from 1969 to 2008 and is a publically available dataset, meaning anyone with a GCP account has access. Click here to access the dataset.
The natality dataset is relatively large at almost 138 million rows and 31 columns, but simple to understand. weight_pounds is the target, the continuous value we’ll train a model to predict.
Create a BigQuery Dataset and Google Cloud Storage Bucket
A BigQuery dataset is a container for tables, views, and models built with BigQuery ML. Let's create one called babyweight if we have not already done so in an earlier lab. We'll do the same for a GCS bucket for our project too.
End of explanation
"""
%%bigquery
CREATE OR REPLACE TABLE
babyweight.babyweight_data AS
SELECT
weight_pounds,
CAST(is_male AS STRING) AS is_male,
mother_age,
CASE
WHEN plurality = 1 THEN "Single(1)"
WHEN plurality = 2 THEN "Twins(2)"
WHEN plurality = 3 THEN "Triplets(3)"
WHEN plurality = 4 THEN "Quadruplets(4)"
WHEN plurality = 5 THEN "Quintuplets(5)"
END AS plurality,
gestation_weeks,
FARM_FINGERPRINT(
CONCAT(
CAST(year AS STRING),
CAST(month AS STRING)
)
) AS hashmonth
FROM
publicdata.samples.natality
WHERE
year > 2000
AND weight_pounds > 0
AND mother_age > 0
AND plurality > 0
AND gestation_weeks > 0
"""
Explanation: Create the training and evaluation data tables
Since there is already a publicly available dataset, we can simply create the training and evaluation data tables using this raw input data. First we are going to create a subset of the data limiting our columns to weight_pounds, is_male, mother_age, plurality, and gestation_weeks as well as some simple filtering and a column to hash on for repeatable splitting.
Note: The dataset in the create table code below is the one created previously, e.g. "babyweight".
Preprocess and filter dataset
We have some preprocessing and filtering we would like to do to get our data in the right format for training.
Preprocessing:
* Cast is_male from BOOL to STRING
* Cast plurality from INTEGER to STRING where [1, 2, 3, 4, 5] becomes ["Single(1)", "Twins(2)", "Triplets(3)", "Quadruplets(4)", "Quintuplets(5)"]
* Add hashcolumn hashing on year and month
Filtering:
* Only want data for years later than 2000
* Only want baby weights greater than 0
* Only want mothers whose age is greater than 0
* Only want plurality to be greater than 0
* Only want the number of weeks of gestation to be greater than 0
End of explanation
"""
%%bigquery
CREATE OR REPLACE TABLE
babyweight.babyweight_augmented_data AS
SELECT
weight_pounds,
is_male,
mother_age,
plurality,
gestation_weeks,
hashmonth
FROM
babyweight.babyweight_data
UNION ALL
SELECT
weight_pounds,
"Unknown" AS is_male,
mother_age,
CASE
WHEN plurality = "Single(1)" THEN plurality
ELSE "Multiple(2+)"
END AS plurality,
gestation_weeks,
hashmonth
FROM
babyweight.babyweight_data
"""
Explanation: Augment dataset to simulate missing data
Now we want to augment our dataset with our simulated babyweight data by setting all gender information to Unknown and setting plurality of all non-single births to Multiple(2+).
End of explanation
"""
%%bigquery
CREATE OR REPLACE TABLE
babyweight.babyweight_data_train AS
SELECT
weight_pounds,
is_male,
mother_age,
plurality,
gestation_weeks
FROM
babyweight.babyweight_augmented_data
WHERE
ABS(MOD(hashmonth, 4)) < 3
"""
Explanation: Split augmented dataset into train and eval sets
Using hashmonth, apply a modulo to get approximately a 75/25 train/eval split.
Split augmented dataset into train dataset
End of explanation
"""
%%bigquery
CREATE OR REPLACE TABLE
babyweight.babyweight_data_eval AS
SELECT
weight_pounds,
is_male,
mother_age,
plurality,
gestation_weeks
FROM
babyweight.babyweight_augmented_data
WHERE
ABS(MOD(hashmonth, 4)) = 3
"""
Explanation: Split augmented dataset into eval dataset
End of explanation
"""
%%bigquery
-- LIMIT 0 is a free query; this allows us to check that the table exists.
SELECT * FROM babyweight.babyweight_data_train
LIMIT 0
%%bigquery
-- LIMIT 0 is a free query; this allows us to check that the table exists.
SELECT * FROM babyweight.babyweight_data_eval
LIMIT 0
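%%bigquery
-- Optional sanity check (illustrative): compare the sizes of the train and eval splits,
-- which should be roughly 75% / 25% given the hashmonth modulo used above.
SELECT "train" AS split, COUNT(*) AS num_rows
FROM babyweight.babyweight_data_train
UNION ALL
SELECT "eval" AS split, COUNT(*) AS num_rows
FROM babyweight.babyweight_data_eval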
"""
Explanation: Verify table creation
Verify that you created the dataset and training data table.
End of explanation
"""
# Construct a BigQuery client object.
client = bigquery.Client()
dataset_name = "babyweight"
BUCKET = "your bucket id"
# Create dataset reference object
dataset_ref = client.dataset(
dataset_id=dataset_name, project=client.project)
# Export both train and eval tables
for step in ["train", "eval"]:
destination_uri = os.path.join(
"gs://", BUCKET, dataset_name, "data", "{}*.csv".format(step))
table_name = "babyweight_data_{}".format(step)
table_ref = dataset_ref.table(table_name)
extract_job = client.extract_table(
table_ref,
destination_uri,
# Location must match that of the source table.
location="US",
) # API request
extract_job.result() # Waits for job to complete.
print("Exported {}:{}.{} to {}".format(
client.project, dataset_name, table_name, destination_uri))
"""
Explanation: Export from BigQuery to CSVs in GCS
Use BigQuery Python API to export our train and eval tables to Google Cloud Storage in the CSV format to be used later for TensorFlow/Keras training. We'll want to use the dataset we've been using above as well as repeat the process for both training and evaluation data.Update BUCKET with your bucket ID
End of explanation
"""
%%bash
gsutil ls gs://${BUCKET}/babyweight/data/*.csv
%%bash
gsutil cat gs://${BUCKET}/babyweight/data/train000000000000.csv | head -5
%%bash
gsutil cat gs://${BUCKET}/babyweight/data/eval000000000000.csv | head -5
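%%bash
# Optional sanity check (illustrative): count the exported rows per split; the wildcard
# matches the sharded file names produced by the BigQuery export above.
gsutil cat gs://${BUCKET}/babyweight/data/train*.csv | wc -l
gsutil cat gs://${BUCKET}/babyweight/data/eval*.csv | wc -l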
"""
Explanation: Verify CSV creation
Verify that we correctly created the CSV files in our bucket.
End of explanation
"""
|
ozorich/phys202-2015-work
|
assignments/assignment06/InteractEx05.ipynb
|
mit
|
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
from IPython.html.widgets import interact, interactive, fixed
from IPython.display import display
from IPython.display import SVG
"""
Explanation: Interact Exercise 5
Imports
Put the standard imports for Matplotlib, Numpy and the IPython widgets in the following cell.
End of explanation
"""
s = """
<svg width="100" height="100">
<circle cx="50" cy="50" r="20" fill="aquamarine" />
</svg>
"""
SVG(s)
"""
Explanation: Interact with SVG display
SVG is a simple way of drawing vector graphics in the browser. Here is a simple example of how SVG can be used to draw a circle in the Notebook:
End of explanation
"""
def draw_circle(width=100, height=100, cx=25, cy=25, r=5, fill='red'):
"""Draw an SVG circle.
Parameters
----------
width : int
The width of the svg drawing area in px.
height : int
The height of the svg drawing area in px.
cx : int
The x position of the center of the circle in px.
cy : int
The y position of the center of the circle in px.
r : int
The radius of the circle in px.
fill : str
The fill color of the circle.
"""
circle="""
<svg width="%d" height="%d">
<circle cx="%d" cy="%d" r="%d" fill="%s"/>
</svg>""" %(width, height,cx,cy,r,fill) #used %s to include variables into the svg drawing of circle
display(SVG(circle))
draw_circle(cx=10, cy=10, r=10, fill='blue')
assert True # leave this to grade the draw_circle function
"""
Explanation: Write a function named draw_circle that draws a circle using SVG. Your function should take the parameters of the circle as function arguments and have defaults as shown. You will have to write the raw SVG code as a Python string and then use the IPython.display.SVG object and IPython.display.display function.
End of explanation
"""
w = interactive(draw_circle, width=fixed(300), height=fixed(300), cx=(0, 300, 10), cy=(0, 300, 10), r=(0, 50, 5), fill='red')
c = w.children
assert c[0].min==0 and c[0].max==300
assert c[1].min==0 and c[1].max==300
assert c[2].min==0 and c[2].max==50
assert c[3].value=='red'
"""
Explanation: Use interactive to build a user interface for exploring the draw_circle function:
width: a fixed value of 300px
height: a fixed value of 300px
cx/cy: a slider in the range [0,300]
r: a slider in the range [0,50]
fill: a text area in which you can type a color's name
Save the return value of interactive to a variable named w.
End of explanation
"""
display(w)
assert True # leave this to grade the display of the widget
"""
Explanation: Use the display function to show the widgets created by interactive:
End of explanation
"""
|
wasit7/tutorials
|
notebooks/ipcluster.ipynb
|
mit
|
from IPython import parallel
c=parallel.Client()
dview=c.direct_view()
dview.block=True
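# Note: the cluster must already be running before the Client is created; from a terminal
# this can be done with, for example: ipcluster start -n 4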
"""
Explanation: IPython.parallel
To start the cluster, you can use the notebook GUI or the command line: $ ipcluster start
End of explanation
"""
c.ids
"""
Explanation: Check the IDs of the available engines (by default, one engine per core)
End of explanation
"""
import numpy as np
x=np.arange(100)
dview.scatter('x',x)
print c[0]['x']
print c[1]['x']
print c[-1]['x']
"""
Explanation: Simple parallel summation
First the input array is initialized and distributed over the cluster
End of explanation
"""
dview.execute('import numpy as np; y=np.sum(x)')
ys=dview.gather('y')
total=np.sum(ys)
print total
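# Sanity check (illustrative): the gathered parallel result should match a plain serial sum
print np.sum(x)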
"""
Explanation: Parallel sum
Each engine computes the sum of its subset and sends the result back to the controller
End of explanation
"""
|
probml/pyprobml
|
notebooks/book2/28/poisson_lds_example.ipynb
|
mit
|
!pip install -qq git+git://github.com/lindermanlab/ssm-jax-refactor.git
try:
import ssm
except ModuleNotFoundError:
%pip install -qq ssm
import ssm
"""
Explanation: <a href="https://colab.research.google.com/github/probml/probml-notebooks/blob/main/notebooks/poisson_lds_example.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
Linear Dynamical System with Poisson likelihood
Code modified from
https://github.com/lindermanlab/ssm-jax-refactor/blob/main/notebooks/poisson-lds-example.ipynb
End of explanation
"""
import jax.numpy as np
import jax.random as jr
import jax.experimental.optimizers as optimizers
from jax import jit, value_and_grad, vmap
try:
from tqdm.auto import trange
except ModuleNotFoundError:
%pip install -qq tqdm
from tqdm.auto import trange
import matplotlib.pyplot as plt
try:
from tensorflow_probability.substrates import jax as tfp
except ModuleNotFoundError:
%pip install -qq tensorflow-probability
from tensorflow_probability.substrates import jax as tfp
try:
from ssm.lds.models import GaussianLDS, PoissonLDS
except ModuleNotFoundError:
%pip install -qq ssm
from ssm.lds.models import GaussianLDS, PoissonLDS
from ssm.distributions.linreg import GaussianLinearRegression
from ssm.utils import random_rotation
from ssm.plots import plot_dynamics_2d
from matplotlib.gridspec import GridSpec
def plot_emissions(states, data):
latent_dim = states.shape[-1]
emissions_dim = data.shape[-1]
num_timesteps = data.shape[0]
plt.figure(figsize=(8, 6))
gs = GridSpec(2, 1, height_ratios=(1, emissions_dim / latent_dim))
# Plot the continuous latent states
lim = abs(states).max()
plt.subplot(gs[0])
for d in range(latent_dim):
plt.plot(states[:, d] + lim * d, "-")
plt.yticks(np.arange(latent_dim) * lim, ["$x_{}$".format(d + 1) for d in range(latent_dim)])
plt.xticks([])
plt.xlim(0, num_timesteps)
plt.title("Sampled Latent States")
lim = abs(data).max()
plt.subplot(gs[1])
for n in range(emissions_dim):
plt.plot(data[:, n] - lim * n, "-k")
plt.yticks(-np.arange(emissions_dim) * lim, ["$y_{{ {} }}$".format(n + 1) for n in range(emissions_dim)])
plt.xlabel("time")
plt.xlim(0, num_timesteps)
plt.title("Sampled Emissions")
plt.tight_layout()
def plot_emissions_poisson(states, data):
latent_dim = states.shape[-1]
emissions_dim = data.shape[-1]
num_timesteps = data.shape[0]
plt.figure(figsize=(8, 6))
gs = GridSpec(2, 1, height_ratios=(1, emissions_dim / latent_dim))
# Plot the continuous latent states
lim = abs(states).max()
plt.subplot(gs[0])
for d in range(latent_dim):
plt.plot(states[:, d] + lim * d, "-")
plt.yticks(np.arange(latent_dim) * lim, ["$z_{}$".format(d + 1) for d in range(latent_dim)])
plt.xticks([])
plt.xlim(0, time_bins)
plt.title("Sampled Latent States")
lim = abs(data).max()
plt.subplot(gs[1])
plt.imshow(data.T, aspect="auto", interpolation="none")
plt.xlabel("time")
plt.xlim(0, time_bins)
plt.yticks(ticks=np.arange(emissions_dim))
# plt.ylabel("Neuron")
plt.title("Sampled Emissions (Counts / Time Bin)")
plt.tight_layout()
plt.colorbar()
def plot_dynamics(lds, states):
q = plot_dynamics_2d(
lds._dynamics.weights,
bias_vector=lds._dynamics.bias,
mins=states.min(axis=0),
maxs=states.max(axis=0),
color="blue",
)
plt.plot(states[:, 0], states[:, 1], lw=2, label="Latent State")
plt.plot(states[0, 0], states[0, 1], "*r", markersize=10, label="Initial State")
plt.xlabel("$z_1$")
plt.ylabel("$z_2$")
plt.title("Latent States & Dynamics")
plt.legend(bbox_to_anchor=(1, 1))
# plt.show()
def extract_trial_stats(trial_idx, posterior, all_data, all_states, fitted_lds, true_lds):
# Posterior Mean
Ex = posterior.mean()[trial_idx]
states = all_states[trial_idx]
data = all_data[trial_idx]
# Compute the data predictions
C = fitted_lds.emissions_matrix
d = fitted_lds.emissions_bias
Ey = Ex @ C.T + d
Covy = C @ posterior.covariance()[trial_idx] @ C.T
# basically recover the "true" input to the Poisson GLM
Ey_true = states @ true_lds.emissions_matrix.T + true_lds.emissions_bias
return states, data, Ex, Ey, Covy, Ey_true
def compare_dynamics(Ex, states, data):
# Plot
fig, axs = plt.subplots(1, 2, figsize=(8, 4))
q = plot_dynamics_2d(
true_lds._dynamics.weights,
bias_vector=true_lds._dynamics.bias,
mins=states.min(axis=0),
maxs=states.max(axis=0),
color="blue",
axis=axs[0],
)
axs[0].plot(states[:, 0], states[:, 1], lw=2)
axs[0].plot(states[0, 0], states[0, 1], "*r", markersize=10, label="$z_{init}$")
axs[0].set_xlabel("$z_1$")
axs[0].set_ylabel("$z_2$")
axs[0].set_title("True Latent States & Dynamics")
q = plot_dynamics_2d(
fitted_lds._dynamics.weights,
bias_vector=fitted_lds._dynamics.bias,
mins=Ex.min(axis=0),
maxs=Ex.max(axis=0),
color="red",
axis=axs[1],
)
axs[1].plot(Ex[:, 0], Ex[:, 1], lw=2)
axs[1].plot(Ex[0, 0], Ex[0, 1], "*r", markersize=10, label="$z_{init}$")
axs[1].set_xlabel("$z_1$")
axs[1].set_ylabel("$z_2$")
axs[1].set_title("Simulated Latent States & Dynamics")
plt.tight_layout()
# plt.show()
def compare_smoothened_predictions(Ey, Ey_true, Covy, data):
data_dim = data.shape[-1]
plt.figure(figsize=(15, 6))
plt.plot(Ey_true + 10 * np.arange(data_dim))
plt.plot(Ey + 10 * np.arange(data_dim), "--k")
for i in range(data_dim):
plt.fill_between(
np.arange(len(data)),
10 * i + Ey[:, i] - 2 * np.sqrt(Covy[:, i, i]),
10 * i + Ey[:, i] + 2 * np.sqrt(Covy[:, i, i]),
color="k",
alpha=0.25,
)
plt.xlabel("time")
plt.ylabel("data and predictions (for each neuron)")
plt.plot([0], "--k", label="Predicted") # dummy trace for legend
plt.plot([0], "-k", label="True")
plt.legend(loc="upper right")
# plt.show()
# Some parameters to define our model
emissions_dim = 5 # num_observations
latent_dim = 2
seed = jr.PRNGKey(0)
# Initialize our true Poisson LDS model
true_lds = PoissonLDS(num_latent_dims=latent_dim, num_emission_dims=emissions_dim, seed=seed)
"""
Explanation: Imports and Plotting Functions
End of explanation
"""
import warnings
num_trials = 5
time_bins = 200
# catch annoying warnings of tfp Poisson sampling
rng = jr.PRNGKey(0)
with warnings.catch_warnings():
warnings.filterwarnings("ignore", category=UserWarning)
all_states, all_data = true_lds.sample(key=rng, num_steps=time_bins, num_samples=num_trials)
plot_emissions_poisson(all_states[0], all_data[0])
plt.savefig("poisson-hmm-data.pdf")
plt.savefig("poisson-hmm-data.png")
"""
Explanation: Sample some synthetic data from the Poisson LDS
End of explanation
"""
latent_dim = 2
seed = jr.PRNGKey(32) # NOTE: different seed!
test_lds = PoissonLDS(num_emission_dims=emissions_dim, num_latent_dims=latent_dim, seed=seed)
rng = jr.PRNGKey(10)
elbos, fitted_lds, posteriors = test_lds.fit(all_data, method="laplace_em", rng=rng, num_iters=25)
# NOTE: you could also call the laplace_em routine directly like this
# from ssm.inference.laplace_em import laplace_em
# elbos, fitted_lds, posteriors = laplace_em(rng,
# test_lds,
# all_data,
# num_iters=25,
# laplace_mode_fit_method="BFGS")
plt.plot(elbos)
plt.show()
num_emissions_channels_to_view = 5
num_trials_to_view = 1
# Ex is expected hidden states, Ey is expected visible values
for trial_idx in range(num_trials_to_view):
states, data, Ex, Ey, Covy, Ey_true = extract_trial_stats(
trial_idx, posteriors, all_data, all_states, fitted_lds, true_lds
)
compare_dynamics(Ex, states, data)
plt.savefig("poisson-hmm-dynamics.pdf")
plt.savefig("poisson-hmm-dynamics.png")
plt.show()
compare_smoothened_predictions(
Ey[:, :num_emissions_channels_to_view],
Ey_true[:, :num_emissions_channels_to_view],
Covy,
data[:, :num_emissions_channels_to_view],
)
plt.savefig("poisson-hmm-trajectory.pdf")
plt.savefig("poisson-hmm-trajectory.png")
plt.show()
"""
Explanation: Inference: let's fit a Poisson LDS to our data
Since we have a Poisson emissions model, we can no longer perform exact EM.
Instead, we perform Laplace EM, in which we approximate the posterior using a Laplace (Gaussian) approximation.
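To make the Laplace idea concrete, here is a tiny self-contained sketch (added for illustration only; this is not how ssm's laplace_em is implemented) for a single latent value with a Poisson likelihood: climb to the posterior mode, then use the curvature (negative Hessian) at the mode as the Gaussian precision.
import jax
import jax.numpy as jnp

y_counts = jnp.array([3.0, 4.0, 2.0])                 # toy observed counts
def log_joint(z):                                     # N(0, 1) prior plus Poisson(exp(z)) likelihood
    return jnp.sum(y_counts * z - jnp.exp(z)) - 0.5 * z ** 2

grad_fn = jax.grad(log_joint)
z_mode = 0.0
for _ in range(100):                                  # crude gradient ascent to the posterior mode
    z_mode = z_mode + 0.1 * grad_fn(z_mode)
precision = -jax.hessian(log_joint)(z_mode)           # curvature at the mode
print("Laplace approximation: mean=%.3f, variance=%.3f" % (z_mode, 1.0 / precision))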
End of explanation
"""
|
KiranArun/A-Level_Maths
|
Differentiation/Differentiation.ipynb
|
mit
|
# Note for all the Python examples below: floating-point precision limits how small dx can be, so some results will be slightly inaccurate
import matplotlib.pyplot as plt
import numpy as np
"""
Explanation: Differentiation
End of explanation
"""
def f(x): # sample function
return x**2
# graph data arrays
x = np.linspace(-10,10,210)
y = f(x)
# our x value
a = float(input('x value from -10 to 10\n'))
# delta x: a very small step; Python floats can't handle values that are too small
dx = 0.000001
#dy dx equation to find gradient
M = (f(a+dx)-f(a)) / dx
#y = mx + c equation for tangent
tan = M * (x-a) + f(a)
# set up plot
fig = plt.figure()
ax = fig.add_subplot(111)
# plot graph
ax.plot(x,y,a,f(a),'om',x,tan,)
ax.grid(True)
ax.axis([-10, 10, -100, 100])
plt.show()
print 'dydx = ',M
print '2 x', a, '=', 2*a
"""
Explanation: Limits
Limits describe the value a function approaches (or gets as close as possible to) as its input approaches some point.
$$\lim_{x\to c}f(x) = L$$
In this case, $f(x)$ is being made as close to L as possible by making x sufficiently close to c.
read as:
the limit of f of x, as x approaches c, equals L
To find the gradient of a line, we use $M = \frac{dy}{dx}$
The derivative of $f(x)$ is the gradient of the tangent of line $y = f(x)$ at general point $(x,y)$
To find the derivative/gradient:
$M = \frac{dy}{dx}$
$dx = x_1 - x$
$dy = f(x+dx) - f(x)$
$\frac{dy}{dx} = \frac{f(x+dx)-f(x)}{dx}$
$$\frac{dy}{dx} = \lim_{dx\to 0}\frac{f(x+dx)-f(x)}{dx}$$
The limit makes the difference between $x$ and $x_1$ so small that the gradient of the chord becomes equal (or as close as we like) to the gradient of the tangent at $x$
Example:
$f(x) = x^2$
$\frac{dy}{dx} = \lim_{dx\to 0}\frac{f(x+dx)-f(x)}{dx}$
$\frac{dy}{dx} = \lim_{dx\to 0}\frac{(x+dx)^2 - x^2}{dx}$
$\frac{dy}{dx} = \lim_{dx\to 0}\frac{x^2 + 2x(dx) + (dx)^2 - x^2}{dx}$
$\frac{dy}{dx} = \lim_{dx\to 0}2x + dx$
as dx gets as close to zero as possible, this is equivalent to:
$$\frac{dy}{dx} = 2x$$
End of explanation
"""
# Python example of previous example
# since we are using a computer, we don't need the chain rule, but this is to prove it works
# function from previous example
def f(x):
return (3*x**2 + 5*x)**10
# create line data
x_data = np.linspace(0,0.4,4001)
y = f(x_data)
# set params
x = 0.3
d = 0.000001
# use equation from example
M_chain = (60*x + 50)*(3*x**2 + 5*x)**9
# create tangent
tan_chain = M_chain * (x_data-x) + f(x)
# use equation from first princeple
M = (f(x+d)-f(x)) /d
# create tangent
tan = M * (x_data-x) + f(x)
# plot
fig, ax = plt.subplots()
fig, ax2 = plt.subplots()
ax.grid(True)
ax2.grid(True)
ax.plot(x_data,y,x,f(x),'om',x_data,tan,'g:')
ax2.plot(x_data,y,x,f(x),'om',x_data,tan_chain,'r--')
plt.show()
print 'dydx = ',M
print 'using chain rule:\ndydx =', M_chain
"""
Explanation: to find the gradient for $x^n$:
$f(x)=x^2$
  $\frac{d}{dx}f(x) = 2x$
$f(x)=x^3$
  $\frac{d}{dx}f(x) = 3x^2$
$f(x)=x^4$
  $\frac{d}{dx}f(x) = 4x^3$
$f(x)=5x^3 + 3x$
  $\frac{d}{dx}f(x) = 15x^2 + 3$
the formula is:
$f(x) = ax^n$
$\frac{d}{dx}f(x) = anx^{n-1}$
With y in this format:
$y = \frac{a}{x^n}$
We can use this equation:
$\frac{dy}{dx} = -\frac{an}{x^{n+1}}$
Example 1:
$y = \frac{2}{x^3}$
$\frac{dy}{dx} = -\frac{6}{x^4}$
Example 2:
$y = \frac{1}{x}-\frac{2}{x^2}+\frac{3}{x^3}$
$\frac{dy}{dx} = -\frac{1}{x^2}+\frac{4}{x^3}-\frac{9}{x^4}$
Quick Rules:
Sum/Difference Rule:
(sum for addition and difference for subtraction)
$$\frac{d}{dx}[f(x)+g(x)] = \frac{d}{dx}f(x) + \frac{d}{dx}g(x)$$
Constant Multiple Rule:
where c is a constant and f is a function
$$(cf(x))′=c(f(x))'$$
Chain Rule
The Chain Rule is used to find the derivative of an equation with a power too large to expand using the binomial theorem
$$\frac{d}{dx}f(g(x)) = f'(g(x))\cdot g'(x)$$
Example 1:
$y = (3x^2 + 5x)^{10}$
let $u = 3x^2 + 5x$
so now the derivative of the function being applied to u:
$y = u^{10}$
$\frac{dy}{du} = 10u^9$
also differentiating u itself:
$\frac{du}{dx} = 6x + 5$
and to combine them:
$\frac{dy}{dx} = \frac{dy}{du} \cdot \frac{du}{dx}$
the $du$'s cancel each other out
so the answer is:
(for an exam we can skip the middle steps and go straight to this step)
$\frac{dy}{dx} = 10(3x^2 + 5x)^9(6x + 5)$
$\frac{dy}{dx} = (60x + 50)(3x^2 + 5x)^9$
End of explanation
"""
# sample function
def f(x):
return x**2
# derivative function
def f_p(x):
return (f(x+dx)-f(x)) / dx
# second derivative function
def f_pp(x):
return ((f_p(x+dx))-f_p(x)) / dx
# graph data arrays
x_data = np.linspace(-10,10,201)
y = f(x_data)
# our x value
x = float(input('x value from -10 to 10\n'))
# delta x: a very small step; Python floats can't handle values that are too small
dx = 0.0000001
# find derivative
M = f_p(x)
# create tangent line to plot
tan = M * (x_data-x) + f(x)
# find second derivative
M2 = f_pp(x)
# create tangent line to plot
tan2 = M2 * (x_data-x) + f(x)
# set up plot
fig, ax = plt.subplots()
# plot graph
ax.plot(x_data,y,x,f(x),'om',x_data,tan,x_data,tan2)
ax.grid(True)
#ax.axis([-10, 10, -100, 100])
plt.show()
print "f' = ",M
print "f'' = ",M2
"""
Explanation: Example 2:
$y = \frac{2}{x^2 + 2}$
$y = 2(x^2 + 2)^{-1}$
$\frac{dy}{dx} = -2(x^2 + 2)^{-2}(2x)$
$\frac{dy}{dx} = \frac{-4x}{(x^2 + 2)^2}$
Product Rule
The product rule is used when there are two terms being multiplied in the equation you are trying to differentiate
$$\frac{d}{dx}uv = u\frac{dv}{dx} + v\frac{du}{dx}$$
Example 1:
$y = (2x + 1)^4(3x^2 - 5x)$
to find:
$\frac{d}{dx}(2x + 1)^4(3x^2 - 5x)$
using:
$\frac{d}{dx}uv = u\frac{dv}{dx} + v\frac{du}{dx}$
$ = (2x + 1)^4(6x - 5) + (3x^2 - 5x)(4(2x + 1)^3(2))$
$ = (6x - 5)(2x + 1)^4 + 8(3x^2 - 5x)(2x + 1)^3$
Example 2:
$y = (3x^2 + 1)(x^3 + x)^2$
$= (3x^2 + 1)(2(x^3 + x)(3x^2 + 1)) + (x^3 + x)^2(6x)$
$= 2(x^3 + x)(3x^2 + 1)^2 +6x(x^3 + 3)^2$
Example 3:
$y = \sqrt{2a^2x^2 + 2x}(ax + b)^3$
$= (2a^2x^2 + 2x)^{\frac{1}{2}}(ax + b)^3$
$= (2a^2x^2 + 2x)^{\frac{1}{2}}(3(ax + b)^2(a)) + (\frac{1}{2}(2a^2x^2 + 2x)^{-\frac{1}{2}}(4a^2x + 2))(ax + b)^3$
$= 3a(ax + b)^2\sqrt{2a^2x^2 + 2x} + \frac{(2a^2x + 1)(ax + b)^3}{\sqrt{2a^2x^2 + 2x}}$
Quotient Rule
Used to differentiate a fraction (a quotient of two functions); similar to the product rule
$$\frac{d}{dx}\frac{u}{v}=\frac{v\frac{du}{dx}-u\frac{dv}{dx}}{v^2}$$
Example 1:
$y = \frac{3x + 2}{2x - 1}$
to find:
$\frac{d}{dx}\frac{3x + 2}{2x - 1}$
using:
$\frac{d}{dx}\frac{u}{v}=\frac{v\frac{du}{dx}-u\frac{dv}{dx}}{v^2}$
$= \frac{(2x - 1)(3) - (3x + 2)(2)}{(2x - 1)^2}$
$= \frac{6x - 3 - 6x - 4}{(2x - 1)^2}$
$= \frac{-7}{(2x - 1)^2}$
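A quick numerical check of this result (a sketch added here, in the same finite-difference style as the code cells above):
# numerical check: d/dx (3x + 2)/(2x - 1) should equal -7/(2x - 1)^2
def q(x):
    return (3*x + 2) / (2*x - 1.0)
x, dx = 2.0, 1e-6
print("numeric %.6f vs exact %.6f" % ((q(x + dx) - q(x)) / dx, -7.0 / (2*x - 1)**2))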
Example 2:
$y = \frac{3x + 2}{\sqrt{4x - 1}}$
$= \frac{3x + 2}{(4x - 1)^{\frac{1}{2}}}$
$= \frac{(4x - 1)^{\frac{1}{2}}(3) - (3x + 2)(\frac{1}{2}(4x - 1)^{-\frac{1}{2}}(4))}{4x - 1}$
$= \frac{3\sqrt{4x - 1} - \frac{2(3x + 2)}{\sqrt{4x - 1}}}{4x - 1}$
$= \frac{\frac{3(4x - 1) - 2(3x + 2)}{\sqrt{4x - 1}}}{4x - 1}$
$= \frac{3(4x - 1) - 2(3x + 2)}{(4x - 1)^\frac{3}{2}}$
$= \frac{6x - 7}{(4x - 1)^\frac{3}{2}}$
Second Derivatives
These are the derivatives of the derivatives.
They are useful for finding out the nature of stationary points, where $\frac{dy}{dx} = 0$
The same method works to calculate them.
End of explanation
"""
|
gte620v/PythonTutorialWithJupyter
|
exercises/Ex1-Dice_Simulation_empty.ipynb
|
mit
|
import random
def single_die():
"""Outcome of a single die roll"""
pass
"""
Explanation: Dice Simulation
In this exercise, we want to simulate the outcome of rolling dice. We will walk through several levels of building up functionality.
Single Die
Let's create a function that will return a random value between one and six that emulates the outcome of the roll of one die. Python has a random number package called random.
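One possible way to fill in the blank (a sketch for reference; the exercise intends you to write your own):
# sketch: random.randint includes both endpoints, so this returns a value from 1 to 6
def single_die_sketch():
    return random.randint(1, 6)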
End of explanation
"""
for _ in range(50):
print(single_die(),end=' ')
"""
Explanation: Check
To check our function, let's call it 50 times and print the output. We should see numbers between 1 and 6.
End of explanation
"""
def dice_roll(dice_count):
"""Outcome of a rolling dice_count dice
Args:
dice_count (int): number of dice to roll
Returns:
int: sum of face values of dice
"""
pass
"""
Explanation: Multiple Dice Roll
Now let's make a function that returns the sum of N 6-sided dice being rolled.
End of explanation
"""
for _ in range(100):
print(dice_roll(2), end=' ')
"""
Explanation: Check
Let's perform the same check with 100 values and make sure we see values in the range of 2 to 12.
End of explanation
"""
def dice_rolls(dice_count, rolls_count):
"""Return list of many dice rolls
Args:
dice_count (int): number of dice to roll
rolls_count (int): number of rolls to do
Returns:
list: list of dice roll values.
"""
pass
print(dice_rolls(2,100))
"""
Explanation: Capture the outcome of multiple rolls
Write a function that will return a list of values for many dice rolls
End of explanation
"""
import pylab as plt
%matplotlib inline
plt.rcParams['figure.figsize'] = (10, 4)
def dice_histogram(dice_count, rolls_count, bins):
"""Plots outcome of many dice rolls
Args:
dice_count (int): number of dice to roll
rolls_count (int): number of rolls to do
bins (int): number of histogram bins
"""
pass
dice_histogram(2, 10000, 200)
"""
Explanation: Plot Result
Make a function that plots the histogram of the dice values.
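One possible implementation sketch (kept self-contained so it does not depend on the unfinished stubs above):
# sketch: roll the dice in pure Python and plot a histogram of the totals
import random
import pylab as plt
def dice_histogram_sketch(dice_count, rolls_count, bins):
    totals = [sum(random.randint(1, 6) for _ in range(dice_count)) for _ in range(rolls_count)]
    plt.hist(totals, bins=bins)
    plt.show()
dice_histogram_sketch(2, 10000, 11)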
End of explanation
"""
dice_histogram(100, 10000, 200)
"""
Explanation: Aside
The outputs follow a binomial distribution. As the number of dice increases, the binomial distribution approaches a Gaussian distribution due to the Central Limit Theorem (CLT). Try making a histogram with 100 dice. The resulting plot is a "Bell Curve" that represents the Gaussian distribution.
End of explanation
"""
import time
start = time.time()
dice_histogram(100, 10000, 200)
print(time.time()-start, 'seconds')
"""
Explanation: Slow?
That seemed slow. How do we time it?
End of explanation
"""
import numpy as np
np.random.randint(1,7,(2,10))
"""
Explanation: Seems like a long time... Can we make it faster? Yes!
Optimize w/ Numpy
Using lots of loops in python is not usually the most efficient way to accomplish numeric tasks. Instead, we should use numpy. With numpy we can "vectorize" operations and under the hood numpy is doing the computation with C code that has a python interface. We don't have to worry about anything under the hood.
2-D Array of Values
Start by checking out numpy's randint function. Let's rewrite dice_rolls using numpy functions and no loops.
To do this, we are going to use np.random.randint to create a 2-D array of random dice rolls. That array will have dice_count rows and rolls_count columns--ie, the size of the array is (dice_count, rolls_count).
End of explanation
"""
np.sum(np.random.randint(1,7,(2,10)),axis=0)
"""
Explanation: The result is an np.array object, which is like a list, but better. The most notable difference is that we can do element-wise math operations on numpy arrays easily.
Column sum
To find the roll values, we need to sum up the 2-D array by each column.
End of explanation
"""
def dice_rolls_np(dice_count, rolls_count):
"""Return list of many dice rolls
Args:
dice_count (int): number of dice to roll
rolls_count (int): number of rolls to do
Returns:
np.array: list of dice roll values.
"""
pass
print(dice_rolls_np(2,100))
"""
Explanation: Let's use this knowledge to rewrite dice_rolls
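One possible vectorised version (a sketch, named differently so it is not mistaken for the official solution):
# sketch: build a (dice_count, rolls_count) array of faces and sum down each column
import numpy as np
def dice_rolls_np_sketch(dice_count, rolls_count):
    return np.sum(np.random.randint(1, 7, (dice_count, rolls_count)), axis=0)
print(dice_rolls_np_sketch(2, 10))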
End of explanation
"""
def dice_histogram_np(dice_count, rolls_count, bins):
"""Plots outcome of many dice rolls
Args:
dice_count (int): number of dice to roll
rolls_count (int): number of rolls to do
bins (int): number of histogram bins
"""
pass
start = time.time()
dice_histogram_np(100, 10000, 200)
print(time.time()-start, 'seconds')
"""
Explanation: Histogram and timeit
End of explanation
"""
%timeit dice_rolls_np(100, 1000)
%timeit dice_rolls(100, 1000)
"""
Explanation: That is way faster!
%timeit
Jupyter has a magic function to time function execution. Let's try that:
End of explanation
"""
def risk_battle():
"""Risk battle simulation"""
# get array of three dice values
attacking_dice = 0 # fixme
# get array of two dice values
defending_dice = 0 # fixme
# sort both sets and take top two values
attacking_dice_sorted = 0 # fixme
defending_dice_sorted = 0 # fixme
# are the attacking values greater?
# attack_wins = attacking_dice_sorted[:2] > defending_dice_sorted[:2]
# convert boolean values to -1, +1
# attack_wins_pm = attack_wins*2 - 1
# sum up these outcomes
return 0 # fixme
for _ in range(50):
print(risk_battle(), end=' ')
"""
Explanation: The improvement in the core function call was two orders of magnitude, but when we timed it initially, we were also waiting for the plot to render which consumed the majority of the time.
Risk Game Simulation
In Risk, two players roll dice in each battle to determine how many armies are lost on each side.
Here are the rules:
The attacking player rolls three dice
The defending player rolls two dice
The defending player wins dice ties
The dice are matched in sorted order
The outcome is a measure of the net increase in armies for the attacking player with values of -2, -1, 0, 1, 2
Let's make a function that simulates the outcome of one Risk battle and outputs the net score.
The functions we created in the first part of this tutorial are not useful for this task.
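One possible way to complete the fixme placeholders in the stub below (a sketch; your own solution may differ):
# sketch: sort each roll high-to-low, compare the top two dice, defender wins ties
import numpy as np
def risk_battle_sketch():
    attacking = np.sort(np.random.randint(1, 7, 3))[::-1]
    defending = np.sort(np.random.randint(1, 7, 2))[::-1]
    attack_wins = attacking[:2] > defending      # strict > means the defender wins ties
    return int(np.sum(attack_wins * 2 - 1))      # +1 per army the defender loses, -1 per army the attacker loses
print([risk_battle_sketch() for _ in range(20)])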
End of explanation
"""
outcomes = [risk_battle() for _ in range(10000)]
plt.hist(outcomes)
plt.show()
"""
Explanation: Histogram
Let's plot the histogram. Instead of making a function, let's just use list comprehension to make a list and then plot.
End of explanation
"""
np.mean([risk_battle() for _ in range(10000)])
"""
Explanation: Expected Margin
If we run many simulations, how many armies do we expect the attacker to be ahead by on average?
End of explanation
"""
|
afunTW/dsc-crawling
|
appendix_ptt/03_crawl_image.ipynb
|
apache-2.0
|
import requests
import re
import json
import os
from PIL import Image
from bs4 import BeautifulSoup, NavigableString
from pprint import pprint
ARTICLE_URL = 'https://www.ptt.cc/bbs/Gossiping/M.1538373690.A.72D.html'
"""
Explanation: Crawl all the images linked in the body of an article
You may run into the "Are you over 18?" confirmation page
Parse the structure of an article on ptt.cc/bbs
Crawl the article
Parse and confirm the image format
Download the images
URL https://www.ptt.cc/bbs/Gossiping/M.1538373690.A.72D.html
BACKUP https://afuntw.github.io/Test-Crawling-Website/pages/ptt/M.1538373690.A.72D.html
End of explanation
"""
resp = requests.get(ARTICLE_URL, cookies={'over18': '1'})
assert resp.status_code == 200
soup = BeautifulSoup(resp.text, 'lxml')
main_content = soup.find(id = 'main-content')
img_link = main_content.findAll('a', recursive=False)
pprint(img_link)
"""
Explanation: Crawl the article
End of explanation
"""
def check_and_download_img(url, savedir='download_img'):
image_resp = requests.get(url, stream=True)
image = Image.open(image_resp.raw)
filename = os.path.basename(url)
# check format
real_filename = '{}.{}'.format(
filename.split('.')[0],
image.format.lower()
)
print('check and fixed filename {} -> {}'.format(filename, real_filename))
# download
if not os.path.exists(savedir):
os.makedirs(savedir)
savepath = os.path.join(savedir, real_filename)
image.save(savepath)
print('save imag - {}'.format(savepath))
for tag in img_link:
check_and_download_img(tag['href'])
"""
Explanation: Check and download the images
End of explanation
"""
|
google/starthinker
|
colabs/ga_timeline.ipynb
|
apache-2.0
|
!pip install git+https://github.com/google/starthinker
"""
Explanation: 1. Install Dependencies
First install the libraries needed to execute recipes, this only needs to be done once, then click play.
End of explanation
"""
CLOUD_PROJECT = 'PASTE PROJECT ID HERE'
print("Cloud Project Set To: %s" % CLOUD_PROJECT)
"""
Explanation: 2. Get Cloud Project ID
To run this recipe requires a Google Cloud Project, this only needs to be done once, then click play.
End of explanation
"""
CLIENT_CREDENTIALS = 'PASTE CREDENTIALS HERE'
print("Client Credentials Set To: %s" % CLIENT_CREDENTIALS)
"""
Explanation: 3. Get Client Credentials
To read and write to various endpoints requires downloading client credentials, this only needs to be done once, then click play.
End of explanation
"""
FIELDS = {
'auth_read': 'user', # Credentials used for reading data.
'account_ids': [],
'dataset': '', # Dataset to be written to in BigQuery.
}
print("Parameters Set To: %s" % FIELDS)
"""
Explanation: 4. Enter Google Analytics Timeline Parameters
Download Google Analytics settings to a BigQuery table.
1. Enter the dataset to which the Google Analytics settings will be downloaded.
1. Add the starthinker service account email to the Google Analytics account(s) in which you are interested.
1. Schedule the recipe to run once a day.
Modify the values below for your use case, can be done multiple times, then click play.
End of explanation
"""
from starthinker.util.project import project
from starthinker.script.parse import json_set_fields
USER_CREDENTIALS = '/content/user.json'
TASKS = [
{
'ga_settings_download': {
'description': 'Will create tables with format ga_* to hold each endpoint via a call to the API list function.',
'auth': 'user',
'accounts': {'field': {'name': 'account_ids','kind': 'integer_list','order': 1,'default': []}},
'dataset': {'field': {'name': 'dataset','kind': 'string','order': 2,'default': '','description': 'Dataset to be written to in BigQuery.'}}
}
}
]
json_set_fields(TASKS, FIELDS)
project.initialize(_recipe={ 'tasks':TASKS }, _project=CLOUD_PROJECT, _user=USER_CREDENTIALS, _client=CLIENT_CREDENTIALS, _verbose=True, _force=True)
project.execute(_force=True)
"""
Explanation: 5. Execute Google Analytics Timeline
This does NOT need to be modified unless you are changing the recipe, click play.
End of explanation
"""
|
amueller/odscon-sf-2015
|
06 - Working With Text Data.ipynb
|
cc0-1.0
|
#! tar -xf data/aclImdb.tar.bz2 --directory data
from sklearn.datasets import load_files
reviews_train = load_files("data/aclImdb/train/")
text_train, y_train = reviews_train.data, reviews_train.target
print("Number of documents in training data: %d" % len(text_train))
print(np.bincount(y_train))
reviews_test = load_files("data/aclImdb/test/")
text_test, y_test = reviews_test.data, reviews_test.target
print("Number of documents in test data: %d" % len(text_test))
print(np.bincount(y_test))
from IPython.display import HTML
print(text_train[1])
HTML(text_train[1].decode("utf-8"))
print(y_train[1])
from sklearn.feature_extraction.text import CountVectorizer
cv = CountVectorizer()
cv.fit(text_train)
len(cv.vocabulary_)
print(cv.get_feature_names()[:50])
print(cv.get_feature_names()[50000:50050])
X_train = cv.transform(text_train)
X_train
print(text_train[19726])
X_train[19726].nonzero()[1]
X_test = cv.transform(text_test)
from sklearn.svm import LinearSVC
svm = LinearSVC()
svm.fit(X_train, y_train)
svm.score(X_train, y_train)
svm.score(X_test, y_test)
def visualize_coefficients(classifier, feature_names, n_top_features=25):
# get coefficients with large absolute values
coef = classifier.coef_.ravel()
positive_coefficients = np.argsort(coef)[-n_top_features:]
negative_coefficients = np.argsort(coef)[:n_top_features]
interesting_coefficients = np.hstack([negative_coefficients, positive_coefficients])
# plot them
plt.figure(figsize=(15, 5))
colors = ["red" if c < 0 else "blue" for c in coef[interesting_coefficients]]
plt.bar(np.arange(2 * n_top_features), coef[interesting_coefficients], color=colors)
feature_names = np.array(feature_names)
plt.subplots_adjust(bottom=0.3)
plt.xticks(np.arange(1, 1 + 2 * n_top_features), feature_names[interesting_coefficients], rotation=60, ha="right");
visualize_coefficients(svm, cv.get_feature_names())
svm = LinearSVC(C=0.001)
svm.fit(X_train, y_train)
visualize_coefficients(svm, cv.get_feature_names())
from sklearn.pipeline import make_pipeline
text_pipe = make_pipeline(CountVectorizer(), LinearSVC())
text_pipe.fit(text_train, y_train)
text_pipe.score(text_test, y_test)
from sklearn.grid_search import GridSearchCV
import time
start = time.time()
param_grid = {'linearsvc__C': np.logspace(-5, 0, 6)}
grid = GridSearchCV(text_pipe, param_grid, cv=5)
grid.fit(text_train, y_train)
print(time.time() - start)
grid.best_score_
def plot_grid_1d(grid_search_cv, ax=None):
if ax is None:
ax = plt.gca()
if len(grid_search_cv.param_grid.keys()) > 1:
raise ValueError("More then one parameter found. Can't do 1d plot.")
score_means, score_stds = zip(*[(np.mean(score.cv_validation_scores), np.std(score.cv_validation_scores))
for score in grid_search_cv.grid_scores_])
score_means, score_stds = np.array(score_means), np.array(score_stds)
parameters = next(grid_search_cv.param_grid.values().__iter__())
artists = []
artists.extend(ax.plot(score_means))
artists.append(ax.fill_between(range(len(parameters)), score_means - score_stds,
score_means + score_stds, alpha=0.2, color="b"))
ax.set_xticklabels(parameters)
plot_grid_1d(grid)
grid.best_params_
visualize_coefficients(grid.best_estimator_.named_steps['linearsvc'],
grid.best_estimator_.named_steps['countvectorizer'].get_feature_names())
grid.best_score_
grid.score(text_test, y_test)
"""
Explanation: Text Classification of Movie Reviews
Unpack data - this only works on linux and (maybe?) OS X. Unpack using 7zip on Windows.
End of explanation
"""
text_pipe = make_pipeline(CountVectorizer(), LinearSVC())
param_grid = {'linearsvc__C': np.logspace(-3, 2, 6),
"countvectorizer__ngram_range": [(1, 1), (1, 2)]}
grid = GridSearchCV(text_pipe, param_grid, cv=5)
grid.fit(text_train, y_train)
scores = np.array([score.mean_validation_score for score in grid.grid_scores_]).reshape(3, -1)
plt.matshow(scores)
plt.ylabel("n-gram range")
plt.yticks(range(3), param_grid["countvectorizer__ngram_range"])
plt.xlabel("C")
plt.xticks(range(6), param_grid["linearsvc__C"]);
plt.colorbar()
grid.best_params_
visualize_coefficients(grid.best_estimator_.named_steps['linearsvc'],
grid.best_estimator_.named_steps['countvectorizer'].get_feature_names())
grid.score(text_test, y_test)
"""
Explanation: N-Grams
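A tiny illustration (added here, not part of the original notebook) of what switching ngram_range from (1, 1) to (1, 2) adds to the vocabulary:
toy_docs = ["the movie was not good", "the movie was good"]
print(CountVectorizer(ngram_range=(1, 1)).fit(toy_docs).get_feature_names())
print(CountVectorizer(ngram_range=(1, 2)).fit(toy_docs).get_feature_names())
# the second vocabulary additionally contains bigrams such as 'not good' and 'was good'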
End of explanation
"""
|
ARM-software/lisa
|
ipynb/deprecated/examples/energy_meter/EnergyMeter_Gem5.ipynb
|
apache-2.0
|
from conf import LisaLogging
LisaLogging.setup()
# One initial cell for imports
import json
import logging
import os
from env import TestEnv
# Suport for FTrace events parsing and visualization
import trappy
from trappy.ftrace import FTrace
from trace import Trace
# Support for plotting
# Generate plots inline
%matplotlib inline
import numpy
import pandas as pd
import matplotlib.pyplot as plt
from env import TestEnv
# RTApp configurator for generation of PERIODIC tasks
from wlgen import RTA, Ramp
"""
Explanation: Energy Meter Examples
Energy estimations in Gem5
The Gem5EnergyMeter in Lisa uses the devlib Gem5PowerInstrument to extract energy information from the gem5 statistics file referenced in the energy meter field of the target configuration.
End of explanation
"""
# Root path of the gem5 workspace
base = "/home/vagrant/gem5"
conf = {
# Only 'linux' is supported by gem5 for now, 'android' is a WIP
"platform" : 'linux',
# Preload settings for a specific target
"board" : 'gem5',
# Devlib modules to load - "gem5stats" is required to use the power instrument
"modules" : ["cpufreq", "bl", "gem5stats"],
# Host that will run the gem5 instance
"host" : "workstation-lin",
"gem5" : {
# System to simulate
"system" : {
# Platform description
"platform" : {
# Gem5 platform description
# LISA will also look for an optional gem5<platform> board file
# located in the same directory as the description file.
"description" : os.path.join(base, "juno.py"),
"args" : [
"--power-model", # Platform-specific parameter enabling power modelling
"--juno-revision 2",
# Resume simulation from a previous checkpoint
# Checkpoint must be taken before Virtio folders are mounted
#"--checkpoint-indir /data/tmp/Results_LISA/gem5",
#"--checkpoint-resume 1",
]
},
# Kernel compiled for gem5 with Virtio flags
"kernel" : os.path.join(base, "product/", "vmlinux"),
# DTB of the system to simulate
"dtb" : os.path.join(base, "product/", "armv8_juno_r2.dtb"),
# Disk of the distrib to run
"disk" : os.path.join(base, "aarch64-ubuntu-trusty-headless.img")
},
# gem5 settings
"simulator" : {
# Path to gem5 binary
"bin" : os.path.join(base, "gem5/build/ARM/gem5.fast"),
# Args to be given to the binary
"args" : [
# Zilch
],
}
},
# Tools required by the experiments
"tools" : ['trace-cmd', 'sysbench', 'rt-app'],
# Output directory on host
"results_dir" : "gem5_res",
# Energy Meters configuration based on Gem5 stats
"emeter" : {
"instrument" : "gem5",
"conf" : {
# Zilch
},
# Each channel here must refer to a specific **power** field in the stats file.
'channel_map' : {
'Core0S' : 'system.cluster0.cores0.power_model.static_power',
'Core0D' : 'system.cluster0.cores0.power_model.dynamic_power',
'Core1S' : 'system.cluster0.cores1.power_model.static_power',
'Core1D' : 'system.cluster0.cores1.power_model.dynamic_power',
'Core2S' : 'system.cluster0.cores2.power_model.static_power',
'Core2D' : 'system.cluster0.cores2.power_model.dynamic_power',
'Core3S' : 'system.cluster0.cores3.power_model.static_power',
'Core3D' : 'system.cluster0.cores3.power_model.dynamic_power',
'Core4S' : 'system.cluster1.cores0.power_model.static_power',
'Core4D' : 'system.cluster1.cores0.power_model.dynamic_power',
'Core5S' : 'system.cluster1.cores1.power_model.static_power',
'Core5D' : 'system.cluster1.cores1.power_model.dynamic_power',
},
},
}
# This can take a lot of time ...
te = TestEnv(conf, wipe=True)
target = te.target
"""
Explanation: Target configuration
The target configuration is used to describe and configure your test environment. You can find more details in examples/utils/testenv_example.ipynb.
End of explanation
"""
# Create and RTApp RAMP tasks
rtapp = RTA(target, 'ramp', calibration=te.calibration())
rtapp.conf(kind='profile',
params={
'ramp1' : Ramp(
start_pct = 95,
end_pct = 5,
delta_pct = 10,
time_s = 0.1).get(),
'ramp2' : Ramp(
start_pct = 90,
end_pct = 30,
delta_pct = 20,
time_s = 0.2).get(),
})
"""
Explanation: Workload execution
For this example, we will investigate the energy consumption of the target while running a random workload. Our observations will be made using the RT-App decreasing ramp workload defined below. With such a workload, the system load goes from high to low over time. We expect to see a similar pattern in power consumption.
End of explanation
"""
# Start emeters & run workload
te.emeter.reset()
rtapp.run(out_dir=te.res_dir)
nrg_rep = te.emeter.report(te.res_dir)
"""
Explanation: Energy estimation
The gem5 energy meters feature two methods: reset and report.
* The reset method will start sampling the channels defined in the target configuration.
* The report method will stop the sampling and produce a CSV file containing power samples together with a JSON file summarizing the total energy consumption of each channel.
End of explanation
"""
logging.info("Measured channels energy:")
print json.dumps(nrg_rep.channels, indent=4)
logging.info("DataFrame of collected samples (only first 5)")
nrg_rep.data_frame.head()
# Obtain system level energy by ...
df = nrg_rep.data_frame
# ... summing the dynamic power of all cores to obtain total dynamic power, ...
df["total_dynamic"] = df[('system.cluster0.cores0.power_model.dynamic_power', 'power')] + \
df[('system.cluster0.cores1.power_model.dynamic_power', 'power')] + \
df[('system.cluster0.cores2.power_model.dynamic_power', 'power')] + \
df[('system.cluster0.cores3.power_model.dynamic_power', 'power')] + \
df[('system.cluster1.cores0.power_model.dynamic_power', 'power')] + \
df[('system.cluster1.cores1.power_model.dynamic_power', 'power')]
# ... summing the static power of all cores to obtain total static power and ...
df["total_static"] = df[('system.cluster0.cores0.power_model.static_power', 'power')] + \
df[('system.cluster0.cores1.power_model.static_power', 'power')] + \
df[('system.cluster0.cores2.power_model.static_power', 'power')] + \
df[('system.cluster0.cores3.power_model.static_power', 'power')] + \
df[('system.cluster1.cores0.power_model.static_power', 'power')] + \
df[('system.cluster1.cores1.power_model.static_power', 'power')]
# ... summing the dynamic and static powers
df["total"] = df["total_dynamic"] + df["total_static"]
logging.info("Plot of collected power samples")
axes =df[('total')].plot(figsize=(16,8),
drawstyle='steps-post');
axes.set_title('Power samples');
axes.set_xlabel('Time [s]');
axes.set_ylabel('Output power [W]');
"""
Explanation: Data analysis
End of explanation
"""
logging.info("Power distribution")
axes = df[('total')].plot(kind='hist', bins=32,
figsize=(16,8));
axes.set_title('Power Histogram');
axes.set_xlabel('Output power [W] buckets');
axes.set_ylabel('Samples per bucket');
logging.info("Plot of collected power samples")
nrg_rep.data_frame.describe(percentiles=[0.90, 0.95, 0.99]).T
# Don't forget to stop Gem5
target.disconnect()
"""
Explanation: We can see in the plot above that the system-level power consumption decreases over time (on average). This is consistent with the expected behaviour given the decreasing ramp workload under consideration.
End of explanation
"""
|
shinys825/lc_project
|
codes/LC_DataFrame(Cleaning)_descrete Variable.ipynb
|
mit
|
lc_data = pd.DataFrame.from_csv('./lc_dataframe(cleaning).csv')
lc_data = lc_data.reset_index()
lc_data.tail()
"""
Explanation: Pandas DataFrame
End of explanation
"""
x = lc_data['grade']
sns.distplot(x, color = 'r')
plt.show()
"""
Explanation: V4 grade (categorical data type)
LC assigned loan grade
A,B,C,D,E,F,G = {1, 2, 3, 4, 5, 6, 7}
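For reference, an encoding like this could have been produced with a simple mapping (a sketch; the raw letter labels and the raw_df name are assumptions, not taken from the cleaning script):
# sketch: map raw letter grades to the integers plotted above
grade_map = {'A': 1, 'B': 2, 'C': 3, 'D': 4, 'E': 5, 'F': 6, 'G': 7}
# raw_df['grade'].map(grade_map)  # hypothetical raw frame, shown for illustration only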
End of explanation
"""
x = lc_data['sub_grade']
sns.distplot(x, color = 'g')
plt.show()
"""
Explanation: V5 sub_grade (categorical data type)
LC assigned loan subgrade
1, 2, 3, 4, 5
End of explanation
"""
x = lc_data['emp_title']
plt.hist(x)
plt.show()
"""
Explanation: V6 emp_title (categorical data type)
The job title supplied by the Borrower when applying for the loan.*
True = 1, False = 0
End of explanation
"""
x = lc_data['emp_length']
sns.distplot(x, color = 'r')
plt.show()
"""
Explanation: V7 emp_length (categorical data type)
Employment length in years. Possible values are between 0 and 10 where 0 means less than one year and 10 means ten or more years.
< 1' = 0, 10+ = 10, 'n/a' = 11
End of explanation
"""
x = lc_data['home_ownership']
sns.distplot(x, color = 'g')
plt.show()
"""
Explanation: V8 home_ownership (categorical data type)
The home ownership status provided by the borrower during registration or obtained from the credit report. Our values are: RENT, OWN, MORTGAGE, OTHER
Mortgage, None, Other, Own, Rent = {1, 2, 3, 4, 5}
End of explanation
"""
x = lc_data['verification_status']
sns.distplot(x)
plt.show()
"""
Explanation: V10 verification_status (categorical data type)
Indicates if income was verified by LC, not verified, or if the income source was verified
Source Verified, Verified = 1, Not Verified = 0
End of explanation
"""
x = lc_data['issue_d']
sns.distplot(x, color = 'r')
plt.show()
"""
Explanation: V11 issue_d (categorical data type)
The month which the loan was funded
Converted to month number (mm)
End of explanation
"""
x = lc_data['purpose']
sns.distplot(x, color = 'g')
plt.show()
"""
Explanation: V14 purpose (categorical data type)
A category provided by the borrower for the loan request.
15 categories = {1:15} (from 1 to 15)
End of explanation
"""
x = lc_data['initial_list_status']
plt.hist(x)
plt.show()
"""
Explanation: V23 initial_list_status (categorical data type)
The initial listing status of the loan. Possible values are – W, F
W = 1, F = 0
End of explanation
"""
|
jay-johnson/sci-pype
|
red10/Red10-SPY-Multi-Model-Price-Forecast.ipynb
|
apache-2.0
|
from __future__ import print_function
import sys, os, requests, json, datetime
# Load the environment and login the user
from src.common.load_redten_ipython_env import user_token, user_login, csv_file, run_job, core, api_urls, ppj, rt_url, rt_user, rt_pass, rt_email, lg, good, boom, anmt, mark, ppj, uni_key, rest_login_as_user, rest_full_login, wait_for_job_to_finish, wait_on_job, get_job_analysis, get_job_results, get_analysis_manifest, get_job_cache_manifest, build_prediction_results, build_forecast_results, get_job_cache_manifest, search_ml_jobs, show_logs, show_errors, ipyImage, ipyHTML, ipyDisplay, pd, np
"""
Explanation: Predicting the SPY's Future Closing Price with a Multi-Model Forecast
Creating many machine learning models to predict future price movements from Redis.
How?
Uses pricing metrics (hlocv)
Streamline development and deployment of machine learning forecasts by storing large, pre-trained models living in Redis
Custom rolled dataset (takes about 7 hours per 1 ticker)
Technical indicators
Why?
Took too long to manually rebuild the dataset, and build + tune new models
Improve model accuracy by tracking success (situational/seasonal risks)
Wanted simple, consistent delivery of results
Service layer for abstracting model implementation
Multi-tenant, distributed machine learning cloud
Team needed Jupyter integration
Data security - so it had to run on-premise and cloud
Now it takes 30 minutes to build the dataset and 5 minutes to make new predictions
Sample SPY Multi-Model Forecast
Setup the Environment
Load the shared core, methods, and environment before starting processing
End of explanation
"""
# dataset name is the ticker
ds_name = "SPY"
# Label and description for job
title = str(ds_name) + " Forecast v5 - " + str(uni_key())
desc = "Forecast simulation - " + str(datetime.datetime.now().strftime("%Y-%m-%d %H:%M:%S"))
# Whats the algorithm model you want to use?
algo_name = "xgb-regressor"
# If your dataset is stored in redis, you can pass in the location
# to the dataset like: <redis endpoint>:<key>
rloc = ""
# If your dataset is stored in S3, you can pass in the location
# to the dataset like: <bucket>:<key>
sloc = ""
# During training what ratio of tests-vs-training do you want to use?
# Trade off smarts vs accuracy...how smart are we going?
test_ratio = 0.1
# Customize dataset samples used during the analysis using json dsl
sample_filter_rules = {}
# What column do you want to predict values?
target_column_name = "FClose"
# What columns can the algorithms use for training and learning?
feature_column_names = [ "FHigh", "FLow", "FOpen", "FClose", "FVolume" ]
# values in the Target Column
target_column_values = [ "GoodBuys", "BadBuys", "Not Finished" ]
# How many units ahead do you want to forecast?
units_ahead_set = [ 5, 10, 15, 20, 25, 30 ]
units_ahead_type = "Days"
# Prune non-int/float columns as needed:
ignore_features = [
"Ticker",
"Date",
"FDate",
"FPrice",
"DcsnDate",
"Decision"
]
# Set up the XGB parameter
# https://github.com/dmlc/xgboost/blob/master/doc/parameter.md
train_xgb = {
"learning_rate" : 0.20,
"num_estimators" : 50,
"sub_sample" : 0.20,
"col_sample_by_tree" : 0.90,
"col_sample_by_level" : 1.0,
"objective" : "reg:linear",
"max_depth" : 3,
"max_delta_step" : 0,
"min_child_weight" : 1,
"reg_alpha" : 0,
"reg_lambda" : 1,
"base_score" : 0.6,
"gamma" : 0,
"seed" : 42,
"silent" : True
}
# Predict new price points during the day
predict_row = {
"High" : 250.82,
"Low" : 245.54,
"Open" : 247.77,
"Close" : 246.24,
"Volume" : 77670266
}
"""
Explanation: Configure the job
End of explanation
"""
job_id = None # on success, this will store the actively running job's id
csv_file = ""
post_data = {
"predict_this_data" : predict_row,
"title" : title,
"desc" : desc,
"ds_name" : ds_name,
"target_column_name" : target_column_name,
"feature_column_names" : feature_column_names,
"ignore_features" : ignore_features,
"csv_file" : csv_file,
"rloc" : rloc,
"sloc" : sloc,
"algo_name" : algo_name,
"test_ratio" : test_ratio,
"target_column_values" : target_column_values,
"label_column_name" : target_column_name,
"prediction_type" : "Forecast",
"ml_type" : "Playbook-UnitsAhead",
"train" : train_xgb,
"tracking_type" : "",
"units_ahead_set" : units_ahead_set,
"units_ahead_type" : units_ahead_type,
"forecast_type" : "ETFPriceForecasting",
"sample_filters" : sample_filter_rules,
"predict_units_back" : 90, # how many days back should the final chart go?
"send_to_email" : [ "jay.p.h.johnson@gmail.com" ] # comma separated list
}
anmt("Running job: " + str(title))
auth_headers = {
"Content-type": "application/json",
"Authorization" : "JWT " + str(user_token)
}
job_response = run_job(post_data=post_data, headers=auth_headers)
if job_response["status"] != "valid":
boom("Forecast job failed with error=" + str(job_response["status"]))
else:
if "id" not in job_response["data"]:
boom("Failed to create new forecast job")
else:
job_id = job_response["data"]["id"]
job_status = job_response["data"]["status"]
lg("Started Forecast job=" + str(job_id) + " with current status=" + str(job_status))
# end of if job was valid or not
"""
Explanation: Start Forecasting
End of explanation
"""
job_data = {}
job_report = {}
# Should hook this up to a randomized image loader...
ipyDisplay(ipyImage(url="https://media.giphy.com/media/l397998l2DT0ogare/giphy.gif"))
job_res = {}
if job_id == None:
boom("Failed to start a new job")
else:
job_res = wait_on_job(job_id)
if job_res["status"] != "SUCCESS":
boom("Job=" + str(job_id) + " failed with status=" + str(job_res["status"]) + " err=" + str(job_res["error"]))
else:
job_data = job_res["record"]
anmt("Job Report:")
lg(ppj(job_data), 5)
# end of waiting
"""
Explanation: Wait for the job to finish
End of explanation
"""
job_report = {}
if job_id == None:
boom("Failed to start a new job")
else:
# Get the analysis, but do not auto-show the plots
job_report = get_job_analysis(job_id, show_plots=False)
if len(job_report) == 0:
boom("Job=" + str(job_id) + " failed")
else:
lg("")
# if the job failed
# end of get job analysis
# Build the forecast accuracy dictionary from the analysis
# and show the forecast dataframes
acc_results = build_forecast_results(job_report)
for col in acc_results:
col_node = acc_results[col]
predictions_df = col_node["predictions_df"]
date_predictions_df = col_node["date_predictions_df"]
train_predictions_df = col_node["train_predictions_df"]
lg("--------------------------------------------------")
# for all columns in the accuracy dictionary:
# successful predictions above 90%...how's that error rate though?
if col_node["accuracy"] > 0.90:
good("Column=" + str(col) + " accuracy=" + str(col_node["accuracy"]) + " mse=" + str(col_node["mse"]) + " num_predictions=" + str(len(col_node["date_predictions_df"].index)))
# successful predictions between 90% and 80%...how's that error rate though?
elif 0.90 > col_node["accuracy"] > 0.80:
lg("Column=" + str(col) + " accuracy=" + str(col_node["accuracy"]) + " mse=" + str(col_node["mse"]) + " num_predictions=" + str(len(col_node["date_predictions_df"].index)))
else:
boom("Column=" + str(col) + " is not very accurate: accuracy=" + str(col_node["accuracy"]) + " mse=" + str(col_node["mse"]) + " num_predictions=" + str(len(col_node["predictions_df"].index)))
# end of header line
# show the timeseries forecast
ipyDisplay(date_predictions_df)
lg("")
# end of showing prediction results
"""
Explanation: Get Forecast Accuracies
End of explanation
"""
job_res = get_job_analysis(job_id, show_plots=True)
"""
Explanation: Get the Analysis Images
End of explanation
"""
user_token = user_login(rt_user, rt_pass, rt_url)
auth_headers = {
"Authorization" : "JWT " + str(user_token)
}
resource_url = rt_url + "/ml/run/"
query_params = {}
post_data = {}
# Get the ML Job
resource_url = rt_url + "/ml/jobs/"
lg("Running Get ML Job url=" + str(resource_url), 6)
get_response = requests.get(resource_url, params=query_params, data=post_data, headers=auth_headers)
if get_response.status_code != 201 and get_response.status_code != 200:
lg("Failed with GET Response Status=" + str(get_response.status_code) + " Reason=" + str(get_response.reason), 0)
lg("Details:\n" + str(get_response.text) + "\n", 0)
else:
lg("SUCCESS - GET Response Status=" + str(get_response.status_code) + " Reason=" + str(get_response.reason)[0:10], 5)
as_json = True
record = {}
if as_json:
record = json.loads(get_response.text)
lg(ppj(record))
# end of post for running an ML Job
"""
Explanation: Get the Recent Machine Learning Jobs
End of explanation
"""
job_manifest = get_job_cache_manifest(job_report)
lg(ppj(job_manifest))
"""
Explanation: Redis Machine Learning Manifest
Jobs use a manifest to prevent concurrent in-flight jobs and their models from colliding across users and historical machine learning jobs
A manifest contains:
A dictionary of Redis model locations
S3 archival locations
Tracking data for import and export across environments
Decoupled large model files (8gb files in S3) from the tracking and deployment
End of explanation
"""
|
saashimi/code_guild
|
wk9/notebooks/.ipynb_checkpoints/ch4.What Are We Doing With All These Tests?-checkpoint.ipynb
|
mit
|
%cd ../testing/superlists/
!python3 functional_tests.py
"""
Explanation: Using Selenium to Test User Interactions
Where were we at the end of the last chapter? Let’s rerun the test and find out:
End of explanation
"""
%%writefile functional_tests.py
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
import unittest
class NewVisitorTest(unittest.TestCase):
def setUp(self):
self.browser = webdriver.Firefox()
self.browser.implicitly_wait(3)
def tearDown(self):
self.browser.quit()
def test_can_start_a_list_and_retrieve_it_later(self):
# Edith has heard about a cool new online to-do app. She goes
# to check out its homepage
self.browser.get('http://localhost:8000')
# She notices the page title and header mention to-do lists
self.assertIn('To-Do', self.browser.title)
header_text = self.browser.find_element_by_tag_name('h1').text
self.assertIn('To-Do', header_text)
# She is invited to enter a to-do item straight away
inputbox = self.browser.find_element_by_id('id_new_item')
self.assertEqual(
inputbox.get_attribute('placeholder'),
'Enter a to-do item'
)
# She types "Buy peacock feathers" into a text box (Edith's hobby)
# is tying fly-fishing lures)
inputbox.send_keys('Buy peacock feathers')
# When she hits enter, the page updates, and now the page lists
# "1: Buy peacock feathers" as an item in a to-do list table
inputbox.send_keys(Keys.ENTER)
table = self.browser.find_element_by_id('id_list_table')
rows = table.find_elements_by_tag_name('tr')
self.assertTrue(
any(row.text == '1: Buy peacock feathers' for row in rows)
)
# There is still a text box inviting her to add another item. She
# enters "Use peacock feathers to make a fly" (Edith is very
# methodical)
self.fail('Finish the test!')
# The page updates again, and now shows both items on her list
# Edith wonders whether the site will remember her list. Then she sees
# that the site has generated a unique URL for her -- there is some
# explanatory text to that effect.
# She visits that URL - her to-do list is still there.
# Satisfied, she goes back to sleep
if __name__ == '__main__':
unittest.main(warnings='ignore')
"""
Explanation: Did you try it, and get an error saying Problem loading page or Unable to connect? So did I. It’s because we forgot to spin up the dev server first using manage.py runserver. Do that, and you’ll get the failure message we’re after.
One of the great things about TDD is that you never have to worry about forgetting what to do next—just rerun your tests and they will tell you what you need to work on.
“Finish the test”, it says, so let’s do just that! Open up functional_tests.py and we’ll extend our FT:
End of explanation
"""
!python3 functional_tests.py
"""
Explanation: We’re using several of the methods that Selenium provides to examine web pages: find_element_by_tag_name, find_element_by_id, and find_elements_by_tag_name (notice the extra s, which means it will return several elements rather than just one). We also use send_keys, which is Selenium’s way of typing into input elements. You’ll also see the Keys class (don’t forget to import it), which lets us send special keys like Enter, but also modifiers like Ctrl.
Watch out for the difference between the Selenium find_element_by... and find_elements_by... functions. One returns an element, and raises an exception if it can’t find it, whereas the other returns a list, which may be empty.
Also, just look at that any function. It’s a little-known Python built-in. I don’t even need to explain it, do I? Python is such a joy.
Although, if you’re one of my readers who doesn’t know Python, what’s happening inside the any is a generator expression, which is like a list comprehension but awesomer. You need to read up on this. If you Google it, you’ll find Guido himself explaining it nicely. Come back and tell me that’s not pure joy!
Let’s see how it gets on:
End of explanation
"""
!python3 manage.py test
"""
Explanation: Decoding that, the test is saying it can’t find an <h1> element on the page. Let’s see what we can do to add that to the HTML of our home page.
Big changes to a functional test are usually a good thing to commit on their own. I failed to do so in my first draft, and I regretted it later when I changed my mind and had the change mixed up with a bunch of others. The more atomic your commits, the better:
$ git diff # should show changes to functional_tests.py
$ git commit -am "Functional test now checks we can input a to-do item"
The "Don't Test Constants" Rule, and Templates to the Rescue
Let’s take a look at our unit tests, lists/tests.py. Currently we’re looking for specific HTML strings, but that’s not a particularly efficient way of testing HTML. In general, one of the rules of unit testing is Don’t test constants, and testing HTML as text is a lot like testing a constant.
In other words, if you have some code that says:
wibble = 3
There’s not much point in a test that says:
from myprogram import wibble
assert wibble == 3
Unit tests are really about testing logic, flow control, and configuration. Making assertions about exactly what sequence of characters we have in our HTML strings isn’t doing that.
What’s more, mangling raw strings in Python really isn’t a great way of dealing with HTML. There’s a much better solution, which is to use templates. Quite apart from anything else, if we can keep HTML to one side in a file whose name ends in .html, we’ll get better syntax highlighting! There are lots of Python templating frameworks out there, and Django has its own which works very well. Let’s use that.
Refactoring to Use a Template
What we want to do now is make our view function return exactly the same HTML, but just using a different process. That’s a refactor—when we try to improve the code without changing its functionality.
That last bit is really important. If you try and add new functionality at the same time as refactoring, you’re much more likely to run into trouble. Refactoring is actually a whole discipline in itself, and it even has a reference book: Martin Fowler’s Refactoring.
The first rule is that you can’t refactor without tests. Thankfully, we’re doing TDD, so we’re way ahead of the game. Let’s check our tests pass; they will be what makes sure that our refactoring is behaviour preserving:
End of explanation
"""
!mkdir lists/templates
%%writefile lists/templates/home.html
<html>
<title>To-Do lists</title>
</html>
"""
Explanation: Great! We’ll start by taking our HTML string and putting it into its own file. Create a directory called lists/templates to keep templates in, and then open a file at lists/templates/home.html, to which we’ll transfer our HTML:
End of explanation
"""
%%writefile lists/views.py
from django.shortcuts import render
def home_page(request):
return render(request, 'home.html')
"""
Explanation: Now to change our views
End of explanation
"""
!python3 manage.py test
"""
Explanation: Instead of building our own HttpResponse, we now use the Django render function. It takes the request as its first parameter (for reasons we’ll go into later) and the name of the template to render. Django will automatically search folders called templates inside any of your apps' directories. Then it builds an HttpResponse for you, based on the content of the template.
Templates are a very powerful feature of Django’s, and their main strength consists of substituting Python variables into HTML text. We’re not using this feature yet, but we will in future chapters. That’s why we use render and (later) render_to_string rather than, say, manually reading the file from disk with the built-in open.
Let's see if it works:
End of explanation
"""
...
# Application definition
INSTALLED_APPS = (
'django.contrib.admin',
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.messages',
'django.contrib.staticfiles',
'lists',
)
...
"""
Explanation: Another chance to analyse a traceback:
We start with the error: it can’t find the template.
Then we double-check what test is failing: sure enough, it’s our test of the view HTML.
Then we find the line in our tests that caused the failure: it’s when we call the home_page function.
Finally, we look for the part of our own application code that caused the failure: it’s when we try and call render.
So why can’t Django find the template? It’s right where it’s supposed to be, in the lists/templates folder.
The thing is that we haven’t yet officially registered our lists app with Django. Unfortunately, just running the startapp command and having what is obviously an app in your project folder isn’t quite enough. You have to tell Django that you really mean it, and add it to settings.py as well. Belt and braces. Open it up and look for a variable called INSTALLED_APPS, to which we’ll add lists:
End of explanation
"""
!python3 manage.py test
"""
Explanation: You can see there’s lots of apps already in there by default. We just need to add ours, lists, to the bottom of the list. Don’t forget the trailing comma—it may not be required, but one day you’ll be really annoyed when you forget it and Python concatenates two strings on different lines…
Now we can try running the tests again:
End of explanation
"""
%%writefile lists/tests.py
from django.core.urlresolvers import resolve
from django.test import TestCase
from django.http import HttpRequest
from lists.views import home_page
class HomePageTest(TestCase):
def test_root_url_resolves_to_home_page_view(self):
found = resolve('/')
self.assertEqual(found.func, home_page)
def test_home_page_returns_correct_html(self):
request = HttpRequest()
response = home_page(request)
self.assertTrue(response.content.strip().startswith(b'<html>')) #<---- Fix offending newline here
self.assertIn(b'<title>To-Do lists</title>', response.content)
self.assertTrue(response.content.strip().endswith(b'</html>')) #<---- Fix offending newline here
!python3 manage.py test
"""
Explanation: Darn, not quite.
Depending on whether your text editor insists on adding newlines to the end of files, you may not even see this error. If so, you can safely ignore the next bit, and skip straight to where you can see the listing says OK.
But it did get further! It seems it’s managed to find our template, but the last of the three assertions is failing. Apparently there’s something wrong at the end of the output. I had to do a little print(repr(response.content)) to debug this, but it turns out that the switch to templates has introduced an additional newline (\n) at the end. We can get them to pass like this:
End of explanation
"""
# %load lists/tests.py
from django.core.urlresolvers import resolve
from django.test import TestCase
from django.http import HttpRequest
from django.template.loader import render_to_string
from lists.views import home_page
class HomePageTest(TestCase):
def test_root_url_resolves_to_home_page_view(self):
found = resolve('/')
self.assertEqual(found.func, home_page)
    # (The earlier byte-by-byte assertions on response.content are superseded
    # by the template comparison in the test below.)
def test_home_page_returns_correct_html(self):
request = HttpRequest()
response = home_page(request)
expected_html = render_to_string('home.html')
self.assertEqual(response.content.decode(), expected_html)
"""
Explanation: Our refactor of the code is now complete, and the tests mean we’re happy that behaviour is preserved. Now we can change the tests so that they’re no longer testing constants; instead, they should just check that we’re rendering the right template. Another Django helper function called render_to_string is our friend here:
End of explanation
"""
%%writefile lists/templates/home.html
<html>
<head>
<title>To-Do lists</title>
</head>
<body>
<h1>Your To-Do list</h1>
</body>
</html>
"""
Explanation: We use .decode() to convert the response.content bytes into a Python unicode string, which allows us to compare strings with strings, instead of bytes with bytes as we did earlier.
The main point, though, is that instead of testing constants we’re testing our implementation. Great!
Django has a test client with tools for testing templates, which we’ll use in later chapters. For now we’ll use the low-level tools to make sure we’re comfortable with how everything works. No magic!
On Refactoring
That was an absolutely trivial example of refactoring. But, as Kent Beck puts it in Test-Driven Development: By Example, "Am I recommending that you actually work this way? No. I’m recommending that you be able to work this way."
In fact, as I was writing this my first instinct was to dive in and change the test first—make it use the render_to_string function straight away, delete the three superfluous assertions, leaving just a check of the contents against the expected render, and then go ahead and make the code change. But notice how that actually would have left space for me to break things: I could have defined the template as containing any arbitrary string, instead of the string with the right <html> and <title> tags.
When refactoring, work on either the code or the tests, but not both at once.
There’s always a tendency to skip ahead a couple of steps, to make a couple of tweaks to the behaviour while you’re refactoring, but pretty soon you’ve got changes to half a dozen different files, you’ve totally lost track of where you are, and nothing works any more. If you don’t want to end up like Refactoring Cat (below), stick to small steps; keep refactoring and functionality changes entirely separate.
We’ll come across “Refactoring Cat” again during this book, as an example of what happens when we get carried away and want to change too many things at once. Think of it as the little cartoon demon counterpart to the Testing Goat, popping up over your other shoulder and giving you bad advice…
It’s a good idea to do a commit after any refactoring:
$ git status # see tests.py, views.py, settings.py, + new templates folder
$ git add . # will also add the untracked templates folder
$ git diff --staged # review the changes we're about to commit
$ git commit -m "Refactor home page view to use a template"
A Little More of Our Front Page
In the meantime, our functional test is still failing. Let’s now make an actual code change to get it passing. Because our HTML is now in a template, we can feel free to make changes to it, without needing to write any extra unit tests. We wanted an <h1>:
End of explanation
"""
!python3 functional_tests.py
"""
Explanation: Let’s see if our functional test likes it a little better:
End of explanation
"""
|
kambysese/mnefun
|
examples/funloc/mnefun-demo.ipynb
|
bsd-3-clause
|
import mnefun
from score import score
import numpy as np
try:
# Use niprov as handler for events if it's installed
from niprov.mnefunsupport import handler
except ImportError:
handler = None
"""
Explanation: Funloc experiment
The experiment was a simple audio/visual oddball detection task. One
potential purpose would be e.g. functional localization of auditory and
visual cortices.
Imports
import statements find the necessary or useful modules (Python file with some functions or variables in it), load and initialize them if necessary and
define alias(es) in the local namespace for the scope where the statement occurs. Through the import system Python code in one module gains access to the code in another module.
End of explanation
"""
params = mnefun.Params(tmin=-0.2, tmax=0.5, t_adjust=-4e-3,
n_jobs=6, bmin=-0.2, bmax=None,
decim=5, proj_sfreq=200, filter_length='5s')
"""
Explanation: Provenance
Niprov is a python program that uses meta-data to create, store and publish provenance for brain imaging files.
Study parameters
We begin by defining the processing parameters relevant to the study using the mnefun.Params class object. We gain access to the variables in params using the (dot) operator.
Note shift+tab invokes module documentation in the notebook.
End of explanation
"""
dir(params)
"""
Explanation: The above statement defines a variable params that is bound to an mnefun.Params instance, giving it access to all the attributes and methods associated with that class. To see the attributes of an object in Python you can do...
End of explanation
"""
params.subjects = ['subj_01', 'subj_02']
params.structurals = [None, 'AKCLEE_110_slim'] # None means use sphere
params.dates = [(2014, 2, 14), None] # Use "None" to more fully anonymize
params.subject_indices = [0] # Define which subjects to run
params.plot_drop_logs = True # Turn off so plots do not halt processing
params.on_process = handler # Set the niprov handler to deal with events:
"""
Explanation: For now params is initialized with the arguments given above, along with default values for all other variables relevant to MEG data preprocessing.
- tmin & tmax define epoching interval
- t_adjust adjusts for delays in the event trigger in units ms
- n_jobs defines number of CPU jobs to use during parallel operations
- bmin & bmax define baseline interval such that (-0.2, None) translates to DC offset correction for the baseline interval during averaging
- decim decimation factor used to downsample the data after filtering when epoching
- filter_length Filter length to use in FIR filtering
- proj_sfreq The sample freq to use for calculating projectors. Useful since
time points are not independent following low-pass. Also saves
computation to downsample
Note: To use the NVIDIA parallel computing platform (CUDA), set params.n_jobs_fir='CUDA' and params.n_jobs_resample='CUDA'. This requires working CUDA development applications and other dependencies. See the mne-python installation instructions
for further information.
Otherwise set n_jobs_xxx > 1 to speed up resampling and filtering operations by multi-core parallel processing.
Next we define list variables that determine...
- subjects list of subject identifiers
- structurals list of identifiers pointing to the FreeSurfer subject directory containing MRI data. Here None means missing MRI data, so the inverse operation is done using a spherical head model with a best-fit sphere aligned with the subject's head shape
- dates list of None or arbitrary date values as tuple type used for anonymizing subject's data
All list variables in params have a one-to-one correspondence and are used for indexing purposes, thus
assertion statements are used to check e.g. list lengths are equal.
Subjects
End of explanation
"""
params.acq_ssh = 'kambiz@minea.ilabs.uw.edu' # Should also be "you@minea.ilabs.uw.edu"
# Pass list of paths to search and fetch raw data
params.acq_dir = ['/sinuhe_data01/eric_non_space',
'/data101/eric_non_space',
'/sinuhe/data01/eric_non_space',
'/sinuhe/data02/eric_non_space',
'/sinuhe/data03/eric_non_space']
# Set parameters for remotely connecting to SSS workstation ('sws')
params.sws_ssh = 'kam@kasga.ilabs.uw.edu' # Should also be "you@kasga.ilabs.uw.edu"
params.sws_dir = '/data07/kam/sandbox'
"""
Explanation: Remote connections
Set parameters for remotely connecting to acquisition minea.ilabs.uw.edu and Neuromag processing kasga.ilabs.uw.edu machines.
End of explanation
"""
params.run_names = ['%s_funloc']
params.get_projs_from = np.arange(1)
params.inv_names = ['%s']
params.inv_runs = [np.arange(1)]
params.cov_method = 'shrunk' # Cleaner noise covariance regularization
params.runs_empty = ['%s_erm']
"""
Explanation: File names
Next we define:
- run_names string identifier used in naming acquisition runs, e.g. '%s_funloc' expands to '<subject>_funloc', where the '%s' prefix is replaced by the subject ID
- get_projs_from number of acquisition runs to use to build SSP projections for filtered data
- inv_names prefix string to append to inverse operator file(s)
- inv_runs number of acquisition runs to use to build inverse operator for filtered data
- cov_method covariance calculation method
- runs_empty name format of empty room recordings if any
End of explanation
"""
params.reject = dict(grad=3500e-13, mag=4000e-15)
params.flat = dict(grad=1e-13, mag=1e-15)
"""
Explanation: Trial rejection criteria
Use reject and flat dictionaries to pass noisy channel criteria to mne.Epochs during the epoching procedure. The noisy channel criteria are used to reject trials in which any gradiometer, magnetometer, or EEG channel exceeds the given criterion for that channel type, or is flat during the epoching interval.
End of explanation
"""
params.proj_nums = [[1, 1, 0], # ECG
[1, 1, 2], # EOG
[0, 0, 0]] # Continuous (from ERM)
"""
Explanation: Projections
Here we define number of SSP projectors as a list of lists. The individual lists are used to define PCA projections computed for the electric signature from the heart and eyes, and also the ERM noise. Each projections list is a 1-by-3 row vector with columns corresponding to the number of PCA components for Grad/Mag/EEG channel types.
End of explanation
"""
params.sss_type = 'python'
"""
Explanation: SSS Denoising
Next we set up for the SSS filtering method to use either Maxfilter or MNE. Regardless of the argument, in MNEFUN we use default Maxfilter parameter values for SSS. Users should consult the Maxfilter manual or see mne.preprocessing.maxwell_filter for more information on argument values; with the minimal invoke below the default Maxfilter arguments for SSS & tSSS, along with movement compensation is executed.
End of explanation
"""
params.score = score # Scoring function used to slice data into trials
# The scoring function needs to produce an event file with these values
params.in_numbers = [10, 11, 20, 21]
# Those values correspond to real categories as:
params.in_names = ['Auditory/Standard', 'Visual/Standard',
'Auditory/Deviant', 'Visual/Deviant']
"""
Explanation: Recommended SSS denoising arguments for data from children:
sss_regularize = 'svd' # SSS basis regularization type
tsss_dur = 4. # Buffer duration (in seconds) for spatiotemporal SSS/tSSS
int_order = 6 # Order of internal component of spherical expansion
st_correlation = .9 # Correlation limit between inner and outer SSS subspaces
trans_to = (0, 0, .03) # The destination location for the head
Conditioning
End of explanation
"""
# -*- coding: utf-8 -*-
# Copyright (c) 2014, LABS^N
# Distributed under the (new) BSD License. See LICENSE.txt for more info.
"""
----------------
Score experiment
----------------
This sample scoring script shows how to convert the serial binary stamping
from expyfun into meaningful event numbers using mnefun, and then write
out the data to the location mnefun expects.
"""
from __future__ import print_function
import os
import numpy as np
from os import path as op
import mne
from mnefun import extract_expyfun_events
# Original coding used 8XX8 to code event types, here we translate to
# a nicer decimal scheme
_expyfun_dict = {
10: 10, # 8448 (9) + 1 = 10: auditory std, recode as 10
12: 11, # 8488 (11) + 1 = 12: visual std, recode as 11
14: 20, # 8848 (13) + 1 = 14: auditory dev, recode as 20
16: 21, # 8888 (15) + 1 = 16: visual dev, recode as 21
}
def score(p, subjects):
"""Scoring function"""
for subj in subjects:
print(' Running subject %s... ' % subj, end='')
# Figure out what our filenames should be
out_dir = op.join(p.work_dir, subj, p.list_dir)
if not op.isdir(out_dir):
os.mkdir(out_dir)
for run_name in p.run_names:
fname = op.join(p.work_dir, subj, p.raw_dir,
(run_name % subj) + p.raw_fif_tag)
events, presses = extract_expyfun_events(fname)[:2]
for ii in range(len(events)):
events[ii, 2] = _expyfun_dict[events[ii, 2]]
fname_out = op.join(out_dir,
'ALL_' + (run_name % subj) + '-eve.lst')
mne.write_events(fname_out, events)
# get subject performance
devs = (events[:, 2] % 2 == 1)
has_presses = np.array([len(pr) > 0 for pr in presses], bool)
n_devs = np.sum(devs)
hits = np.sum(has_presses[devs])
fas = np.sum(has_presses[~devs])
misses = n_devs - hits
crs = (len(devs) - n_devs) - fas
print('HMFC: %s, %s, %s, %s' % (hits, misses, fas, crs))
# Define how to translate the above event types into evoked files
params.analyses = [
'All',
'AV',
]
params.out_names = [
['All'],
params.in_names,
]
params.out_numbers = [
[1, 1, 1, 1], # Combine all trials
params.in_numbers, # Leave events split the same way they were scored
]
params.must_match = [
[],
[0, 1], # Only ensure the standard event counts match
]
"""
Explanation: Scoring function for MNEFUN example data
If a scoring function i.e., score.py file exists then it must be imported and bound to params.score in order to handle trigger events in the .fif file as desired. The scoring function is used to extract trials from the filtered data. Typically the scoring function uses mne.find_events or mnefun.extract_expyfun_events to find events on the trigger line(s) in the raw .fif file.
End of explanation
"""
mnefun.do_processing(
params,
fetch_raw=True, # Fetch raw recording files from acquisition machine
do_score=False, # Do scoring to slice data into trials
push_raw=False, # Push raw files and SSS script to SSS workstation
do_sss=False, # Run SSS remotely (on sws) or locally with mne-python
fetch_sss=False, # Fetch SSSed files from SSS workstation
do_ch_fix=False, # Fix channel ordering
gen_ssp=False, # Generate SSP vectors
apply_ssp=False, # Apply SSP vectors and filtering
plot_psd=False, # Plot raw data power spectra
write_epochs=False, # Write epochs to disk
gen_covs=False, # Generate covariances
gen_fwd=False, # Generate forward solutions (and src space if needed)
gen_inv=False, # Generate inverses
gen_report=False, # Write mne report html of results to disk
print_status=True, # Print completeness status update
)
"""
Explanation: Execution
Set what processing steps will execute...
End of explanation
"""
|
gansanay/datascience-theoryinpractice
|
statistics-theoryinpractice/01_DiscreteProbabilityDistributions.ipynb
|
mit
|
N = 6
xk = np.arange(1,N+1)
fig, ax = plt.subplots(1, 1)
ax.plot(xk, sps.randint.pmf(xk, xk[0], 1+xk[-1]), 'ro', ms=12, mec='r')
ax.vlines(xk, 0, sps.randint.pmf(xk, xk[0], 1+xk[-1]), colors='r', lw=4)
plt.show()
"""
Explanation: Discrete probability distributions
Rigorous definitions of discrete probability laws and discrete random variables are provided in part 00. From reading that part, or from your own education, you should by now know what the probability mass function and the cumulative distribution function are for a discrete probability law.
Bernoulli law
Uniform law
Let $N \in \mathbb{N}$ and let $\mathcal{A} = \{a_1, \dots, a_N\}$ be a set of atoms; the discrete uniform law is written as:
\begin{equation}
p(\{a_i\}) = \frac{1}{N}
\end{equation}
For example, with $N=6$, let's take the set of integers $\{1,2,3,4,5,6\}$:
End of explanation
"""
xk = np.arange(0,N+1)
fig, ax = plt.subplots(1, 1)
for i in np.arange(0,N+1):
y=sps.randint.cdf(i, xk[1], 1+xk[-1])
l = mlines.Line2D([i,i+1], [y,y], color='r')
ax.add_line(l)
ax.plot(xk[1:], sps.randint.cdf(xk[1:], xk[1], 1+xk[-1]), 'ro', ms=12, mec='r')
ax.plot(xk[:-1]+1, sps.randint.cdf(xk[:-1], xk[1], 1+xk[-1]), 'ro', ms=13, mec='r', mfc='r') # Ugly hack to get white circles with red edges
ax.plot(xk[:-1]+1, sps.randint.cdf(xk[:-1], xk[1], 1+xk[-1]), 'ro', ms=12, mec='r', mfc='w') #
ax.set()
plt.show()
"""
Explanation: The cumulative distribution function of a uniform discrete distribution is defined $\forall k \in [a,b]$ as :
$$F(k;a,b) = \frac{\lfloor k \rfloor-a+1}{b-a+1}$$
In our example it looks like this:
End of explanation
"""
|
graemeglass/pandas-intro
|
pandas-intro.ipynb
|
apache-2.0
|
from IPython.display import Image
Image(url='panda1.jpg')
"""
Explanation: Intro to Pandas
http://pandas.pydata.org/
End of explanation
"""
import pandas as pd
# a scalar value
pd.Series(1)
a = pd.Series([1,2,3,6,7,9])
print(a)
# Accessing elements
# Index look up, Element 0th, 1st element
print(a[0])
# Using a mask !We'll be coming back to this, it's a biggy
a[a > 6]
a = pd.Series(range(4), index=('a', 'b', 'c', 'd'))
print(a)
a['c']
data = {'a' : 0., 'b' : 1., 'c' : 2.}
a = pd.Series(data)
print(a)
print(a.b)
print(a['c'])
print(a[1:])
#Why no error? I thought series where 'homogeneous'
pd.Series(['1', 3, 'c'])
"""
Explanation: Contents
What is Pandas
Why Pandas
Datatypes
Getting data into Pandas
Merging dataframes
Getting subsets of your data (slicing, etc)
Plotting
What is Pandas
"...fast, easy-to-use data wrangling and statistical computing tool..."
I like to think of it as a dict-like object that can be queried in an additional, SQL-y kind of way.
A bit like a Django Model instance.
Created by Wes McKinney in 2007.
Built on top of NumPy
DataFrame heavily influenced by R DataFrame
Why Pandas
Because R
Datatypes
Series: 1D labeled homogeneously-typed array [1,2,3,4,5]
DataFrame: General 2D labeled, size-mutable tabular structure with potentially heterogeneously-typed columns
Panel: General 3D labeled, also size-mutable array (not going to be covered in this talk)
'...The best way to think about the pandas data structures is as flexible containers for lower dimensional data. For example, DataFrame is a container for Series, and Panel is a container for DataFrame objects. We would like to be able to insert and remove objects from these containers in a dictionary-like fashion.'
Series
End of explanation
"""
import numpy as np
a = pd.Series(np.random.randn(5))
print(a)
a.sum()
a.median()
a.count()
"""
Explanation: They are :-) Look at the dtype, it's all the same type, an object type
Pandas is built on top of numpy, we should look into that...
End of explanation
"""
a.append(3) # Error
"""
Explanation: Series acts very similarly to a ndarray, and is a valid argument to most NumPy functions
End of explanation
"""
a = a.append(pd.Series([99,100,22], index=('what', 'the', 'magic')))
a
a.magic
a.index
"""
Explanation: Told you they were homogeneous
End of explanation
"""
users = [(1, "Jean-Luc", "Picard", "Enterprise", "locutus.2366@enterprise.subspace"),
(2, "Geordi", "La Forge", "Enterprise", "reading.rainbow@enterprise.subspace"),
(3, "Kathryn", "Janeway", "Voyager", "cap.delta.q@voyager.subspace"),
(4, "B'Elanna", "Torres", "Voyager", "warp.drives.rule@voyager.subspace"),
(5, "Benjamin", "Sisko", "DS9", "shut.up.quark@ds9.subspace"),
(6, "Kira", "Nerys", "DS9", "cardassian.scum@ds9.subspace")
]
users = pd.DataFrame(users, columns=['id', 'first_name', 'last_name', 'ship', 'email'])
users
users.set_index('id')
import csv
from uuid import uuid4
from datetime import datetime, timedelta
from random import randrange, choice, randint
def random_date(start, end):
"""
http://stackoverflow.com/a/553448
This function will return a random datetime between two datetime
objects.
"""
delta = end - start
int_delta = (delta.days * 24 * 60 * 60) + delta.seconds
random_second = randrange(int_delta)
return start + timedelta(seconds=random_second)
_end_date = datetime.now()
_start_date = _end_date - timedelta(weeks=300)
available_numbers = ['55500001', '55500002', '55500003', '55500004', '55500005',
'55500006', '55500007']
with open('cdrs.csv', 'w', newline='') as csvfile:
cdrwriter = csv.writer(csvfile, delimiter=',')
cdrwriter.writerow('id,uuid,caller_number,destination_number,context,start_stamp,end_stamp,duration,billsec,hangup_cause,accountcode,read_codec,user_id'.split(','))
for _cdr_count in range(1, 20001):
_cdr_start_dtstamp = random_date(_start_date, _end_date)
_call_duration = randint(0, 1000)
billsec = _call_duration - 5 if (_call_duration - 5) >= 0 else 0
        if billsec == 0:
            hangup_cause = 'No Answer'
        else:
            hangup_cause = 'Answer'
_cdr_end_dtstamp = _cdr_start_dtstamp + timedelta(seconds=_call_duration)
user_id = randint(1,6)
cdrwriter.writerow([_cdr_count, uuid4(), choice(available_numbers), choice(available_numbers), 'FUTILE', _cdr_start_dtstamp, _cdr_end_dtstamp, _call_duration, billsec, hangup_cause, 'Blah', 'Whoop', user_id])
import pandas as pd
_cdr = pd.read_csv('cdrs.csv')
_cdr.head(3)
pd.set_option('display.max_columns', 1)
_cdr.head(2)
# default is 20, back to default
pd.set_option('display.max_columns', 20)
_cdr.head(1)
users
_cdr = pd.read_csv('cdrs.csv', usecols=['caller_number', 'destination_number', 'start_stamp' ,'end_stamp', 'duration',
'billsec', 'hangup_cause', 'user_id'],
dtype={'caller_number': 'int', 'destination_number': 'str'},
index_col='start_stamp',
parse_dates=['start_stamp', 'end_stamp'])
_cdr.head(3)
_cdr.info()
"""
Explanation: Getting data into Pandas
read_csv
read_excel
read_hdf
read_sql
read_json
read_html
read_stata
read_sas
read_clipboard
read_pickle
DataFrame
End of explanation
"""
_cdr = _cdr.merge(users, how='inner', left_on='user_id', right_on='id')
_cdr.head(5)
"""
Explanation: Merging dataframes
Merge
left
right
outer
inner
Concatenate
Join
http://pandas.pydata.org/pandas-docs/stable/merging.html
End of explanation
"""
type(_cdr['user_id'])
type(_cdr['user_id'][0])
_cdr['user_id'][0]
_cdr[:12]['billsec']
billsec = _cdr['billsec']
print(type(billsec))
billsec.mean()
billsec.mean() == _cdr['billsec'].mean()
_cdr['caller_number'].value_counts()
_cdr['hangup_cause'].unique()
_cdr.groupby('hangup_cause').count()
_cdr.groupby('hangup_cause').user_id.count()
_cdr['email'] == 'shut.up.quark@ds9.subspace'
ben_cdrs = _cdr[_cdr['email'] == 'shut.up.quark@ds9.subspace']
ben_cdrs.head(2)
ben_cdrs.caller_number.count()
_cdr[(_cdr['email'] == 'shut.up.quark@ds9.subspace') & (_cdr['hangup_cause'] == 'Answer')].caller_number.count()
_cdr[(_cdr['email'] == 'shut.up.quark@ds9.subspace') & (_cdr['hangup_cause'] == 'No Answer')].caller_number.count()
_cdr[(_cdr['email'] != 'shut.up.quark@ds9.subspace') & (_cdr['user_id'] != 6 )].caller_number.count()
_cdr.groupby('caller_number').count().sort_values('destination_number', ascending=False).head(3)
"""
Explanation: Getting subsets of your data (slicing, etc)
End of explanation
"""
%matplotlib inline
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
pd.set_option('display.width', 5000)
pd.set_option('display.max_columns', 60)
billsec = _cdr[:30]['billsec']
billsec.plot(kind='bar')
_cdr[_cdr['email'] == 'reading.rainbow@enterprise.subspace'][:10].billsec.plot(kind='barh')
_cdr['user_id'].value_counts().plot()
_cdr['user_id'].value_counts().plot(kind='pie')
_cdr.count()
"""
Explanation: Plotting
End of explanation
"""
|
radicalrafi/radicalrafi.github.io
|
posts/Deep_Learning_Scribbling_I_Deep_Learning_Frameworks.ipynb
|
mit
|
# KERAS SEQUENTIAL EXAMPLE FOR CLASSIFICATIOn
from keras import models,layers,datasets
(x_train,y_train),(x_test,y_test) = datasets.mnist.load_data()
import numpy as np
from keras import utils
y_test = utils.to_categorical(y_test)
y_train = utils.to_categorical(y_train)
# NORMALIZE THE DATA
def normalizer(x):
x = x.reshape((x.shape[0],28*28))
x = x.astype('float32')
x -= x.mean()
x /= x.std()
return x
x_train = normalizer(x_train)
x_test = normalizer(x_test)
# BUILD AN MLP BY STACKING LAYER OVER LAYER
model = models.Sequential()
model.add(layers.Dense(512,input_shape=(28*28,)))
model.add(layers.Dense(396,activation="relu"))
model.add(layers.Dense(256,activation="relu"))
model.add(layers.Dense(128,activation="elu"))
model.add(layers.Dense(64,activation="elu"))
model.add(layers.Dense(32,activation="elu"))
model.add(layers.Dense(10,activation="softmax"))
# COMPILE THE MODEL
model.compile(optimizer="adam",loss="categorical_crossentropy",metrics=["accuracy"])
# FIT THE MODEL
model.fit(x_train,y_train,epochs=10,batch_size=32,validation_data=(x_test,y_test))
"""
Explanation: This is a series of posts about deep learning: not about how to classify Fashion MNIST, but about how to use the science and its tools. I will discuss frameworks, architecting models, solving problems, and a bunch of flash notes for the things we tend to forget; alas, we are not machines.
Deep Learning Frameworks
Deep learning frameworks are quite interesting, because they require a serious feat of engineering: providing cross-platform software, ultra-fast computation, numerical correctness and, most of all, a Python interface.
They come in two varieties, Dynamic and Static. I like to separate them using another metric, UX: there are those that can be used, and those that can't. Usage = (Time to Solve) - (Time to Fight the Tool).
Away from this comedy of emacs vs vim, since it's all about user preference (but really, tensorflow?), let's try to decipher how a framework is built, how it can be used, and how to go from architecture to code.
P.S.: I didn't mention Keras, because Keras is actually the knife compared to Tensorflow the rusty chainsaw or PyTorch the scalpel.
What is Deep Learning in 1 Line .
Deep Learning is trying to approximate an unknown function using a set of examples .
In More Lines
Learning Representations
Deep learning approximates an unknown function by learning representations of its inputs and trying to generalize from them. Learning representations is what happens when the weights of layers or neurons are optimized. The linear operation $Y = WX + b$ can only learn linear relations, whereas introducing a new component, the activation function, adds non-linearities to the learning process, e.g. $Y = Z(WX + b) = \max(0, WX + b)$.
Representations, a.k.a. features, are the characteristics that describe your input data; features are essentially random variables, and engineering features means constructing meaningful characteristics for your inputs. Good features are essential for successful and easier learning. Deep neural networks have the ability to learn good features by training: the subsequent stacking of layers acts as a representation filter that tries to learn good representations to hand to the output layer, for example a softmax layer that acts as a classifier.
The hidden layers act like a feature engineering pipeline that does what used to be a manual, domain-driven task automatically, and sometimes better (ConvNets).
Layers such as the convolutional layer are efficient representation learners that pick up small patterns in parts of images. Images are tensors mathematically, but more importantly images have a visual structure: a flattened image would be hard to understand, whereas a normal image can tell a thousand words. The convolution operation, which essentially scans a tensor and multiplies it by a filter (or kernel), learns a specific representation from each part while keeping the structure of the image intact; "the nose is in the center" becomes a learnable representation (more on this another time).
N.B :
* A Note
Getting Started with Keras
Keras is a modular wrapper around Tensorflow; it's the actual reason Tensorflow is used by so many people*. Keras lets you build models brick by brick in the literal sense (Sequential) or by telekinesis (Functional API <3).
Keras provides a highly friendly API to turn any architecture you have in mind into code, and to train and test it at the same time.
Blocks
Deep Learning requires some tools. First you have to design the network architecture: whether you
are using fully connected layers or a series of Conv -> MaxPool blocks, you need to have in mind a way to approach the problem. Generally, as a rule of thumb, we have these heuristics:
CNNs for Images
RNN, 1D-CNNs for Text
Boosting, Random Forests or Categorical Embeddings or Wide & Deep for Structured Data
VAE, GAN , {xyz}-GAN for Generative
N.B : *
Tensorflow is the default Keras Backend
Next you need to preprocess your data; as you may know, NNs love values in (0, 1), so you will often have
to standardize your dataset.
You'll also need to understand loss functions, because different problems need different loss functions and the choice of loss may affect your convergence.
And a GPU or even better a colab
I love Google
At this point you are ready to train your neural network and watch it reach 99% accuracy on MNIST .
End of explanation
"""
# http://pytorch.org/
from os import path
from wheel.pep425tags import get_abbr_impl, get_impl_ver, get_abi_tag
platform = '{}{}-{}'.format(get_abbr_impl(), get_impl_ver(), get_abi_tag())
accelerator = 'cu80' if path.exists('/opt/bin/nvidia-smi') else 'cpu'
!pip install -q http://download.pytorch.org/whl/{accelerator}/torch-0.4.0-{platform}-linux_x86_64.whl torchvision
import torch
import torch.nn as nn
import torch.nn.functional as F
class NeuralNet(nn.Module):
def __init__(self):
super(NeuralNet,self).__init__()
# Linear is the affine transformation y = w*x + b
self.fc1 = nn.Linear(784,512)
self.fc2 = nn.Linear(512,396)
self.fc3 = nn.Linear(396,256)
self.fc4 = nn.Linear(256,128)
self.fc5 = nn.Linear(128,64)
self.fc6 = nn.Linear(64,32)
self.fc7 = nn.Linear(32,10)
def forward(self,x):
# The forward pass is what happens from each layer to layer in other words
# the flow of inputs trough the network
# here we describe the activation and maxpooling operations ...
x = F.relu(self.fc1(x))
x = F.relu(self.fc2(x))
x = F.relu(self.fc3(x))
x = F.relu(self.fc4(x))
x = F.elu(self.fc5(x))
x = F.elu(self.fc6(x))
x = self.fc7(x)
return x
net = NeuralNet()
print(net)
"""
Explanation: A Look Into PyTorch
Models as Code
PyTorch is essentially a library to build and train deep neural nets, and it also serves as a NumPy-on-GPU library.
PyTorch gives you modules (optim, nn, torchvision) that can be used together to write your model as code; computations are executed dynamically (no model compilation as in Tensorflow).
Let me show you an example similar to what we just did with Keras
End of explanation
"""
from torchvision import datasets, transforms, utils
normalizer = transforms.Compose([transforms.ToTensor(),transforms.Normalize((0.5,), (1.0,))])
# load dataset
train_set = datasets.MNIST(root='./data', train=True, transform=normalizer, download=True)
test_set = datasets.MNIST(root='./data', train=False, transform=normalizer, download=True)
batch_size = 100
train_loader = torch.utils.data.DataLoader(
dataset=train_set,
batch_size=batch_size,
shuffle=True)
test_loader = torch.utils.data.DataLoader(
dataset=test_set,
batch_size=batch_size,
shuffle=False)
"""
Explanation: Now we have built a NeuralNet class with a specific architecture; as you may notice,
you could create a NeuralNet factory that generates different models based on its inputs, but let's keep that for later.
Let's define a train function that trains the network.
P.S.: PyTorch deduces and executes the backward pass (backpropagation) from the forward operation.
End of explanation
"""
# loss function Categorical Cross Entropy
criterion = nn.CrossEntropyLoss()
# optimizer
optimizer = torch.optim.SGD(net.parameters(),lr=0.01,momentum=0.9)
def train(epochs):
for epoch in range(epochs):
        # training
        ave_loss = 0  # running average of the loss, used only for logging
for batch_idx, (x, y) in enumerate(train_loader):
optimizer.zero_grad()
x = x.view(-1, 28*28)
x = torch.autograd.Variable(x)
y = torch.autograd.Variable(y)
out = net(x)
loss = criterion(out, y)
            ave_loss = ave_loss * 0.9 + loss.data[0] * 0.1
loss.backward()
optimizer.step()
if (batch_idx+1) % 100 == 0 or (batch_idx+1) == len(train_loader):
                print('==>>> epoch: {} , loss : {}'.format(epoch, ave_loss))
train(10)
def test(epoch):
net.eval()
test_loss = 0
correct = 0
acc_history = []
for data, target in test_loader:
data = data.view(-1,28*28) # view is equivalent to np.reshape
data = torch.autograd.Variable(data, volatile=True)
target = torch.autograd.Variable(target)
output = net(data)
        test_loss += F.cross_entropy(output, target).data[0]  # net outputs raw logits, so use cross_entropy (nll_loss expects log-probabilities)
pred = output.data.max(1)[1] # get the index of the max log-probability
correct += pred.eq(target.data).cpu().sum()
test_loss = test_loss
test_loss /= len(test_loader) # loss function already averages over batch size
accuracy = 100. * correct / len(test_loader.dataset)
acc_history.append(accuracy)
print('\nTest set: Average loss: {:.4f}, Accuracy: {}/{} ({:.0f}%)\n'.format(
test_loss, correct, len(test_loader.dataset),
accuracy))
test(10)
"""
Explanation: Now that our data loaders (PyTorch data generators) are created, we can move on to implementing
a train function. This could have been a method of the NeuralNet defined above, or just a standalone function, as we do here.
"""
|
PythonFreeCourse/Notebooks
|
week01/4_Variables.ipynb
|
mit
|
print(3.14159265358 * 5 * 5 * 2)
"""
Explanation: <img src="images/logo.jpg" style="display: block; margin-left: auto; margin-right: auto;" alt="לוגו של מיזם לימוד הפייתון. נחש מצויר בצבעי צהוב וכחול, הנע בין האותיות של שם הקורס: לומדים פייתון. הסלוגן המופיע מעל לשם הקורס הוא מיזם חינמי ללימוד תכנות בעברית.">
<p style="align: right; direction: rtl; float: right;">משתנים</p>
<p style="text-align: right; direction: rtl; float: right;">
בואו נחשב את נפח הפיצה שלנו לפי אורך המשולש ועוביו.<br>
הנוסחה לחישוב נפח פיצה היא:
<span style="display: inline-flex; direction: ltr;">$pi \times z \times z \times a$</span>,
כאשר <span style="display: inline-flex; direction: ltr;">$z$</span> הוא אורך המשולש ו־<span style="display: inline-flex; direction: ltr;">$a$</span> הוא עובי המשולש.<br>
pi, או <span style="display: inline-flex; direction: ltr;">$\pi$</span>, הוא מספר קבוע שנקרא בעברית <dfn>פאי</dfn>, וערכו קרוב ל־3.141592653589793. בפועל יש לו אין־סוף ספרות אחרי הנקודה העשרונית, אבל אנחנו נסתפק באלו.<br>
נניח שאורכו של המשולש הוא 5 מטרים, ועוביו הוא 2 מטרים (זו למעשה עוגת פיצה. אנחנו אוהבים!)<br>
נחשב:
</p>
End of explanation
"""
pi = 3.14159265358
print(pi * 5 * 5 * 2)
"""
Explanation: <img src="images/pizza_cake.jpg" style="display: block; margin-left: auto; margin-right: auto;" alt="פיצה עבה מאוד, עם הרבה גבינה ותוספות של עגבנייה, זיתים וגמבה." width="40%">
<p style="text-align: right; direction: rtl; float: right;">
אך <strong>אבוי</strong>!<br>
מתכנת אחר שיקרא את הקוד שלכם, עלול להתבלבל מכמות המספרים הלא מובנים בעליל שכתובים שם, וקרוב לוודאי שהוא לא יבין מה הם אומרים.<br>
יותר מזה, אם תרצו לחשב את גודלן של פיצות רבות נוספות, תצטרכו לכתוב את פאי המסורבל (סליחה פאי) פעמים רבות בקוד.
</p>
<p style="align: right; direction: rtl; float: right;">השמה</p>
<p style="text-align: right; direction: rtl; float: right;">
למזלנו, בפייתון יש דרך לתת לערכים שם, ממש כמו תווית שכזו. ערכים עם שם נקראים <dfn>משתנים</dfn>, ויש להם יתרונות רבים נוספים שנגלה בהמשך.<br>
כעת נדגים כיצד אנחנו נותנים לערך "פאי" שם, ואיך מייד לאחר מכן אנחנו משתמשים בו.
</p>
End of explanation
"""
pi = 3.14159265358
z = 5
a = 2
print(pi * z * z * a)
"""
Explanation: <p style="text-align: right; direction: rtl; float: right;">
תחילה נשים לב לכך שאף שהשתמשנו בסימן <code>=</code>, השורה הראשונה היא לא שוויון מהסוג שאנחנו רגילים אליו.<br>
משמעות הסימן בתכנות שונה לחלוטין, והעיקרון שאותו הוא מממש נקרא <dfn>השמה</dfn>.<br>
<mark>בהשמה אנחנו שמים את הערך שנמצא בצד ימין של השווה, בתוך משתנה ששמו נכתב בצד שמאל של השווה.</mark><br>
אחרי שביצענו את הפעולה הזו, בכל פעם שנכתוב את שמו של המשתנה, פייתון תבין את מה שכתוב שם <em>כאילו</em> רשמנו את הערך שנמצא בתוכו.
</p>
<p style="text-align: right; direction: rtl; float: right;">
ניצור משתנים גם עבור שאר הערכים:
</p>
End of explanation
"""
pi = 3.14159265358
length = 5  # length
thickness = 2  # thickness
pizza_volume = pi * length * length * thickness  # the volume of the pizza
print(pizza_volume)
"""
Explanation: <p style="text-align: right; direction: rtl; float: right;">
או אם בא לנו להיות אפילו מובנים יותר, ניתן למשתנים שמות ברורים:
</p>
End of explanation
"""
a = 5
MyBirthday = '12/07/1995'
mybirthday = '12/07/1995'
pizza_radius = 5.3
pizza = 3
pizza radius = 2.7
pizza's_price = 2
5_pizza_price = 30
my_name = "Yam Mesicka"
your_name = '4Tomer Gavish'
"""
Explanation: <p style="text-align: right; direction: rtl; float: right;">
אם נרצה לתאר את הקוד שלנו במילים, נוכל להגיד שיצרנו שלושה משתנים שאליהם השמנו את הנתונים הדרושים לנו לחישוב נפח הפיצה.<br>
בסוף, חישבנו את נפח הפיצה, ואת התוצאה השמנו למשתנה נוסף בשם <var>pizza_volume</var>.<br>
את הערך הנמצא במשתנה הזה הדפסנו למסך.
</p>
<p style="align: right; direction: rtl; float: right;">מונחים</p>
<dl style="text-align: right; direction: rtl; float: right; white-space: nowrap;">
<dt>אופרטור ההשמה</dt><dd>סתם שם מפחיד שמתאר את הסימן <code>=</code> כשרוצים לבצע השמה.</dd>
<dt>שם המשתנה</dt><dd>מופיע משמאל לאופרטור ההשמה.</dd>
<dt>ערך המשתנה</dt><dd>התוכן (הערך) של המשתנה שבסופו של דבר המחשב יזכור. מופיע מימין לאופרטור ההשמה.</dd>
</dl>
<p style="align: right; direction: rtl; float: right;">איך משתנים עובדים?</p>
<p style="text-align: right; direction: rtl; float: right;">
אפשר לדמיין משתנים כמצביע לייזר קטן.<br>
כשאתם מבצעים <em>השמה</em>, אתם מבקשים מפייתון ליצור לייזר בשם שבחרתם, ולהצביע בעזרתו על ערך מסוים.<br>
נניח, במקרה שבו <code dir="ltr" style="direction: ltr;">pi = 3.14</code>, אנחנו מבקשים מפייתון ליצור לייזר בשם <var>pi</var> שיצביע על הערך <samp>3.14</samp>.<br>
בכל פעם שתציינו בהמשך הקוד את שם הלייזר, פייתון תבדוק להיכן הוא מצביע, ותיקח את הערך שנמצא שם.<br>
אם כך, לצורך האנלוגיה הזו, הלייזר הוא <em>שם המשתנה</em>, שמצביע על <em>ערך המשתנה</em>.
</p>
<figure>
<img src="images/laser_variables.svg" style="display: block; margin-left: auto; margin-right: auto;" width="200px" alt="איור להמחשה של שלושה לייזרים שעליהם מודבקות התוויות pi, length ו־thick. מכל אחד מהלייזרים יוצאת קרן לייזר אדומה, שמצביעה (בהתאמה) על המספרים 3.14, 5 ו־2.">
<figcaption style="text-align: center; direction: rtl;">המחשה של שלושה לייזרים שמצביעים על משתנים.</figcaption>
</figure>
<p style="align: right; direction: rtl; float: right;">חוקים לשמות משתנים</p>
<ol style="text-align: right; direction: rtl; float: right; white-space: nowrap;">
<li style="white-space: nowrap;"><strong>חוק 1:</strong> שם משתנה יכול לכלול רק ספרות (<code>0–9</code>), אותיות לטיניות גדולות (<code>A-Z</code>) או קטנות (<code>a-z</code>) וקו תחתון (<code>_</code>).</li>
<li style="white-space: nowrap;"><strong>חוק 2:</strong> שם משתנה לא יכול להתחיל בספרה.</li>
<li style="white-space: nowrap;"><strong>מוסכמה 1:</strong> נהוג ששם משתנה יהיה באותיות קטנות, ומילים יופרדו בקווים תחתונים.</li>
<li style="white-space: nowrap;"><strong>מוסכמה 2:</strong> נהוג לתת למשתנה שם שמתאר היטב את תפקידו בקוד.</li>
</ol>
<p style="text-align: right; direction: rtl; float: right;">
המוסכמה הראשונה נכונה ברוב המקרים, אך לא בכולם. בעתיד ניכנס לעובי הקורה בכל הנוגע לכתיבת קוד בהתאם למוסכמות בעולם התכנות.
</p>
<p style="align: right; direction: rtl; float: right;">תרגול שמות משתנים</p>
<p style="align: right; direction: rtl; float: right;">בדקו האם שם המשתנה עומד בחוקים ובמוסכמות</p>
<div class="align-center" style="display: flex; text-align: right; direction: rtl; clear: both;">
<div style="display: flex; width: 10%; float: right; clear: both;">
<img src="images/exercise.svg" style="height: 50px !important;" alt="תרגול">
</div>
<div style="width: 70%">
<p style="text-align: right; direction: rtl; float: right; clear: both;">
עבור כל אחד משמות המשתנים שלפניכם, רשמו במחברת או בהערה האם שם המשתנה עומד בחוקים ובמוסכמות.<br>
אחרי שרשמתם בצד, תקנו את שמות המשתנים הבעיתיים, וודאו שהרצת התא לא תגרום לשגיאה.
</p>
</div>
</div>
End of explanation
"""
my_age = 24
my_age = my_age + 1
print(my_age)
"""
Explanation: <span style="align: right; direction: rtl; float: right; clear: both">תנו שם למשתנים הבאים:</span>
<ol style="text-align: right; direction: rtl; float: right; white-space: nowrap;">
<li>השם של כותב הספר.</li>
<li>הגיל של המשתמש בתוכנית.</li>
<li>המלל במחברת הזו.</li>
<li>מספר הקילומטרים מתל אביב לניו יורק.</li>
</ol>
<p style="align: right; direction: rtl; float: right;">עריכת ערכי משתנים</p>
<p style="text-align: right; direction: rtl; float: right;">
בתחילת המחברת ביצענו כמה השמות למשתנים. לעיתים קרובות, נרצה <em>לערוך</em> את התוכן של המשתנה.<br>
בואו נראה דוגמה:
</p>
End of explanation
"""
|
xpharry/Udacity-DLFoudation
|
tutorials/sentiment_network/Sentiment Classification - Mini Project 2.ipynb
|
mit
|
def pretty_print_review_and_label(i):
print(labels[i] + "\t:\t" + reviews[i][:80] + "...")
g = open('reviews.txt','r') # What we know!
reviews = list(map(lambda x:x[:-1],g.readlines()))
g.close()
g = open('labels.txt','r') # What we WANT to know!
labels = list(map(lambda x:x[:-1].upper(),g.readlines()))
g.close()
len(reviews)
reviews[0]
labels[0]
"""
Explanation: Sentiment Classification & How To "Frame Problems" for a Neural Network
by Andrew Trask
Twitter: @iamtrask
Blog: http://iamtrask.github.io
What You Should Already Know
neural networks, forward and back-propagation
stochastic gradient descent
mean squared error
and train/test splits
Where to Get Help if You Need it
Re-watch previous Udacity Lectures
Leverage the recommended Course Reading Material - Grokking Deep Learning (40% Off: traskud17)
Shoot me a tweet @iamtrask
Tutorial Outline:
Intro: The Importance of "Framing a Problem"
Curate a Dataset
Developing a "Predictive Theory"
PROJECT 1: Quick Theory Validation
Transforming Text to Numbers
PROJECT 2: Creating the Input/Output Data
Putting it all together in a Neural Network
PROJECT 3: Building our Neural Network
Understanding Neural Noise
PROJECT 4: Making Learning Faster by Reducing Noise
Analyzing Inefficiencies in our Network
PROJECT 5: Making our Network Train and Run Faster
Further Noise Reduction
PROJECT 6: Reducing Noise by Strategically Reducing the Vocabulary
Analysis: What's going on in the weights?
Lesson: Curate a Dataset
End of explanation
"""
print("labels.txt \t : \t reviews.txt\n")
pretty_print_review_and_label(2137)
pretty_print_review_and_label(12816)
pretty_print_review_and_label(6267)
pretty_print_review_and_label(21934)
pretty_print_review_and_label(5297)
pretty_print_review_and_label(4998)
"""
Explanation: Lesson: Develop a Predictive Theory
End of explanation
"""
from collections import Counter
import numpy as np
positive_counts = Counter()
negative_counts = Counter()
total_counts = Counter()
for i in range(len(reviews)):
if(labels[i] == 'POSITIVE'):
for word in reviews[i].split(" "):
positive_counts[word] += 1
total_counts[word] += 1
else:
for word in reviews[i].split(" "):
negative_counts[word] += 1
total_counts[word] += 1
positive_counts.most_common()
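# Descriptive note on the next two loops: for every word seen more than 100 times we
# compute the ratio of its count in positive reviews to its count in negative reviews,
# then squash that ratio with a log so that neutral words land near 0 while strongly
# polarised words get large positive or negative values.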
pos_neg_ratios = Counter()
for term,cnt in list(total_counts.most_common()):
if(cnt > 100):
pos_neg_ratio = positive_counts[term] / float(negative_counts[term]+1)
pos_neg_ratios[term] = pos_neg_ratio
for word,ratio in pos_neg_ratios.most_common():
if(ratio > 1):
pos_neg_ratios[word] = np.log(ratio)
else:
pos_neg_ratios[word] = -np.log((1 / (ratio+0.01)))
# words most frequently seen in a review with a "POSITIVE" label
pos_neg_ratios.most_common()
# words most frequently seen in a review with a "NEGATIVE" label
list(reversed(pos_neg_ratios.most_common()))[0:30]
"""
Explanation: Project 1: Quick Theory Validation
End of explanation
"""
from IPython.display import Image
review = "This was a horrible, terrible movie."
Image(filename='sentiment_network.png')
review = "The movie was excellent"
Image(filename='sentiment_network_pos.png')
"""
Explanation: Transforming Text into Numbers
End of explanation
"""
|
ES-DOC/esdoc-jupyterhub
|
notebooks/nerc/cmip6/models/sandbox-1/ocean.ipynb
|
gpl-3.0
|
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'nerc', 'sandbox-1', 'ocean')
"""
Explanation: ES-DOC CMIP6 Model Properties - Ocean
MIP Era: CMIP6
Institute: NERC
Source ID: SANDBOX-1
Topic: Ocean
Sub-Topics: Timestepping Framework, Advection, Lateral Physics, Vertical Physics, Uplow Boundaries, Boundary Forcing.
Properties: 133 (101 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:27
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
"""
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Authors
Set document authors
End of explanation
"""
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Contributors
Specify document contributors
End of explanation
"""
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
"""
Explanation: Document Publication
Specify document publication status
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Seawater Properties
3. Key Properties --> Bathymetry
4. Key Properties --> Nonoceanic Waters
5. Key Properties --> Software Properties
6. Key Properties --> Resolution
7. Key Properties --> Tuning Applied
8. Key Properties --> Conservation
9. Grid
10. Grid --> Discretisation --> Vertical
11. Grid --> Discretisation --> Horizontal
12. Timestepping Framework
13. Timestepping Framework --> Tracers
14. Timestepping Framework --> Baroclinic Dynamics
15. Timestepping Framework --> Barotropic
16. Timestepping Framework --> Vertical Physics
17. Advection
18. Advection --> Momentum
19. Advection --> Lateral Tracers
20. Advection --> Vertical Tracers
21. Lateral Physics
22. Lateral Physics --> Momentum --> Operator
23. Lateral Physics --> Momentum --> Eddy Viscosity Coeff
24. Lateral Physics --> Tracers
25. Lateral Physics --> Tracers --> Operator
26. Lateral Physics --> Tracers --> Eddy Diffusity Coeff
27. Lateral Physics --> Tracers --> Eddy Induced Velocity
28. Vertical Physics
29. Vertical Physics --> Boundary Layer Mixing --> Details
30. Vertical Physics --> Boundary Layer Mixing --> Tracers
31. Vertical Physics --> Boundary Layer Mixing --> Momentum
32. Vertical Physics --> Interior Mixing --> Details
33. Vertical Physics --> Interior Mixing --> Tracers
34. Vertical Physics --> Interior Mixing --> Momentum
35. Uplow Boundaries --> Free Surface
36. Uplow Boundaries --> Bottom Boundary Layer
37. Boundary Forcing
38. Boundary Forcing --> Momentum --> Bottom Friction
39. Boundary Forcing --> Momentum --> Lateral Friction
40. Boundary Forcing --> Tracers --> Sunlight Penetration
41. Boundary Forcing --> Tracers --> Fresh Water Forcing
1. Key Properties
Ocean key properties
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of ocean model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of ocean model code (NEMO 3.6, MOM 5.0,...)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_family')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OGCM"
# "slab ocean"
# "mixed layer ocean"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.3. Model Family
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of ocean model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.basic_approximations')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Primitive equations"
# "Non-hydrostatic"
# "Boussinesq"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.4. Basic Approximations
Is Required: TRUE Type: ENUM Cardinality: 1.N
Basic approximations made in the ocean.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Potential temperature"
# "Conservative temperature"
# "Salinity"
# "U-velocity"
# "V-velocity"
# "W-velocity"
# "SSH"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.5. Prognostic Variables
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of prognostic variables in the ocean component.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Linear"
# "Wright, 1997"
# "Mc Dougall et al."
# "Jackett et al. 2006"
# "TEOS 2010"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 2. Key Properties --> Seawater Properties
Physical properties of seawater in ocean
2.1. Eos Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of EOS for sea water
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_temp')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Potential temperature"
# "Conservative temperature"
# TODO - please enter value(s)
"""
Explanation: 2.2. Eos Functional Temp
Is Required: TRUE Type: ENUM Cardinality: 1.1
Temperature used in EOS for sea water
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_salt')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Practical salinity Sp"
# "Absolute salinity Sa"
# TODO - please enter value(s)
"""
Explanation: 2.3. Eos Functional Salt
Is Required: TRUE Type: ENUM Cardinality: 1.1
Salinity used in EOS for sea water
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pressure (dbars)"
# "Depth (meters)"
# TODO - please enter value(s)
"""
Explanation: 2.4. Eos Functional Depth
Is Required: TRUE Type: ENUM Cardinality: 1.1
Depth or pressure used in EOS for sea water ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_freezing_point')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TEOS 2010"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 2.5. Ocean Freezing Point
Is Required: TRUE Type: ENUM Cardinality: 1.1
Equation used to compute the freezing point (in deg C) of seawater, as a function of salinity and pressure
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_specific_heat')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 2.6. Ocean Specific Heat
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Specific heat in ocean (cpocean) in J/(kg K)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_reference_density')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 2.7. Ocean Reference Density
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Boussinesq reference density (rhozero) in kg / m3
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.reference_dates')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Present day"
# "21000 years BP"
# "6000 years BP"
# "LGM"
# "Pliocene"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 3. Key Properties --> Bathymetry
Properties of bathymetry in ocean
3.1. Reference Dates
Is Required: TRUE Type: ENUM Cardinality: 1.1
Reference date of bathymetry
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 3.2. Type
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the bathymetry fixed in time in the ocean ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.ocean_smoothing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 3.3. Ocean Smoothing
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe any smoothing or hand editing of bathymetry in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.source')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 3.4. Source
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe source of bathymetry in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.nonoceanic_waters.isolated_seas')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4. Key Properties --> Nonoceanic Waters
Non oceanic waters treatement in ocean
4.1. Isolated Seas
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how isolated seas is performed
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.nonoceanic_waters.river_mouth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4.2. River Mouth
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how river mouth mixing or estuaries specific treatment is performed
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5. Key Properties --> Software Properties
Software properties of ocean code
5.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6. Key Properties --> Resolution
Resolution in the ocean grid
6.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.2. Canonical Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Expression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.range_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.3. Range Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Range of horizontal resolution with spatial details, eg. 50(Equator)-100km or 0.1-0.5 degrees etc.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 6.4. Number Of Horizontal Gridpoints
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Total number of horizontal (XY) points (or degrees of freedom) on computational grid.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 6.5. Number Of Vertical Levels
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of vertical levels resolved on computational grid.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
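# Illustrative sketch only (hypothetical answer, not from the source notebook);
# a BOOLEAN property is filled in with one of the valid choices above, e.g.:
# DOC.set_value(False)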
"""
Explanation: 6.6. Is Adaptive Grid
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Default is False. Set true if grid resolution changes during execution.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.thickness_level_1')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 6.7. Thickness Level 1
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Thickness of first surface ocean level (in meters)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7. Key Properties --> Tuning Applied
Tuning methodology for ocean component
7.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process oriented metrics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7.2. Global Mean Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List set of metrics of the global mean state used in tuning model/component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7.3. Regional Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List of regional metrics of mean state (e.g THC, AABW, regional means etc) used in tuning model/component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7.4. Trend Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List observed trend metrics used in tuning model/component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8. Key Properties --> Conservation
Conservation in the ocean component
8.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Brief description of conservation methodology
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.scheme')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Energy"
# "Enstrophy"
# "Salt"
# "Volume of ocean"
# "Momentum"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 8.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.N
Properties conserved in the ocean by the numerical schemes
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.consistency_properties')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.3. Consistency Properties
Is Required: FALSE Type: STRING Cardinality: 0.1
Any additional consistency properties (energy conversion, pressure gradient discretisation, ...)?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.corrected_conserved_prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.4. Corrected Conserved Prognostic Variables
Is Required: FALSE Type: STRING Cardinality: 0.1
Set of variables which are conserved by more than the numerical scheme alone.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.was_flux_correction_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 8.5. Was Flux Correction Used
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Does conservation involve flux correction ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9. Grid
Ocean grid
9.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of grid in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.vertical.coordinates')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Z-coordinate"
# "Z*-coordinate"
# "S-coordinate"
# "Isopycnic - sigma 0"
# "Isopycnic - sigma 2"
# "Isopycnic - sigma 4"
# "Isopycnic - other"
# "Hybrid / Z+S"
# "Hybrid / Z+isopycnic"
# "Hybrid / other"
# "Pressure referenced (P)"
# "P*"
# "Z**"
# "Other: [Please specify]"
# TODO - please enter value(s)
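# Illustrative sketch only (hypothetical choice, not from the source notebook);
# an ENUM property is filled in with one of the valid choices listed above, e.g.:
# DOC.set_value("Z-coordinate")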
"""
Explanation: 10. Grid --> Discretisation --> Vertical
Properties of vertical discretisation in ocean
10.1. Coordinates
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of vertical coordinates in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.vertical.partial_steps')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 10.2. Partial Steps
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Using partial steps with Z or Z* vertical coordinate in ocean ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Lat-lon"
# "Rotated north pole"
# "Two north poles (ORCA-style)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 11. Grid --> Discretisation --> Horizontal
Type of horizontal discretisation scheme in ocean
11.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal grid type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.staggering')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Arakawa B-grid"
# "Arakawa C-grid"
# "Arakawa E-grid"
# "N/a"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 11.2. Staggering
Is Required: FALSE Type: ENUM Cardinality: 0.1
Horizontal grid staggering type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Finite difference"
# "Finite volumes"
# "Finite elements"
# "Unstructured grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 11.3. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal discretisation scheme in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 12. Timestepping Framework
Ocean Timestepping Framework
12.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of time stepping in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.diurnal_cycle')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Via coupling"
# "Specific treatment"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 12.2. Diurnal Cycle
Is Required: TRUE Type: ENUM Cardinality: 1.1
Diurnal cycle type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.tracers.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Leap-frog + Asselin filter"
# "Leap-frog + Periodic Euler"
# "Predictor-corrector"
# "Runge-Kutta 2"
# "AM3-LF"
# "Forward-backward"
# "Forward operator"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13. Timestepping Framework --> Tracers
Properties of tracers time stepping in ocean
13.1. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Tracers time stepping scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.tracers.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 13.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Tracers time step (in seconds)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Preconditioned conjugate gradient"
# "Sub cyling"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 14. Timestepping Framework --> Baroclinic Dynamics
Baroclinic dynamics in ocean
14.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Baroclinic dynamics type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Leap-frog + Asselin filter"
# "Leap-frog + Periodic Euler"
# "Predictor-corrector"
# "Runge-Kutta 2"
# "AM3-LF"
# "Forward-backward"
# "Forward operator"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 14.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Baroclinic dynamics scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 14.3. Time Step
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Baroclinic time step (in seconds)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.barotropic.splitting')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "split explicit"
# "implicit"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15. Timestepping Framework --> Barotropic
Barotropic time stepping in ocean
15.1. Splitting
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time splitting method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.barotropic.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 15.2. Time Step
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Barotropic time step (in seconds)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.vertical_physics.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 16. Timestepping Framework --> Vertical Physics
Vertical physics time stepping in ocean
16.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Details of vertical time stepping in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17. Advection
Ocean advection
17.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of advection in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.momentum.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Flux form"
# "Vector form"
# TODO - please enter value(s)
"""
Explanation: 18. Advection --> Momentum
Properties of lateral momentum advection scheme in ocean
18.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of lateral momentum advection scheme in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.momentum.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 18.2. Scheme Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of ocean momentum advection scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.momentum.ALE')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 18.3. ALE
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Using ALE for vertical advection ? (if vertical coordinates are sigma)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 19. Advection --> Lateral Tracers
Properties of lateral tracer advection scheme in ocean
19.1. Order
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Order of lateral tracer advection scheme in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.flux_limiter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 19.2. Flux Limiter
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Monotonic flux limiter for lateral tracer advection scheme in ocean ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.effective_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 19.3. Effective Order
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Effective order of limited lateral tracer advection scheme in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 19.4. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Descriptive text for lateral tracer advection scheme in ocean (e.g. MUSCL, PPM-H5, PRATHER,...)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.passive_tracers')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ideal age"
# "CFC 11"
# "CFC 12"
# "SF6"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 19.5. Passive Tracers
Is Required: FALSE Type: ENUM Cardinality: 0.N
Passive tracers advected
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.passive_tracers_advection')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 19.6. Passive Tracers Advection
Is Required: FALSE Type: STRING Cardinality: 0.1
Is advection of passive tracers different from that of active tracers? If so, describe.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.vertical_tracers.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 20. Advection --> Vertical Tracers
Properties of vertical tracer advection scheme in ocean
20.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Descriptive text for vertical tracer advection scheme in ocean (e.g. MUSCL, PPM-H5, PRATHER,...)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.vertical_tracers.flux_limiter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 20.2. Flux Limiter
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Monotonic flux limiter for vertical tracer advection scheme in ocean ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 21. Lateral Physics
Ocean lateral physics
21.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of lateral physics in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Eddy active"
# "Eddy admitting"
# TODO - please enter value(s)
"""
Explanation: 21.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of transient eddy representation in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Horizontal"
# "Isopycnal"
# "Isoneutral"
# "Geopotential"
# "Iso-level"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 22. Lateral Physics --> Momentum --> Operator
Properties of lateral physics operator for momentum in ocean
22.1. Direction
Is Required: TRUE Type: ENUM Cardinality: 1.1
Direction of lateral physics momentum scheme in the ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Harmonic"
# "Bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 22.2. Order
Is Required: TRUE Type: ENUM Cardinality: 1.1
Order of lateral physics momentum scheme in the ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Second order"
# "Higher order"
# "Flux limiter"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 22.3. Discretisation
Is Required: TRUE Type: ENUM Cardinality: 1.1
Discretisation of lateral physics momentum scheme in the ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Space varying"
# "Time + space varying (Smagorinsky)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 23. Lateral Physics --> Momentum --> Eddy Viscosity Coeff
Properties of eddy viscosity coeff in lateral physics momentum scheme in the ocean
23.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Lateral physics momentum eddy viscosity coeff type in the ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.constant_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 23.2. Constant Coefficient
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant, value of eddy viscosity coeff in lateral physics momentum scheme (in m2/s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.variable_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 23.3. Variable Coefficient
Is Required: FALSE Type: STRING Cardinality: 0.1
If space-varying, describe variations of eddy viscosity coeff in lateral physics momentum scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.coeff_background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 23.4. Coeff Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe background eddy viscosity coeff in lateral physics momentum scheme (give values in m2/s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.coeff_backscatter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 23.5. Coeff Backscatter
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there backscatter in eddy viscosity coeff in lateral physics momentum scheme ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.mesoscale_closure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 24. Lateral Physics --> Tracers
Properties of lateral physics for tracers in ocean
24.1. Mesoscale Closure
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there a mesoscale closure in the lateral physics tracers scheme ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.submesoscale_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 24.2. Submesoscale Mixing
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there a submesoscale mixing parameterisation (i.e Fox-Kemper) in the lateral physics tracers scheme ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Horizontal"
# "Isopycnal"
# "Isoneutral"
# "Geopotential"
# "Iso-level"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 25. Lateral Physics --> Tracers --> Operator
Properties of lateral physics operator for tracers in ocean
25.1. Direction
Is Required: TRUE Type: ENUM Cardinality: 1.1
Direction of lateral physics tracers scheme in the ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Harmonic"
# "Bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 25.2. Order
Is Required: TRUE Type: ENUM Cardinality: 1.1
Order of lateral physics tracers scheme in the ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Second order"
# "Higher order"
# "Flux limiter"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 25.3. Discretisation
Is Required: TRUE Type: ENUM Cardinality: 1.1
Discretisation of lateral physics tracers scheme in the ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Space varying"
# "Time + space varying (Smagorinsky)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 26. Lateral Physics --> Tracers --> Eddy Diffusity Coeff
Properties of eddy diffusivity coeff in lateral physics tracers scheme in the ocean
26.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Lateral physics tracers eddy diffusivity coeff type in the ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.constant_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 26.2. Constant Coefficient
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant, value of eddy diffusivity coeff in lateral physics tracers scheme (in m2/s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.variable_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 26.3. Variable Coefficient
Is Required: FALSE Type: STRING Cardinality: 0.1
If space-varying, describe variations of eddy diffusivity coeff in lateral physics tracers scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.coeff_background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 26.4. Coeff Background
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Describe background eddy diffusivity coeff in lateral physics tracers scheme (give values in m2/s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.coeff_backscatter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 26.5. Coeff Backscatter
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there backscatter in eddy diffusivity coeff in lateral physics tracers scheme ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "GM"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 27. Lateral Physics --> Tracers --> Eddy Induced Velocity
Properties of eddy induced velocity (EIV) in lateral physics tracers scheme in the ocean
27.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of EIV in lateral physics tracers in the ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.constant_val')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 27.2. Constant Val
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If EIV scheme for tracers is constant, specify coefficient value (M2/s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.flux_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 27.3. Flux Type
Is Required: TRUE Type: STRING Cardinality: 1.1
Type of EIV flux (advective or skew)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.added_diffusivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 27.4. Added Diffusivity
Is Required: TRUE Type: STRING Cardinality: 1.1
Type of EIV added diffusivity (constant, flow dependent or none)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 28. Vertical Physics
Ocean Vertical Physics
28.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of vertical physics in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.details.langmuir_cells_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 29. Vertical Physics --> Boundary Layer Mixing --> Details
Properties of vertical physics in ocean
29.1. Langmuir Cells Mixing
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there Langmuir cells mixing in upper ocean ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure - TKE"
# "Turbulent closure - KPP"
# "Turbulent closure - Mellor-Yamada"
# "Turbulent closure - Bulk Mixed Layer"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 30. Vertical Physics --> Boundary Layer Mixing --> Tracers
Properties of boundary layer (BL) mixing on tracers in the ocean
30.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of boundary layer mixing for tracers in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 30.2. Closure Order
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If turbulent BL mixing of tracers, specify order of closure (0, 1, 2.5, 3)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 30.3. Constant
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant BL mixing of tracers, specify coefficient (m2/s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 30.4. Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Background BL mixing of tracers coefficient (schema and value in m2/s - may be none)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure - TKE"
# "Turbulent closure - KPP"
# "Turbulent closure - Mellor-Yamada"
# "Turbulent closure - Bulk Mixed Layer"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 31. Vertical Physics --> Boundary Layer Mixing --> Momentum
Properties of boundary layer (BL) mixing on momentum in the ocean
31.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of boundary layer mixing for momentum in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 31.2. Closure Order
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If turbulent BL mixing of momentum, specify order of closure (0, 1, 2.5, 3)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 31.3. Constant
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant BL mixing of momentum, specify coefficient (m2/s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 31.4. Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Background BL mixing of momentum coefficient (schema and value in m2/s - may be none)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.convection_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Non-penetrative convective adjustment"
# "Enhanced vertical diffusion"
# "Included in turbulence closure"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 32. Vertical Physics --> Interior Mixing --> Details
Properties of interior mixing in the ocean
32.1. Convection Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of vertical convection in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.tide_induced_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 32.2. Tide Induced Mixing
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how tide induced mixing is modelled (barotropic, baroclinic, none)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.double_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 32.3. Double Diffusion
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there double diffusion
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.shear_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 32.4. Shear Mixing
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there interior shear mixing
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure / TKE"
# "Turbulent closure - Mellor-Yamada"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 33. Vertical Physics --> Interior Mixing --> Tracers
Properties of interior mixing on tracers in the ocean
33.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of interior mixing for tracers in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 33.2. Constant
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant interior mixing of tracers, specify coefficient (m2/s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.profile')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 33.3. Profile
Is Required: TRUE Type: STRING Cardinality: 1.1
Is the background interior mixing using a vertical profile for tracers (i.e is NOT constant) ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 33.4. Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Background interior mixing of tracers coefficient (schema and value in m2/s - may be none)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure / TKE"
# "Turbulent closure - Mellor-Yamada"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 34. Vertical Physics --> Interior Mixing --> Momentum
Properties of interior mixing on momentum in the ocean
34.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of interior mixing for momentum in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 34.2. Constant
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant interior mixing of momentum, specify coefficient (m2/s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.profile')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 34.3. Profile
Is Required: TRUE Type: STRING Cardinality: 1.1
Is the background interior mixing using a vertical profile for momentum (i.e is NOT constant) ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 34.4. Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Background interior mixing of momentum coefficient (schema and value in m2/s - may be none)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 35. Uplow Boundaries --> Free Surface
Properties of free surface in ocean
35.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of free surface in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Linear implicit"
# "Linear filtered"
# "Linear semi-explicit"
# "Non-linear implicit"
# "Non-linear filtered"
# "Non-linear semi-explicit"
# "Fully explicit"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 35.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Free surface scheme in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.embeded_seaice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 35.3. Embeded Seaice
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the sea-ice embedded in the ocean model (instead of levitating) ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 36. Uplow Boundaries --> Bottom Boundary Layer
Properties of bottom boundary layer in ocean
36.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of bottom boundary layer in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.type_of_bbl')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Diffusive"
# "Acvective"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 36.2. Type Of Bbl
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of bottom boundary layer in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.lateral_mixing_coef')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 36.3. Lateral Mixing Coef
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If bottom BL is diffusive, specify value of lateral mixing coefficient (in m2/s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.sill_overflow')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 36.4. Sill Overflow
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe any specific treatment of sill overflows
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 37. Boundary Forcing
Ocean boundary forcing
37.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of boundary forcing in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.surface_pressure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 37.2. Surface Pressure
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how surface pressure is transmitted to ocean (via sea-ice, nothing specific,...)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.momentum_flux_correction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 37.3. Momentum Flux Correction
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe any type of ocean surface momentum flux correction and, if applicable, how it is applied and where.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers_flux_correction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 37.4. Tracers Flux Correction
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe any type of ocean surface tracers flux correction and, if applicable, how it is applied and where.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.wave_effects')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 37.5. Wave Effects
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how wave effects are modelled at ocean surface.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.river_runoff_budget')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 37.6. River Runoff Budget
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how river runoff from land surface is routed to ocean and any global adjustment done.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.geothermal_heating')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 37.7. Geothermal Heating
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how geothermal heating is present at ocean bottom.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.momentum.bottom_friction.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Linear"
# "Non-linear"
# "Non-linear (drag function of speed of tides)"
# "Constant drag coefficient"
# "None"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 38. Boundary Forcing --> Momentum --> Bottom Friction
Properties of momentum bottom friction in ocean
38.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of momentum bottom friction in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.momentum.lateral_friction.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Free-slip"
# "No-slip"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 39. Boundary Forcing --> Momentum --> Lateral Friction
Properties of momentum lateral friction in ocean
39.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of momentum lateral friction in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "1 extinction depth"
# "2 extinction depth"
# "3 extinction depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 40. Boundary Forcing --> Tracers --> Sunlight Penetration
Properties of sunlight penetration scheme in ocean
40.1. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of sunlight penetration scheme in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.ocean_colour')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 40.2. Ocean Colour
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the ocean sunlight penetration scheme ocean colour dependent ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.extinction_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 40.3. Extinction Depth
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe and list extinction depths for sunlight penetration scheme (if applicable).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.from_atmopshere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Freshwater flux"
# "Virtual salt flux"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 41. Boundary Forcing --> Tracers --> Fresh Water Forcing
Properties of surface fresh water forcing in ocean
41.1. From Atmopshere
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of surface fresh water forcing from atmos in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.from_sea_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Freshwater flux"
# "Virtual salt flux"
# "Real salt flux"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 41.2. From Sea Ice
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of surface fresh water forcing from sea-ice in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.forced_mode_restoring')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 41.3. Forced Mode Restoring
Is Required: TRUE Type: STRING Cardinality: 1.1
Type of surface salinity restoring in forced mode (OMIP)
End of explanation
"""
|
Kismuz/btgym
|
examples/data_domain_api_intro.ipynb
|
lgpl-3.0
|
# Make data domain - top-level data structure:
domain = BTgymRandomDataDomain(
filename='../examples/data/DAT_ASCII_EURUSD_M1_2016.csv',
target_period={'days': 50, 'hours': 0, 'minutes': 0}, # use last 50 days of one year data as 'target domain'
# so we get [360 - holidays gaps - 50]
# days of train data (exclude holidays)
trial_params=dict(
start_weekdays={0, 1, 2, 3, 4, 5, 6},
sample_duration={'days': 30, 'hours': 0, 'minutes': 0}, # let each trial be 30 days long
start_00=True, # adjust trial beginning to the beginning of the day
time_gap={'days': 15, 'hours': 0}, # tolerance param
test_period={'days': 6, 'hours': 0, 'minutes': 0}, # from those 30 days reserve the last 6 days for trial test data
),
episode_params=dict(
start_weekdays={0, 1, 2, 3, 4, 5, 6},
sample_duration={'days': 0, 'hours': 23, 'minutes': 55}, # make every episode duration be 23:55
start_00=False, # do not adjust beginning time
time_gap={'days': 0, 'hours': 10},
),
log_level=INFO, # Set to DEBUG to see more output
)
"""
Explanation: Motivation
BTGym data is basically a discrete timeflow of equitype records. For the sake of
defining an episodic MDP over such data and setting a formal problem objective, it should
be somehow structured.
On the other hand, all train data can also be seen as a big external replay memory. Giving algorithms some degree of control over data sampling parameters can be beneficial for overall performance.
The idea is to pass all desirable sample properties as kwargs to the env.reset() method.
This notebook is a brief introduction to the API realisation of the formal definitions introduced in Section 1 (Data) of this draft: https://github.com/Kismuz/btgym/blob/master/docs/papers/btgym_formalism_draft.pdf
Objects described here can be thought of as nested data containers with built-in properties like sampling and splitting data into train and test subsets.
End of explanation
"""
domain.reset()
"""
Explanation: Here the Domain instance is defined such that it:
- Holds one year of 1min bars data;
- Splits data into source and target domains; the target domain gets the last 50 days of the year period;
- Defines each Trial to consist of 24 days of train data followed by 6 days of test data;
- Each Episode lasts at most 23:55.
Sampling parameters
For convenience, currently implemented control options are referenced in btgym.datafeed.base.DataSampleConfig:
DataSampleConfig = dict(
get_new=True,
sample_type=0,
b_alpha=1,
b_beta=1
)
...which simply mirrors base data sample() kwargs:
Args:
get_new (bool): sample new (True) or reuse (False) last made sample;
sample_type (int or bool): 0 (train) or 1 (test) - get sample from train or test data subsets
respectively.
b_alpha (float): beta-distribution sampling alpha > 0, valid for train episodes.
b_beta (float): beta-distribution sampling beta > 0, valid for train episodes.
environment reset kwargs are referenced in btgym.datafeed.base.EnvResetConfig:
EnvResetConfig = dict(
episode_config=DataSampleConfig,
trial_config=DataSampleConfig,
)
Sampling cycle
Reset the domain, which means: load and describe the data, and reset counters in the case of stateful classes such as BTgymSequentialDataDomain:
End of explanation
"""
trial= domain.sample(
get_new=True,
sample_type=0,
b_alpha=1,
b_beta=1
)
trial.reset()
"""
Explanation: Sample trial from source domain:
End of explanation
"""
episode = trial.sample(
get_new=True,
sample_type=1,
b_alpha=1,
b_beta=1
)
"""
Explanation: Here a new Trial object from the Source (or train) domain has been requested as a uniform sample from the entire source data interval.
If target period duration is set to 0:0:0, trying to get a target sample will raise an exception.
Need to reset before sampling as well:
Implementation detail:
during real BTgym operation the domain instance is held by btgym_data_server;
different Trial samples are sent to every environment instance, so the corresponding btgym_server can sample multiple episodes 'in place'.
Sample episode from trial test interval:
End of explanation
"""
data_feed = episode.to_btfeed()
"""
Explanation: Episodes can be sampled from Trial train or test subsets, just like Trials from Source/Target domains. Here a new Episode from the test subset of the Trial is requested. Since it is the test subset, the alpha and beta params don't count - it is always uniform (just a common sense heuristic, one can always change it).
Convert to bt.feed object:
End of explanation
"""
print('Got instance of: {}\nholding data: {}\nmetadata: {}'.
format(type(domain), domain.filename, domain.metadata))
print(' |\nsample()\n |')
print('got instance of: {}\nholding data: {}\nmetadata: {}'.
format(type(trial), trial.filename, trial.metadata))
print(' |\nsample()\n |')
print('got instance of: {}\nholding data: {}\nmetadata: {}'.
format(type(episode), episode.filename, episode.metadata))
print(' |\nto_btfeed()\n |')
print('got instance of: {}\n...holding default data line: {}, ready to be fed to bt.Cerebro.'.format(type(data_feed), data_feed))
"""
Explanation: Now print whole path:
End of explanation
"""
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
%matplotlib inline
"""
Explanation: Note:
The BTgymDataSet class used in most examples is simply a special case where we set Trial=Episode by definition.
This nested data structure is intended mostly for upcoming implementation of meta-learning and guided-policy-search algorithms.
Skewed sampling
End of explanation
"""
alpha, beta, size = 2, 1, 500
y = np.random.beta(alpha, beta, size)
plt.figure(0)
plt.suptitle(
'Beta-distribution with parameters: alpha={}, beta={}'.format(alpha, beta), fontsize=12
)
plt.grid(True)
fig = plt.hist(y, histtype='stepfilled', bins=500)
"""
Explanation: Beta-distribution quick recap:
End of explanation
"""
# Suppose Trial is ready, if no - use cells above to get one
trial.set_logger(level=13) # make it quiet
train_num_samples = 600
test_num_samples = 300
# Beta distribution parameters:
alpha = 3 # give priority to recent samples
beta = 1
train_start = np.zeros(train_num_samples)
test_start = np.zeros(test_num_samples)
# Sample train episode and get time of first record as some relative number:
for i in range(train_num_samples):
train_start[i] = trial.sample(True, 0, alpha, beta).data[0:1].index[0].value / 1e18
# Sample test episode and get time of first record as number:
for i in range(test_num_samples):
test_start[i] = trial.sample(True, 1, alpha, beta).data[0:1].index[0].value / 1e18
fig = plt.figure(1, figsize=(18,8))
plt.suptitle('Beginning times distribution for episodes within single trial', fontsize=16)
plt.xlabel('Relative trial time', fontsize=14)
plt.ylabel('Number of episodes', fontsize=14)
fig.text(0.25, 0.8, 'Train interval, alpha={}, beta={}'.format(alpha, beta), fontsize=14)
fig.text(0.75, 0.8, 'Test interval, uniform', fontsize=14)
fig.text(0.25, 0.5, 'Note holidays empty gaps', fontsize=12)
plt.grid(True)
fig = plt.hist(train_start, histtype='stepfilled', bins=train_num_samples)
fig = plt.hist(test_start, histtype='stepfilled', bins=test_num_samples)
"""
Explanation: What it looks like for an episode:
here we plot a histogram of first-record time-dates for a set of sampled test and train episodes.
End of explanation
"""
|
AllenDowney/ModSimPy
|
soln/chap11soln.ipynb
|
mit
|
# Configure Jupyter so figures appear in the notebook
%matplotlib inline
# Configure Jupyter to display the assigned value after an assignment
%config InteractiveShell.ast_node_interactivity='last_expr_or_assign'
# import functions from the modsim.py module
from modsim import *
"""
Explanation: Modeling and Simulation in Python
Chapter 11
Copyright 2017 Allen Downey
License: Creative Commons Attribution 4.0 International
End of explanation
"""
init = State(S=89, I=1, R=0)
"""
Explanation: SIR implementation
We'll use a State object to represent the number (or fraction) of people in each compartment.
End of explanation
"""
init /= sum(init)
"""
Explanation: To convert from number of people to fractions, we divide through by the total.
End of explanation
"""
def make_system(beta, gamma):
"""Make a system object for the SIR model.
beta: contact rate in days
gamma: recovery rate in days
returns: System object
"""
init = State(S=89, I=1, R=0)
init /= sum(init)
t0 = 0
t_end = 7 * 14
return System(init=init, t0=t0, t_end=t_end,
beta=beta, gamma=gamma)
"""
Explanation: make_system creates a System object with the given parameters.
End of explanation
"""
tc = 3 # time between contacts in days
tr = 4 # recovery time in days
beta = 1 / tc # contact rate in per day
gamma = 1 / tr # recovery rate in per day
system = make_system(beta, gamma)
"""
Explanation: Here's an example with hypothetical values for beta and gamma.
End of explanation
"""
def update_func(state, t, system):
"""Update the SIR model.
state: State with variables S, I, R
t: time step
system: System with beta and gamma
returns: State object
"""
s, i, r = state
infected = system.beta * i * s
recovered = system.gamma * i
s -= infected
i += infected - recovered
r += recovered
return State(S=s, I=i, R=r)
"""
Explanation: The update function takes the state during the current time step and returns the state during the next time step.
End of explanation
"""
state = update_func(init, 0, system)
"""
Explanation: To run a single time step, we call it like this:
End of explanation
"""
def run_simulation(system, update_func):
"""Runs a simulation of the system.
system: System object
update_func: function that updates state
returns: State object for final state
"""
state = system.init
for t in linrange(system.t0, system.t_end):
state = update_func(state, t, system)
return state
"""
Explanation: Now we can run a simulation by calling the update function for each time step.
End of explanation
"""
run_simulation(system, update_func)
"""
Explanation: The result is the state of the system at t_end
End of explanation
"""
# Solution
tc = 4 # time between contacts in days
tr = 5 # recovery time in days
beta = 1 / tc # contact rate in per day
gamma = 1 / tr # recovery rate in per day
system = make_system(beta, gamma)
s0 = system.init.S
final = run_simulation(system, update_func)
s_end = final.S
s0 - s_end
"""
Explanation: Exercise Suppose the time between contacts is 4 days and the recovery time is 5 days. After 14 weeks, how many students, total, have been infected?
Hint: what is the change in S between the beginning and the end of the simulation?
End of explanation
"""
def run_simulation(system, update_func):
"""Runs a simulation of the system.
Add three Series objects to the System: S, I, R
system: System object
update_func: function that updates state
"""
S = TimeSeries()
I = TimeSeries()
R = TimeSeries()
state = system.init
t0 = system.t0
S[t0], I[t0], R[t0] = state
for t in linrange(system.t0, system.t_end):
state = update_func(state, t, system)
S[t+1], I[t+1], R[t+1] = state
return S, I, R
"""
Explanation: Using TimeSeries objects
If we want to store the state of the system at each time step, we can use one TimeSeries object for each state variable.
End of explanation
"""
tc = 3 # time between contacts in days
tr = 4 # recovery time in days
beta = 1 / tc # contact rate in per day
gamma = 1 / tr # recovery rate in per day
system = make_system(beta, gamma)
S, I, R = run_simulation(system, update_func)
"""
Explanation: Here's how we call it.
End of explanation
"""
def plot_results(S, I, R):
"""Plot the results of a SIR model.
S: TimeSeries
I: TimeSeries
R: TimeSeries
"""
plot(S, '--', label='Susceptible')
plot(I, '-', label='Infected')
plot(R, ':', label='Recovered')
decorate(xlabel='Time (days)',
ylabel='Fraction of population')
"""
Explanation: And then we can plot the results.
End of explanation
"""
plot_results(S, I, R)
savefig('figs/chap11-fig01.pdf')
"""
Explanation: Here's what they look like.
End of explanation
"""
def run_simulation(system, update_func):
"""Runs a simulation of the system.
system: System object
update_func: function that updates state
returns: TimeFrame
"""
frame = TimeFrame(columns=system.init.index)
frame.row[system.t0] = system.init
for t in linrange(system.t0, system.t_end):
frame.row[t+1] = update_func(frame.row[t], t, system)
return frame
"""
Explanation: Using a DataFrame
Instead of making three TimeSeries objects, we can use one DataFrame.
We have to use row to select rows, rather than columns. Pandas then does the right thing, matching up the state variables with the columns of the DataFrame.
End of explanation
"""
tc = 3 # time between contacts in days
tr = 4 # recovery time in days
beta = 1 / tc # contact rate in per day
gamma = 1 / tr # recovery rate in per day
system = make_system(beta, gamma)
results = run_simulation(system, update_func)
results.head()
"""
Explanation: Here's how we run it, and what the result looks like.
End of explanation
"""
plot_results(results.S, results.I, results.R)
"""
Explanation: We can extract the results and plot them.
End of explanation
"""
# Solution
tc = 4 # time between contacts in days
tr = 5 # recovery time in days
beta = 1 / tc # contact rate in per day
gamma = 1 / tr # recovery rate in per day
system = make_system(beta, gamma)
results = run_simulation(system, update_func)
plot_results(results.S, results.I, results.R)
"""
Explanation: Exercises
Exercise Suppose the time between contacts is 4 days and the recovery time is 5 days. Simulate this scenario for 14 weeks and plot the results.
End of explanation
"""
|
AaronCWong/phys202-2015-work
|
assignments/assignment03/NumpyEx04.ipynb
|
mit
|
import numpy as np
%matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sns
"""
Explanation: Numpy Exercise 4
Imports
End of explanation
"""
import networkx as nx
K_5=nx.complete_graph(5)
nx.draw(K_5)
"""
Explanation: Complete graph Laplacian
In discrete mathematics a Graph is a set of vertices or nodes that are connected to each other by edges or lines. If those edges don't have directionality, the graph is said to be undirected. Graphs are used to model social and communications networks (Twitter, Facebook, Internet) as well as natural systems such as molecules.
A Complete Graph, $K_n$ on $n$ nodes has an edge that connects each node to every other node.
Here is $K_5$:
End of explanation
"""
def complete_deg(n):
    """Return the integer valued degree matrix D for the complete graph K_n."""
    # Every node in K_n touches the other n-1 nodes, so D is (n-1) on the
    # diagonal and zero everywhere else.
    return (n - 1) * np.eye(n, dtype=int)
D = complete_deg(5)
assert D.shape==(5,5)
assert D.dtype==np.dtype(int)
assert np.all(D.diagonal()==4*np.ones(5))
assert np.all(D-np.diag(D.diagonal())==np.zeros((5,5),dtype=int))
"""
Explanation: The Laplacian Matrix is a matrix that is extremely important in graph theory and numerical analysis. It is defined as $L=D-A$, where $D$ is the degree matrix and $A$ is the adjacency matrix. For the purpose of this problem you don't need to understand the details of these matrices, although their definitions are relatively simple.
The degree matrix for $K_n$ is an $n \times n$ diagonal matrix with the value $n-1$ along the diagonal and zeros everywhere else. Write a function to compute the degree matrix for $K_n$ using NumPy.
End of explanation
"""
def complete_adj(n):
"""Return the integer valued adjacency matrix A for the complete graph K_n."""
# YOUR CODE HERE
raise NotImplementedError()
A = complete_adj(5)
assert A.shape==(5,5)
assert A.dtype==np.dtype(int)
assert np.all(A+np.eye(5,dtype=int)==np.ones((5,5),dtype=int))
"""
Explanation: The adjacency matrix for $K_n$ is an $n \times n$ matrix with zeros along the diagonal and ones everywhere else. Write a function to compute the adjacency matrix for $K_n$ using NumPy.
End of explanation
"""
# YOUR CODE HERE
raise NotImplementedError()
"""
Explanation: Use NumPy to explore the eigenvalues or spectrum of the Laplacian L of $K_n$. What patterns do you notice as $n$ changes? Create a conjecture about the general Laplace spectrum of $K_n$.
End of explanation
"""
|
numerical-mooc/assignment-bank-2015
|
cdigangi8/Managing_Epidemics_Model.ipynb
|
mit
|
%matplotlib inline
import numpy
from matplotlib import pyplot
from matplotlib import rcParams
rcParams['font.family'] = 'serif'
rcParams['font.size'] = 16
"""
Explanation: Copyright (c)2015 DiGangi, C.
Managing Epidemics Through Mathematical Modeling
This lesson will examine the spread of an epidemic over time using Euler's method. The model is a system of non-linear ODEs which is based on the classic Susceptible, Infected, Recovered (SIR) model. This model introduces a new parameter to include vaccinations. We will examine the various parameters of the model and define conditions necessary to eradicate the epidemic.
In this module we will also introduce ipywidgets, an IPython library that allows you to add widgets to your notebooks and make them interactive! We will be using widgets to vary our parameters and see how changing different parameters affects the results of the model. This is a great technique for making quick and easy comparisons because you don't have to re-run your cell for the widget to make changes to the graph.
Introducing Model Parameters
The most important part of understanding any model is understanding the nomenclature that is associated with it. Please review the below terms carefully and make sure you understand what each parameter represents.
$S$: Susceptible Individuals
$V$: Vaccinated Individuals
$I$: Infected Individuals
$R$: Recovered Individuals with Immunity (Cannot get infected again)
$p$: Fraction of individuals who are vaccinated at birth
$e$: Fraction of the vaccinated individuals that are successfully vaccinated
$\mu$: Average Death Rate
$\beta$: Contact Rate (Rate at which Susceptibles come into contact with Infected)
$\gamma$: Recovery Rate
$R_0$: Basic Reproduction Number
$N$: Total Population ($S + V + I + R$)
Basic SVIR Model
Model Assumptions
The model will make the following assumptions:
The population N is held constant
The birth rate and death rate are equal
The death rate is the same across all individuals (Infected do not have higher death rate)
A susceptible individual that comes in contact with an infected automatically becomes infected
Once an individual has recovered they are forever immune and not reintroduced into the susceptible population
Vaccination does not wear off (vaccinated cannot become infected)
Susceptible Equation
Let's examine the model by component. First we will break down the equation for susceptible individuals. In order to find the rate of change of susceptible individuals we must calculate the number of newborns that are not vaccinated:
$$(1-ep) \mu N$$
The number of Susceptible Individuals that become infected:
$$ \beta IS_{infections}$$
and finally the number of Susceptibles that die:
$$ \mu S_{deaths}$$
Therefore the change in Susceptible Indivduals becomes:
$$\frac{dS}{dt} = (1-ep) \mu N - \beta IS - \mu S$$
Vaccinated Equation
Now examining the vaccinated individuals we start with the newborns that are vaccinated:
$$ep \mu N$$
And the number of vaccinated individuals that die:
$$\mu V$$
The change in vaccinated individuals becomes:
$$\frac{dV}{dt} = ep \mu N - \mu V$$
Infected Equation
For the infected individuals we start with the number of Susceptible individuals that are exposed and become infected:
$$\beta IS_{infections}$$
Next we need the number of Infected individuals that recovered:
$$\gamma I_{recoveries}$$
Finally we examine the infected who die:
$$\mu I_{deaths}$$
Putting this all together we get the following equation:
$$\frac{dI}{dt} = \beta IS - \gamma I - \mu I$$
Recovered Equation
The number of recovered individuals first relies on the infected who recover:
$$\gamma I$$
Next it depends on the recovered individuals who die:
$$\mu R$$
Putting this together yields the equation:
$$\frac{dR}{dt} = \gamma I - \mu R$$
Model Summary
The complete model is as follows:
$$\frac{dS}{dt} = (1-ep) \mu N - \beta IS - \mu S$$
$$\frac{dV}{dt} = ep \mu N - \mu V$$
$$\frac{dI}{dt} = \beta IS - \gamma I - \mu I$$
$$\frac{dR}{dt} = \gamma I - \mu R$$
This is a very simplified model, given the complexities of infectious diseases.
Implementing Numerical Solution with Euler!
For the numerical solution we will be using Euler's method since we are only dealing with time derivatives. Just to review, for Euler's method we replace the time derivative by the following:
$$\frac{dS}{dt} = \frac{S^{n+1} - S^n}{\Delta t}$$
where n represents the discretized time.
Therefore after we discretize our model we have:
$$\frac{S^{n+1} - S^n}{\Delta t} = (1-ep) \mu N - \beta IS^n - \mu S^n$$
$$\frac{V^{n+1} - V^n}{\Delta t} = ep \mu N - \mu V^n$$
$$\frac{I^{n+1} - I^n}{\Delta t} = \beta I^nS^n - \gamma I^n - \mu I^n$$
$$\frac{R^{n+1} - R^n}{\Delta t} = \gamma I^n - \mu R^n$$
And now solving for the value at the next time step yields:
$$S^{n+1} = S^n + \Delta t \left((1-ep) \mu N - \beta IS^n - \mu S^n \right)$$
$$V^{n+1} = V^n + \Delta t ( ep \mu N - \mu V^n)$$
$$I^{n+1} = I^n + \Delta t (\beta I^nS^n - \gamma I^n - \mu I^n)$$
$$R^{n+1} = R^n + \Delta t ( \gamma I^n - \mu R^n)$$
If we want to implement this in our code we can build arrays to hold our system of equations. Assuming u is our solution vector and f(u) is our right-hand side, so that each Euler step is $u^{n+1} = u^n + \Delta t \, f(u^n)$:
\begin{align}
u & = \begin{pmatrix} S \\ V \\ I \\ R \end{pmatrix} & f(u) & = \begin{pmatrix} (1-ep) \mu N - \beta I^n S^n - \mu S^n \\ ep \mu N - \mu V^n \\ \beta I^n S^n - \gamma I^n - \mu I^n \\ \gamma I^n - \mu R^n \end{pmatrix}.
\end{align}
Solve!
Now we will implement this solution below. First we will import the necessary python libraries
End of explanation
"""
def f(u):
"""Returns the right-hand side of the epidemic model equations.
Parameters
----------
u : array of float
array containing the solution at time n.
u is passed in and distributed to the different components by calling the individual value in u[i]
Returns
-------
du/dt : array of float
array containing the RHS given u.
"""
S = u[0]
V = u[1]
I = u[2]
R = u[3]
return numpy.array([(1-e*p)*mu*N - beta*I*S - mu*S,
e*p*mu*N - mu*V,
beta*I*S - gamma*I - mu*I,
gamma*I - mu*R])
"""
Explanation: Let us first define our function $f(u)$ that will calculate the right hand side of our model. We will pass in the array $u$ which contains our different populations and set them individually in the function:
End of explanation
"""
def euler_step(u, f, dt):
"""Returns the solution at the next time-step using Euler's method.
Parameters
----------
u : array of float
solution at the previous time-step.
f : function
function to compute the right hand-side of the system of equation.
dt : float
time-increment.
Returns
-------
approximate solution at the next time step.
"""
return u + dt * f(u)
"""
Explanation: Next we will define the euler solution as a function so that we can call it as we iterate through time.
End of explanation
"""
e = .1 #vaccination success rate
p = .75 # newborn vaccination rate
mu = .02 # death rate
beta = .002 # contact rate
gamma = .5 # Recovery rate
S0 = 100 # Initial Susceptibles
V0 = 50 # Initial Vaccinated
I0 = 75 # Initial Infected
R0 = 10 # Initial Recovered
N = S0 + I0 + R0 + V0 #Total population (remains constant)
"""
Explanation: Now we are ready to set up our initial conditions and solve! We will use a simplified population to start with.
End of explanation
"""
T = 365 # Iterate over 1 year
dt = 1 # 1 day
N = int(T/dt)+1 # Total number of iterations
t = numpy.linspace(0, T, N) # Time discretization
u = numpy.zeros((N,4)) # Initialize the solution array with zero values
u[0] = [S0, V0, I0, R0] # Set the initial conditions in the solution array
for n in range(N-1): # Loop through time steps
u[n+1] = euler_step(u[n], f, dt) # Get the value for the next time step using our euler_step function
"""
Explanation: Now we will implement our discretization using a for loop to iterate over time. We create a numpy array $u$ that will hold all of our values at each time step for each component (SVIR). We will use dt of 1 to represent 1 day and iterate over 365 days.
End of explanation
"""
pyplot.figure(figsize=(15,5))
pyplot.grid(True)
pyplot.xlabel(r'time', fontsize=18)
pyplot.ylabel(r'population', fontsize=18)
pyplot.xlim(0, 500)
pyplot.title('Population of SVIR model over time', fontsize=18)
pyplot.plot(t,u[:,0], color= 'red', lw=2, label = 'Susceptible');
pyplot.plot(t,u[:,1], color='green', lw=2, label = 'Vaccinated');
pyplot.plot(t,u[:,2], color='black', lw=2, label = 'Infected');
pyplot.plot(t,u[:,3], color='blue', lw=2, label = 'Recovered');
pyplot.legend();
"""
Explanation: Now we use python's pyplot library to plot all of our results on the same graph:
End of explanation
"""
#Changing the following parameters
e = .5 #vaccination success rate
gamma = .1 # Recovery rate
S0 = 100 # Initial Susceptibles
V0 = 50 # Initial Vaccinated
I0 = 75 # Initial Infected
R0 = 10 # Initial Recovered
N = S0 + I0 + R0 + V0 #Total population (remains constant)
T = 365 # Iterate over 1 year
dt = 1 # 1 day
N = int(T/dt)+1 # Total number of iterations
t = numpy.linspace(0, T, N) # Time discretization
u = numpy.zeros((N,4)) # Initialize the solution array with zero values
u[0] = [S0, V0, I0, R0] # Set the initial conditions in the solution array
for n in range(N-1): # Loop through time steps
u[n+1] = euler_step(u[n], f, dt) # Get the value for the next time step using our euler_step function
pyplot.figure(figsize=(15,5))
pyplot.grid(True)
pyplot.xlabel(r'time', fontsize=18)
pyplot.ylabel(r'population', fontsize=18)
pyplot.xlim(0, 500)
pyplot.title('Population of SVIR model over time', fontsize=18)
pyplot.plot(t,u[:,0], color= 'red', lw=2, label = 'Susceptible');
pyplot.plot(t,u[:,1], color='green', lw=2, label = 'Vaccinated');
pyplot.plot(t,u[:,2], color='black', lw=2, label = 'Infected');
pyplot.plot(t,u[:,3], color='blue', lw=2, label = 'Recovered');
pyplot.legend();
"""
Explanation: The graph is interesting because it exhibits some oscillating behavior. You can see that under the given parameters, the number of infected people drops within the first few days. Notice that the susceptible individuals grow until about 180 days. The return of infection is a result of too many susceptible people in the population. The number of infected looks like it goes to zero, but it never quite reaches zero. Therefore, through the $\beta IS$ term, once $S$ grows large enough the infection starts to be reintroduced into the population.
If we want to examine how the population changes under new conditions, we could re-run the below cell with new parameters:
End of explanation
"""
from ipywidgets import interact, HTML, FloatSlider
from IPython.display import clear_output, display
"""
Explanation: However, every time we want to examine new parameters we have to go back and change the values within the cell and re run our code. This is very cumbersome if we want to examine how different parameters affect our outcome. If only there were some solution we could implement that would allow us to change parameters on the fly without having to re-run our code...
ipywidgets!
Well there is a solution we can implement! Using a python library called ipywidgets we can build interactive widgets into our notebook that allow for user interaction. If you do not have ipywidgets installed, you can install it using conda by simply going to the terminal and typing:
conda install ipywidgets
Now we will import our desired libraries
End of explanation
"""
def z(x):
print(x)
interact(z, x=True) # Checkbox
interact(z, x=10) # Slider
interact(z, x='text') # Text entry
"""
Explanation: The below cell is a quick view of a few different interactive widgets that are available. Notice that we must define a function (in this case $z$) where we call the function $z$ and parameter $x$, where $x$ is passed into the function $z$.
End of explanation
"""
def f(u, init):
"""Returns the right-hand side of the epidemic model equations.
Parameters
----------
u : array of float
array containing the solution at time n.
u is passed in and distributed to the different components by calling the individual value in u[i]
init : array of float
array containing the parameters for the model
Returns
-------
du/dt : array of float
array containing the RHS given u.
"""
S = u[0]
V = u[1]
I = u[2]
R = u[3]
p = init[0]
e = init[1]
mu = init[2]
beta = init[3]
gamma = init[4]
return numpy.array([(1-e*p)*mu*N - beta*I*S - mu*S,
e*p*mu*N - mu*V,
beta*I*S - gamma*I - mu*I,
gamma*I - mu*R])
"""
Explanation: Redefining the Model to Accept Parameters
In order to use ipywidgets and pass parameters in our functions we have to slightly redefine our functions to accept these changing parameters. This will ensure that we don't have to re-run any code and our graph will update as we change parameters!
We will start with our function $f$. This function uses our initial parameters $p$, $e$, $\mu$, $\beta$, and $\gamma$. Previously, we used the global definition of these variables so we didn't include them inside the function. Now we will be passing in both our array $u$ (which holds the different populations) and a new array called $init$ (which holds our initial parameters).
End of explanation
"""
def euler_step(u, f, dt, init):
return u + dt * f(u, init)
"""
Explanation: Now we will change our euler_step function, which calls our function $f$, to include the new $init$ array that we are passing.
End of explanation
"""
#Build slider for each parameter desired
pSlider = FloatSlider(description='p', min=0, max=1, step=0.1)
eSlider = FloatSlider(description='e', min=0, max=1, step=0.1)
muSlider = FloatSlider(description='mu', min=0, max=1, step=0.005)
betaSlider = FloatSlider(description='beta', min=0, max=.01, step=0.0005)
gammaSlider = FloatSlider(description='gamma', min=0, max=1, step=0.05)
#Update function will update the plotted graph every time a slider is changed
def update():
"""Returns a graph of the new results for a given slider parameter change.
Parameters
----------
p : float value of slider widget
e : float value of slider widget
mu : float value of slider widget
beta : float value of slider widget
gamma : float value of slider widget
Returns
-------
Graph representing new populations
"""
#the following parameters use slider.value to get the value of the given slider
p = pSlider.value
e = eSlider.value
mu = muSlider.value
beta = betaSlider.value
gamma = gammaSlider.value
#inital population
S0 = 100
V0 = 50
I0 = 75
R0 = 10
N = S0 + I0 + R0 + V0
#Iteration parameters
T = 365
dt = 1
N = int(T/dt)+1
t = numpy.linspace(0, T, N)
u = numpy.zeros((N,4))
u[0] = [S0, V0, I0, R0]
#Array of parameters
init = numpy.array([p,e,mu,beta,gamma])
for n in range(N-1):
u[n+1] = euler_step(u[n], f, dt, init)
#Plot of population with given slider parameters
pyplot.figure(figsize=(15,5))
pyplot.grid(True)
pyplot.xlabel(r'time', fontsize=18)
pyplot.ylabel(r'population', fontsize=18)
pyplot.xlim(0, 500)
pyplot.title('Population of SVIR model over time', fontsize=18)
pyplot.plot(t,u[:,0], color= 'red', lw=2, label = 'Susceptible');
pyplot.plot(t,u[:,1], color='green', lw=2, label = 'Vaccinated');
pyplot.plot(t,u[:,2], color='black', lw=2, label = 'Infected');
pyplot.plot(t,u[:,3], color='blue', lw=2, label = 'Recovered');
pyplot.legend();
#Clear the output otherwise it will create a new graph every time so you will end up with multiple graphs
clear_output(True) #This ensures it recreates the data on the initial graph
#Run the update function on slider values change
pSlider.on_trait_change(update, 'value')
eSlider.on_trait_change(update, 'value')
muSlider.on_trait_change(update, 'value')
betaSlider.on_trait_change(update, 'value')
gammaSlider.on_trait_change(update, 'value')
display(pSlider, eSlider, muSlider, betaSlider, gammaSlider) #Display sliders
update() # Run initial function
"""
Explanation: In order to make changes to our parameters, we will use slider widgets. Now that we have our functions set up, we will build another function which we will use to update the graph as we move our slider parameters. First we must build the sliders for each parameter. Using the FloatSlider method from ipywidgets, we can specify the min and max for our sliders and a step to increment.
Next we build the update function which will take in the values of the sliders as they change and re-plot the graph. The function follows the same logic as before with the only difference being the changing parameters.
Finally we specify the behavior of the sliders as they change values and call our update function.
End of explanation
"""
Disease = [{'name': "Ebola", 'p': 0, 'e': 0, 'mu': .04, 'beta': .005, 'gamma': 0}, \
{'name': "Measles", 'p': .9, 'e': .9, 'mu': .02, 'beta': .002, 'gamma': .9}, \
{'name': "Tuberculosis", 'p': .5, 'e': .2, 'mu': .06, 'beta': .001, 'gamma': .3}]
#Example
def z(x):
print(x)
interact(z, x = 'Text')
"""
Explanation: Notice that the graph starts with all parameters equal to zero. Unfortunately we cannot set the initial value of the slider. We can work around this using conditional statements to see if the slider values are equal to zero, then use different parameters.
Notice that as you change the parameters the graph starts to come alive! This allows you to quickly compare how different parameters affect the results of our model!
Dig deeper?
Using the ipywidget library, create a new function that allows for user input. Using the python array of objects below, which contains various diseases and their initial parameters, have the user type in one of the disease names and return the graph corresponding to that disease! You can use the ipywidget text box to take in the value from the user and then pass that value to a function that will call out that disease from the object below!
End of explanation
"""
from IPython.core.display import HTML
css_file = 'numericalmoocstyle.css'
HTML(open(css_file, "r").read())
"""
Explanation: References
Scherer, A. and McLean, A. "Mathematical Models of Vaccination", British Medical Bulletin Volume 62 Issue 1, 2015 Oxford University Press. Online
Barba, L., "Practical Numerical Methods with Python" George Washington University
For a good explanation of some of the simpler models and overview of parameters, visit this Wiki Page
Slider tutorial posted on github
End of explanation
"""
|
Neuroglycerin/neukrill-net-work
|
notebooks/3 convolutional layers (96-96-48 channels) 2 fully connected (512-512 units).ipynb
|
mit
|
print('## Model structure summary\n')
print(model)
params = model.get_params()
n_params = {p.name : p.get_value().size for p in params}
total_params = sum(n_params.values())
print('\n## Number of parameters\n')
print(' ' + '\n '.join(['{0} : {1} ({2:.1f}%)'.format(k, v, 100.*v/total_params)
for k, v in sorted(n_params.items(), key=lambda x: x[0])]))
print('\nTotal : {0}'.format(total_params))
"""
Explanation: Model summary
Run performed with a model with three convolutional layers, two fully connected layers and a final softmax layer, with 64 channels in each of the first two convolutional layers and 48 in the final one. The fully connected layers have 512 units each. Dropout is applied in the first (larger) fully connected layer (dropout probability 0.5), and the dataset is randomly augmented with uniform random rotations, shunting and scaling.
End of explanation
"""
tr = np.array(model.monitor.channels['valid_y_y_1_nll'].time_record) / 3600.
fig = plt.figure(figsize=(12,8))
ax1 = fig.add_subplot(111)
ax1.plot(model.monitor.channels['valid_y_y_1_nll'].val_record)
ax1.plot(model.monitor.channels['train_y_y_1_nll'].val_record)
ax1.set_xlabel('Epochs')
ax1.legend(['Valid', 'Train'])
ax1.set_ylabel('NLL')
ax1.set_ylim(0., 5.)
ax1.grid(True)
ax2 = ax1.twiny()
ax2.set_xticks(np.arange(0,tr.shape[0],20))
ax2.set_xticklabels(['{0:.2f}'.format(t) for t in tr[::20]])
ax2.set_xlabel('Hours')
print("Minimum validation set NLL {0}".format(min(model.monitor.channels['valid_y_y_1_nll'].val_record)))
"""
Explanation: Train and valid set NLL trace
End of explanation
"""
pv = get_weights_report(model=model)
img = pv.get_img()
img = img.resize((8*img.size[0], 8*img.size[1]))
img_data = io.BytesIO()
img.save(img_data, format='png')
display(Image(data=img_data.getvalue(), format='png'))
"""
Explanation: Visualising first layer weights
Quite nice features appear to have been learned, with some kernels apparently learned at various rotations. Some quite small-scale features appear to have been picked up too.
End of explanation
"""
plt.plot(model.monitor.channels['learning_rate'].val_record)
"""
Explanation: Learning rate
Initially a linear-decay learning rate schedule was used together with a monitor-based adjuster. It turns out these don't play well together, as the linear-decay schedule overwrites any adjustments made by the monitor-based extension at the next epoch. After resuming, the initial learning rate was manually reduced and the learning rate schedule was set exclusively with the monitor-based adjuster.
End of explanation
"""
h1_W_up_norms = np.array([float(v) for v in model.monitor.channels['mean_update_h1_W_kernel_norm_mean'].val_record])
h1_W_norms = np.array([float(v) for v in model.monitor.channels['valid_h1_kernel_norms_mean'].val_record])
plt.plot(h1_W_norms / h1_W_up_norms)
plt.show()
plt.plot(model.monitor.channels['valid_h1_kernel_norms_mean'].val_record)
plt.plot(model.monitor.channels['valid_h1_kernel_norms_max'].val_record)
h2_W_up_norms = np.array([float(v) for v in model.monitor.channels['mean_update_h2_W_kernel_norm_mean'].val_record])
h2_W_norms = np.array([float(v) for v in model.monitor.channels['valid_h2_kernel_norms_mean'].val_record])
plt.plot(h2_W_norms / h2_W_up_norms)
plt.show()
plt.plot(model.monitor.channels['valid_h2_kernel_norms_mean'].val_record)
plt.plot(model.monitor.channels['valid_h2_kernel_norms_max'].val_record)
h3_W_up_norms = np.array([float(v) for v in model.monitor.channels['mean_update_h3_W_kernel_norm_mean'].val_record])
h3_W_norms = np.array([float(v) for v in model.monitor.channels['valid_h3_kernel_norms_mean'].val_record])
plt.plot(h3_W_norms / h3_W_up_norms)
plt.show()
plt.plot(model.monitor.channels['valid_h3_kernel_norms_mean'].val_record)
plt.plot(model.monitor.channels['valid_h3_kernel_norms_max'].val_record)
h4_W_up_norms = np.array([float(v) for v in model.monitor.channels['mean_update_h4_W_col_norm_mean'].val_record])
h4_W_norms = np.array([float(v) for v in model.monitor.channels['valid_h4_col_norms_mean'].val_record])
plt.plot(h4_W_norms / h4_W_up_norms)
plt.show()
plt.plot(model.monitor.channels['valid_h4_col_norms_mean'].val_record)
plt.plot(model.monitor.channels['valid_h4_col_norms_max'].val_record)
h5_W_up_norms = np.array([float(v) for v in model.monitor.channels['mean_update_h5_W_col_norm_mean'].val_record])
h5_W_norms = np.array([float(v) for v in model.monitor.channels['valid_h5_col_norms_mean'].val_record])
plt.plot(h5_W_norms / h5_W_up_norms)
plt.show()
plt.plot(model.monitor.channels['valid_h5_col_norms_mean'].val_record)
plt.plot(model.monitor.channels['valid_h5_col_norms_max'].val_record)
"""
Explanation: Update norm monitoring
The ratio of update norms to parameter norms across epochs is plotted for the different layers to give an idea of how the learning rate schedule is performing.
End of explanation
"""
|
amcdawes/QMlabs
|
Chapter 1 - Mathematical Preliminaries.ipynb
|
mit
|
import matplotlib.pyplot as plt
from numpy import array, sin, sqrt, dot, outer
%matplotlib inline
"""
Explanation: Chapter 1
An introduction to the Jupyter Notebook and some practice with probability ideas from Chapter 1.
1.1 Probability
1.1.1 Moments of Measured Data
The Jupyter Notebook has two primary types of cells "Markdown" cells for text (like this one) and "Code" cells for running python code. The cell below this one is a code cell that loads the plotting functions into the plt namespace and loads several functions from the numpy library. The last line requests that all plots show up inline in the notebook (instead of in other windows or as files on your computer).
End of explanation
"""
x = array([1,2,3])
x.sum()
x.mean()
"""
Explanation: Arrays of data, and their average and sum in Python. We use some definitions from numpy. Notice the way these operators can be applied to arrays (with the "." operator).
End of explanation
"""
x.sum()/len(x)
"""
Explanation: The formal definition (and to make sure we match with the book) is to take the sum and divide by the number of items in the sample:
End of explanation
"""
x**2
(x**2).sum() # Note the parentheses!
(x**2).sum()/len(x)
(x**2).mean()
"""
Explanation: Higher order moments
Operate on the sample (Python does this element-by-element), then do the same thing as above. You may be surprised that raising to a power is "**" instead of "^". This is a difference between Python and some other languages, just something to keep in mind!
End of explanation
"""
sin(x)
sin(x).sum()/len(x)
sin(x).mean()
"""
Explanation: It works for functions too
sin(x) will find the sin of each element in the array x:
End of explanation
"""
x.var() # Variance
x.std() # Standard Deviation
"""
Explanation: Variance
End of explanation
"""
x.std()**2 == x.var() # Related by a square root
"""
Explanation: We can test if two quantities are equal with the == operator. This is not the same as =, since = is an assignment operator. This will trip you up if you are new to programming, but you'll get over it.
End of explanation
"""
x_m = array([9,5,25,23,10,22,8,8,21,20])
x_m.mean()
(x_m**2).sum()/len(x_m)
sqrt(281.3 - (15.1)**2)
x_m.std()
sqrt(281.3 - (15.1)**2) == x_m.std()
"""
Explanation: Example 1.1
End of explanation
"""
n, bins, patches = plt.hist(x_m,bins=7)
"""
Explanation: Close enough!
1.1.2 Probability
Example 1.2
This is an illustration of how to implement the histogram from Example 1.2 in the text. Note the use of setting the number of bins. The hist command will pick for you, and you should try other values to see the impact. There is no one correct value, but the too many bins doesn't illustrate clusters of data, and too-few bins tends to oversimplify the data.
End of explanation
"""
# an array of the counts in each bin:
n
n/10.0*array([6,9,12,15,18,21,24]) # counts times each bin-center value
# sum of the last cell should be the mean:
sum(_)
n/10.0*array([6,9,12,15,18,21,24])**2 # counts times each bin-center value
# sum of the last cell should be the second moment:
sum(_)
"""
Explanation: The hist function has several possible arguments, we use bins=7 to match the example.
End of explanation
"""
rvec = array([1,2]) # A row vector
rvec
cvec = array([[1],[2]]) # A column vector
cvec
cvec*rvec # Actually the outer product:
rvec*cvec # still the outer product... so this simple `*` doesn't respect the rules of linear algebra!
"""
Explanation: Both of these results are close to the previous value, but not exact. Remember, the histogram is a representation of the data and the agreement will improve for larger data sets.
1.2 Linear Algebra
1.2.1 Vectors and Basis sets
We'll use the qutip library later, even though this is all just linear algebra. For now, try using standard-python for vector math:
End of explanation
"""
dot(rvec,cvec)
outer(cvec,rvec)
dot(cvec,rvec) # This doesn't work, because `dot` knows what shape the vectors should be
"""
Explanation: The dot function properly computes the dot product that we know and love from workshop physics:
End of explanation
"""
|
rtidatascience/connected-nx-tutorial
|
notebooks/3. Visualizing Graphs.ipynb
|
mit
|
import networkx as nx
import matplotlib.pyplot as plt
%matplotlib inline
GA = nx.read_gexf('../data/ga_graph.gexf')
print(nx.info(GA))
"""
Explanation: Visualizing Graphs
Basic NetworkX & Matplotlib (nx.draw)
Detailed Plotting w/ Networkx & Matplotlib
Plotting attributes
<center><img src="https://i2.wp.com/flowingdata.com/wp-content/uploads/2015/07/Disney-strategy-chart-from-1957.jpg" width="540"></center>
Visualizing networks is a complicated problem -- how do you position the nodes and edges so that no nodes overlap, connected nodes are near each other, and none of the labels overlap? Typically we use what is called a layout to plot or visualize networks. A layout is an algorithm used to position nodes and edges on a plot automatically in aesthetically and informationally satisfactory ways.
There are several different layout algorithms, but the most common is a force-directed layout. These layout algorithms are based off of physical repulsion and spring systems. In general, the rule for force-directed layouts is: repel all nodes, and model connections between nodes as 'springs', with the result that more connected nodes will be closer together.
One important issue is that each layout typically has random initial conditions. Running a plot function twice will return two different plots, both following the rules of the algorithm, but differing due to the initial conditions of the layout.
End of explanation
"""
# Easiest Way
nx.draw(GA, with_labels=True)
# Graph Layouts are random...
nx.draw(GA, with_labels=True)
"""
Explanation: NetworkX with Matplotlib
Pros:
- Easy
- Some customization
Cons:
- Looks "dated" (not great for publication / productizing)
- Not interactive
- Few Layout Options
End of explanation
"""
# Some matplotlib options
plt.figure(figsize=(8,8))
plt.axis('off')
# generate the layout and place nodes and edges
layout = nx.circular_layout(GA)
# plot nodes, labels, and edges with options
nx.draw_networkx_nodes(GA, pos=layout, node_size=500, alpha=0.8)
nx.draw_networkx_edges(GA, pos=layout, width=3, style='dotted',
edge_color='orange')
nx.draw_networkx_labels(GA, pos=layout, font_size=15)
plt.show()
"""
Explanation: NetworkX Detailed Plotting
End of explanation
"""
from seaborn import color_palette, set_style, palplot
dead_or_alive = {
'karev' : 'alive',
'hank' : 'alive',
'izzie' : 'alive',
'mrs. seabury' : 'alive',
'ben' : 'alive',
'grey' : 'alive',
'sloan' : 'dead',
'steve' : 'alive',
'kepner' : 'alive',
'colin' : 'alive',
'avery' : 'alive',
'bailey' : 'alive',
'chief' : 'alive',
'preston' : 'alive',
'ellis grey' : 'dead',
"o'malley" : 'dead',
'lexi' : 'dead',
'torres' : 'alive',
'yang' : 'alive',
'addison' : 'alive',
'olivia' : 'alive',
'altman' : 'alive',
'denny' : 'dead',
'arizona' : 'alive',
'adele' : 'dead',
'derek' : 'dead',
'nancy' : 'alive',
'thatch grey' : 'alive',
'susan grey' : 'dead',
'owen' : 'alive',
'tucker' : 'alive',
'finn' : 'alive'
}
"""
Explanation: Detailed Plotting with Colors by Attribute
End of explanation
"""
# apply the dead_or_alive mapping of desceased characters
nx.set_node_attributes(GA, 'status', dead_or_alive)
def create_color_map(G, attribute, seaborn_palette="colorblind"):
"""Return a list of hex color mappings for node attributes"""
attributes = [G.node[label][attribute] for label in G.nodes()]
# get the set of possible attributes
attributes_unique = list(set(attributes))
num_values = len(attributes_unique)
# generate color palette from seaborn
palette = color_palette(seaborn_palette, num_values).as_hex()
# create a mapping of attribute to color
color_map = dict(zip(attributes_unique, palette))
# map the attribute for each node to the color it represents
node_colors = [color_map[attribute] for attribute in attributes]
return node_colors, color_map, palette
node_colors, color_map, palette = create_color_map(GA, 'status')
set_style('white')
plt.figure(figsize=(10,10))
plt.axis('off')
layout = nx.spring_layout(GA)
nx.draw_networkx_nodes(GA, layout, node_color=node_colors, node_size=500)
nx.draw_networkx_labels(GA, pos=layout, font_size=16)
nx.draw_networkx_edges(GA, layout, width=3)
plt.show()
# legend
print(color_map)
palplot(palette)
"""
Explanation: python
dead_or_alive = {
'karev' : 'alive',
'hank' : 'alive',
'sloan' : 'dead',
...
'finn' : 'alive'
}
End of explanation
"""
|
sempwn/ABCPRC
|
Tutorial_Epidemiology.ipynb
|
mit
|
%matplotlib inline
import ABCPRC as prc
import numpy as np
import scipy.stats as stats
import matplotlib.pyplot as plt
"""
Explanation: ABC PRC Tutorial
In this tutorial we will be constructing our own individual-based model and performing model fitting on the resulting summary statistics it produces.
End of explanation
"""
def ibm(*ps):
lbda,delta,gamma = ps[0],ps[1],ps[2]
dt = 1.0 #time step
n = 100 #population number
T = 100 #total time simulation is run for.
class Person(object): #define individual
def __init__(self):
self.br = stats.gamma.rvs(gamma) if gamma > 0 else 0 #risk of exposure for individual
self.ps = 0 #number of parasites. This is zero initially.
def update(self):
births = stats.poisson.rvs(self.br * lbda * dt) if self.br * lbda * dt > 0 else 0 #imports of new worms in a time-step
deaths = stats.poisson.rvs(self.ps * delta) if self.ps >0 else 0 #number of deaths of worms
self.ps += (births-deaths)
if (self.ps < 0): self.ps = 0 #check number doesn't become negative.
people = []
for i in range(n):
people.append(Person()) #initialise population.
for t in range(T):
for i in range(n):
people[i].update() #run simulation for all individuals.
par_dist = []
for i in range(n):
par_dist.append(people[i].ps) #record final number of parasites for each individual.
return np.array(par_dist) # ibm function needs to return a numpy array
"""
Explanation: Define model
We firstly want to define a model to perform the fitting on. We'll take an example from parasitic epidemiology. Intestinal worms (helminths) are picked up from the environment. Some individuals, due to behaviour and location, may be more at risk of being exposed. We'll model this exposure using a gamma distribution with shape parameter $\gamma$. This, together with a constant background rate of infection $\lambda$, defines the rate at which worms are ingested and survive. Density-independent death of the worms is also assumed at a constant rate $\delta$. We may define the model ibm as
End of explanation
"""
%time xs = ibm(10.0,0.5,1.0)
plt.hist(xs);
"""
Explanation: Let's run the model and plot the results for some values.
We'll take the transmission potential as 10.0, the shape of transmission as 1.0 and the density-independent death rate as 0.5
End of explanation
"""
m = prc.ABC()
priors = [stats.expon(scale=10.0).rvs,stats.expon(scale=0.5).rvs,stats.expon(scale=1.0).rvs]
m.setup(modelFunc=ibm,xs=xs,priors=priors,method='Adaptive',toln=10)
"""
Explanation: Setting up the model fitting
The main module class is ABC. This can be initialized without any arguments. The various aspects of ABC can then be set up using the setup() method. The keyword argument modelFunc defines the model used for fitting, xs defines the data, and priors defines the prior distribution for each parameter. setup can be called several times and also takes other arguments. Here we select the adaptive scheme and set the number of tolerances toln.
End of explanation
"""
m.fit(sample_size=30)
"""
Explanation: Fit tolerances to simulation
Tolerances are used in ABC to find the best-fitting parameters when the priors may be a long way from the posterior. These can either be set manually using the setup function, or the m.fit() method can be applied, which tries to automatically find a range of tolerances by randomly sampling from the prior distribution.
End of explanation
"""
m.run(100)
"""
Explanation: Run fitting
The fitting is performed using the run method, which takes the number of particles as its single parameter. More particles will mean a better fit, but may take more time
End of explanation
"""
res = m.trace()
plt.figure()
print('Initial Distribution')
m.trace(plot=True,tol=0)
plt.figure()
print('Middle Tolerance')
m.trace(plot=True,tol=5)
plt.figure()
print('Final Distribution')
m.trace(plot=True,tol=-1)
"""
Explanation: Explore results
We can explore the results of the fit at each tolerance level using the trace() method. Without the plot keyword, the method returns the accepted particles for each parameter at all tolerance levels. With plot set to True, the method plots the resulting particles at the first, middle and final tolerances.
End of explanation
"""
ps = np.round(m.paramMAP(),decimals=2)
print('MAP for infection rate is : {}, MAP for death rate is {} and MAP for heterogeneity is {}'.format(*ps))
res = m.fitSummary()
"""
Explanation: Find point estimates of fit
Use the paramMAP() method to find the maximum a posteriori estimate for each of the parameters.
The fitSummary() method also includes the 95% confidence intervals, based on the sampled posterior.
End of explanation
"""
m.save('parasite_model_example')
"""
Explanation: As you can see we've not done a bad job at fitting. The true parameters are 10, 0.5 and 1.0, all of which lie within the 95% confidence intervals. We may now save the results using the save method.
End of explanation
"""
|
mit-crpg/openmc
|
examples/jupyter/triso.ipynb
|
mit
|
%matplotlib inline
from math import pi
import numpy as np
import matplotlib.pyplot as plt
import openmc
import openmc.model
"""
Explanation: Modeling TRISO Particles
OpenMC includes a few convenience functions for generating TRISO particle locations and placing them in a lattice. To be clear, this capability is not a stochastic geometry capability like that included in MCNP. It's also important to note that OpenMC does not use delta tracking, which would normally speed up calculations in geometries with tons of surfaces and cells. However, the computational burden can be eased by placing TRISO particles in a lattice.
End of explanation
"""
fuel = openmc.Material(name='Fuel')
fuel.set_density('g/cm3', 10.5)
fuel.add_nuclide('U235', 4.6716e-02)
fuel.add_nuclide('U238', 2.8697e-01)
fuel.add_nuclide('O16', 5.0000e-01)
fuel.add_element('C', 1.6667e-01)
buff = openmc.Material(name='Buffer')
buff.set_density('g/cm3', 1.0)
buff.add_element('C', 1.0)
buff.add_s_alpha_beta('c_Graphite')
PyC1 = openmc.Material(name='PyC1')
PyC1.set_density('g/cm3', 1.9)
PyC1.add_element('C', 1.0)
PyC1.add_s_alpha_beta('c_Graphite')
PyC2 = openmc.Material(name='PyC2')
PyC2.set_density('g/cm3', 1.87)
PyC2.add_element('C', 1.0)
PyC2.add_s_alpha_beta('c_Graphite')
SiC = openmc.Material(name='SiC')
SiC.set_density('g/cm3', 3.2)
SiC.add_element('C', 0.5)
SiC.add_element('Si', 0.5)
graphite = openmc.Material()
graphite.set_density('g/cm3', 1.1995)
graphite.add_element('C', 1.0)
graphite.add_s_alpha_beta('c_Graphite')
"""
Explanation: Let's first start by creating materials that will be used in our TRISO particles and the background material.
End of explanation
"""
# Create TRISO universe
spheres = [openmc.Sphere(r=1e-4*r)
for r in [215., 315., 350., 385.]]
cells = [openmc.Cell(fill=fuel, region=-spheres[0]),
openmc.Cell(fill=buff, region=+spheres[0] & -spheres[1]),
openmc.Cell(fill=PyC1, region=+spheres[1] & -spheres[2]),
openmc.Cell(fill=SiC, region=+spheres[2] & -spheres[3]),
openmc.Cell(fill=PyC2, region=+spheres[3])]
triso_univ = openmc.Universe(cells=cells)
"""
Explanation: To actually create individual TRISO particles, we first need to create a universe that will be used within each particle. The reason we use the same universe for each TRISO particle is to reduce the total number of cells/surfaces needed which can substantially improve performance over using unique cells/surfaces in each.
End of explanation
"""
min_x = openmc.XPlane(x0=-0.5, boundary_type='reflective')
max_x = openmc.XPlane(x0=0.5, boundary_type='reflective')
min_y = openmc.YPlane(y0=-0.5, boundary_type='reflective')
max_y = openmc.YPlane(y0=0.5, boundary_type='reflective')
min_z = openmc.ZPlane(z0=-0.5, boundary_type='reflective')
max_z = openmc.ZPlane(z0=0.5, boundary_type='reflective')
region = +min_x & -max_x & +min_y & -max_y & +min_z & -max_z
"""
Explanation: Next, we need a region to pack the TRISO particles in. We will use a 1 cm x 1 cm x 1 cm box centered at the origin.
End of explanation
"""
outer_radius = 425.*1e-4
centers = openmc.model.pack_spheres(radius=outer_radius, region=region, pf=0.3)
"""
Explanation: Now we need to randomly select locations for the TRISO particles. In this example, we will select locations at random within the box with a packing fraction of 30%. Note that pack_spheres can handle up to the theoretical maximum of 60% (it will just be slow).
End of explanation
"""
trisos = [openmc.model.TRISO(outer_radius, triso_univ, center) for center in centers]
"""
Explanation: Now that we have the locations of the TRISO particles determined and a universe that can be used for each particle, we can create the TRISO particles.
End of explanation
"""
print(trisos[0])
"""
Explanation: Each TRISO object is actually a Cell; we can look at the properties of the TRISO just as we would a cell:
End of explanation
"""
centers = np.vstack([triso.center for triso in trisos])
print(centers.min(axis=0))
print(centers.max(axis=0))
"""
Explanation: Let's confirm that all our TRISO particles are within the box.
End of explanation
"""
len(trisos)*4/3*pi*outer_radius**3  # total volume of the TRISO particles; equal to the packing fraction since the box volume is 1 cm^3
"""
Explanation: We can also look at what the actual packing fraction turned out to be:
End of explanation
"""
box = openmc.Cell(region=region)
lower_left, upper_right = box.region.bounding_box
shape = (3, 3, 3)
pitch = (upper_right - lower_left)/shape
lattice = openmc.model.create_triso_lattice(
trisos, lower_left, pitch, shape, graphite)
"""
Explanation: Now that we have our TRISO particles created, we need to place them in a lattice to provide optimal tracking performance in OpenMC. We can use the box we created above to place the lattice in. Actually creating a lattice containing TRISO particles can be done with the model.create_triso_lattice() function. This function requires that we give it a list of TRISO particles, the lower-left coordinates of the lattice, the pitch of each lattice cell, the overall shape of the lattice (number of cells in each direction), and a background material.
End of explanation
"""
box.fill = lattice
"""
Explanation: Now we can set the fill of our box cell to be the lattice:
End of explanation
"""
universe = openmc.Universe(cells=[box])
geometry = openmc.Geometry(universe)
geometry.export_to_xml()
materials = list(geometry.get_all_materials().values())
openmc.Materials(materials).export_to_xml()
settings = openmc.Settings()
settings.run_mode = 'plot'
settings.export_to_xml()
plot = openmc.Plot.from_geometry(geometry)
plot.to_ipython_image()
"""
Explanation: Finally, let's take a look at our geometry by putting the box in a universe and plotting it. We're going to use the Fortran-side plotter since it's much faster.
End of explanation
"""
plot.color_by = 'material'
plot.colors = {graphite: 'gray'}
plot.to_ipython_image()
"""
Explanation: If we plot the universe by material rather than by cell, we can see that the entire background is just graphite.
End of explanation
"""
|