repo_name | path | license | content
---|---|---|---|
ProfessorKazarinoff/staticsite
|
content/code/matplotlib_plots/bar_plot_with_statistics_module_and_matplotlib.ipynb
|
gpl-3.0
|
# import packages
from statistics import mean, stdev
import matplotlib.pyplot as plt
#include if using a jupyter notebook, remove if using a .py file
%matplotlib inline
"""
Explanation: Engineers collect data and make conclusions based on the results. An important way to view results is with statistical charts. In this post we will build a bar chart to compare the tensile strength of 3D-printed ABS plastic to the tensile strength of 3D-printed HIPS plastic. We will add error bars to the chart to show the amount of uncertainty in the data. In the bar plot we construct, the height of the bars will represent the mean or average tensile strength. One bar will represent the average strength of ABS and the other bar will show the average strength of HIPS. We will then add error bars to the plot which will represent +1/-1 standard deviation about the mean.
We will use Python, the statistics module (part of the Python standard library), and matplotlib to build the bar plot. I recommend that undergraduate engineers use the Anaconda distribution of Python, which comes with matplotlib already installed. For help installing Anaconda, see a previous blog post: Installing Anaconda on Windows 10. If matplotlib is not available in your version of Python, open a terminal or the Anaconda Prompt and type:
$ pip install matplotlib
or
```
conda install matplotlib
```
The data we are going to plot is from the tensile testing of two different kinds of 3D-printed plastic, ABS and HIPS (HIPS stands for High Impact Polystyrene). You can download the data using the link below:
3D-printed-tensile-bar-data.xlsx
I'm constructing the plot in a jupyter notebook. You could also build the code in a .py file and run the code to produce the plot.
A note about using matplotlib on MacOSX: if you receive an error message that matplotlib is not installed as a framework, consider using the Anaconda distribution of Python and running the code in a jupyter notebook.
To open a new jupyter notebook go to the Anaconda Prompt or a terminal and type:
```
jupyter notebook
```
Alternatively, you can start a new jupyter notebook by clicking the Windows start button and searching for [Anaconda3] --> [Jupyter Notebook]
If jupyter is not installed on your system, you can install it using the Anaconda Prompt or use a terminal and pip:
```
conda install jupyter
```
or
$ pip install jupyter
At the top of the jupyter notebook (or .py file), we need to import the required packages:
statistics (part of the Python standard library, but still needs to be imported) and
matplotlib
From the statistics module we will import two functions: mean (average) and stdev(standard deviation). If we use this import line:
from statistics import mean, stdev
We can use the names mean() and stdev() in our code. However, if we use a more general import line:
import statistics
Then we will need to call statistics.mean() and statistics.stdev() in our code.
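For example (a short illustrative snippet using the more general import, separate from the plot code in this post):
```
import statistics

statistics.mean([18.6, 21.6, 22.0])   # average of a list of numbers
statistics.stdev([18.6, 21.6, 22.0])  # sample standard deviation of the same list
```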
matplotlib also needs to be imported. The typical way to do this is with the line:
import matplotlib.pyplot as plt
Then throughout our code, we can write plt instead of writing out matplotlib.pyplot each time we want to call a matplotlib method.
The %matplotlib inline magic command is added so that we can see our plot right in the jupyter notebook. If you build the plot in a .py file, the %matplotlib inline command should be left out as it will return an error.
End of explanation
"""
# data
ABS = [18.6, 21.6 ,22, 21, 18, 20.9, 21, 19.3, 18.8, 20, 19.4, 16, 23.8, 19.3, 19.7, 19.5]
HIPS = [10.4, 4.9, 10.2, 10.5, 10.9, 12.9, 11.8, 8.4, 10, 10.6, 8.6, 9.7, 10.8, 10.7, 11, 12.4, 13.3, 11.4, 14.8, 13.5]
"""
Explanation: Create two variables that contain the data for ABS and HIPS as a list of individual tensile strength values.
After the import lines, we need to create two variables: one variable for the ABS data and one variable for HIPS data. We will assign the data points as a list of numbers saved in two variables. The general format to create a list in Python is to use list_name = [item1, item2, item3] with square brackets on the outside and commas between the items. The items for the two lists came from the .xlsx file that contains the data (3D-printed-tensile-bar-data.xlsx). The tensile strength is in column [F] labeled [Tensile Strength (Mpa)]. Rows 2-17 contain data for ABS and rows 18-37 contain data for HIPS.
End of explanation
"""
# find the mean using the mean() function from the statistics library
ABS_mean = mean(ABS)
HIPS_mean = mean(HIPS)
# find the standard deviation using the stdev() function from the statistics library
ABS_stdev = stdev(ABS)
HIPS_stdev = stdev(HIPS)
"""
Explanation: Find the mean and standard deviation for each set of data
We'll use the mean() and stdev() functions from the statistics module to find the mean (or average) and standard deviation of the two data sets. A summary of these two functions is below:
|statistics module function| description |
| --- | --- |
| mean() | calculate the mean or average of a list of numbers |
| stdev() | calculate the standard deviation of a list of numbers |
End of explanation
"""
# Build a bar plot
plt.bar(['ABS', 'HIPS'],[ABS_mean, HIPS_mean])
plt.show()
"""
Explanation: Build a simple bar plot
Matplotlib's bar plot function can be accessed using plt.bar(). We need to include at least two arguments as shown below:
plt.bar(['list', 'of', 'bar', 'labels'], [list, of, bar, heights])
We will pass in ['ABS', 'HIPS'] for our list of bar labels, and [ABS_mean, HIPS_mean] for our list of bar heights. The command plt.show() will show the plot in a jupyter notebook or show the plot in a new window if running a .py file.
End of explanation
"""
# build a bar plot
plt.bar(['ABS', 'HIPS'],[ABS_mean, HIPS_mean])
plt.xlabel('3D-printer Filament Material')
plt.ylabel('Tensile Strength (MPa)')
plt.title('Tensile Strength of 3-D Printed ABS and HIPS Tensile Bars')
plt.show()
"""
Explanation: Add axis labels and title
The plot looks pretty good, but we should add axis labels (with units) and a title to our plot. We can add the axis labels and titles with plt.xlabel(), plt.ylabel() and plt.title(). We need to pass in strings enclosed in quotes ' ' with these methods. A summary of the matplotlib functions is below:
|matplotlib function| description |
| --- | --- |
| plt.bar() | build a bar plot |
| plt.xlabel() | x-axis label |
| plt.ylabel() | y-axis label |
| plt.title() | plot title |
| plt.show() | show the plot |
End of explanation
"""
# build a bar plot
plt.bar(['ABS', 'HIPS'],[ABS_mean, HIPS_mean],yerr=[ABS_stdev, HIPS_stdev])
plt.xlabel('3D-printer Filament Material')
plt.ylabel('Tensile Strength (MPa)')
plt.title('Tensile Strength of 3-D Printed ABS and HIPS Tensile Bars')
plt.show()
"""
Explanation: Add error bars to the plot
We have a nice looking bar plot with two bars, an x-axis label, a y-axis label and a title. Next we will add error bars to the plot. We will add the error bars by passing a keyword argument to the plt.bar() function. The keyword argument is yerr = [list, of, error, bar, lengths]. A keyword argument is a specific type of argument passed to a function or method that must have a name associated with it. Regular function arguments just need to be in the proper order. Keyword arguments need to be passed in the form keyword_argument_name = value. The general form of the entire plt.bar() line will be:
plt.bar(['list', 'of', 'bar', 'labels'], [list, of, bar, heights], yerr=[list, of, error, bar, lengths])
The first two arguments, ['list', 'of', 'bar', 'labels'] and [list, of, bar, heights], just need to be in the correct order. The third argument is a keyword argument and needs to include yerr=.
Our list of error bar lengths will contain the standard deviation for each set of data, ABS_stdev and HIPS_stdev.
yerr=[ABS_stdev, HIPS_stdev]
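As an aside, here is a minimal, hypothetical example of the difference between positional and keyword arguments (the function and numbers below are made up purely for illustration):
```
# hypothetical function used only to illustrate keyword arguments
def describe_bar(label, height, capsize=5):
    return label + ": " + str(height) + " MPa (capsize " + str(capsize) + ")"

describe_bar('ABS', 19.9)              # positional arguments: order matters
describe_bar('ABS', 19.9, capsize=10)  # keyword argument: passed by name
```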
End of explanation
"""
# build the bar plot
plt.bar(['ABS', 'HIPS'],[ABS_mean, HIPS_mean],yerr=[ABS_stdev, HIPS_stdev], capsize=10)
plt.xlabel('3D-printer Filament Material')
plt.ylabel('Tensile Strength (MPa)')
plt.title('Tensile Strength of 3-D Printed ABS and HIPS Tensile Bars')
plt.show()
"""
Explanation: Add "caps" to the error bars
The error bars are on the plot, but they are just vertical lines. Typically, error bars have horizontal lines at the top and bottom and look sort of like the capital letter I.
We can add these horizontal lines or "caps" to the top and bottom of the error bars by passing an additional keyword argument to the plt.bar() function called capsize=. We will set capsize=10, which is a good size for this plot. You can change the capsize= number to make the horizontal lines longer or shorter.
Now our plt.bar() function call contains four different arguments:
plt.bar(['list of bar labels'], [list of bar heights], yerr = [list of error bar lengths], capsize = width)
A summary of the arguments passed to the plt.bar() function is below:
|plt.bar() Arguments | description |
|:------:|:------|
| [list of bar labels] | 1st argument, a list of strings which provide the labels below the bars |
| [list of bar heights] | 2nd argument, a list of numbers which determines the height of each bar |
| yerr = [list of error bar lengths] | a keyword argument, must include yerr =. Denotes the height of the error bars. Needs to be a list of numbers |
| capsize = width | a keyword argument, must include capsize =. Denotes the width of the error bar horizontal "caps". Needs to be a number, not a string |
End of explanation
"""
# build a bar plot and save it as a .png image
plt.bar(['ABS', 'HIPS'],[ABS_mean, HIPS_mean],yerr=[ABS_stdev, HIPS_stdev], capsize=10)
plt.xlabel('3D-printer Filament Material')
plt.ylabel('Tensile Strength (MPa)')
plt.title('Tensile Strength of 3-D Printed ABS and HIPS Tensile Bars')
plt.savefig('plot.png')
plt.show()
"""
Explanation: Save the plot
The plot looks complete: two bars, x and y axis labels, title and error bars with caps. Now let's save the plot as an image file so we can import the plot into a Word document or PowerPoint presentation. If you are using a jupyter notebook, you can just right-click on the plot and select [copy image] or [Save Image As...]. To save a plot as an image programmatically, we use the line:
plt.savefig('filename.extension')
Matplotlib will save the plot as an image file using the file type we specify in the filename extension. For example, if we call plt.savefig('plot.png'), the plot will be saved as a .png image. If we call plt.savefig('plot.jpg') the plot will be saved as a .jpeg image.
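For example, assuming the same figure is still active, other common file types can typically be saved the same way:
```
plt.savefig('plot.pdf')   # save the current figure as a PDF (vector format)
plt.savefig('plot.svg')   # save the current figure as an SVG (vector format)
```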
End of explanation
"""
# build a bar plot and save it as a .png image
plt.bar(['ABS', 'HIPS'],[ABS_mean, HIPS_mean],yerr=[ABS_stdev, HIPS_stdev], capsize=10)
plt.xlabel('3D-printer Filament Material')
plt.ylabel('Tensile Strength (MPa)')
plt.title('Tensile Strength of 3-D Printed ABS and HIPS Tensile Bars')
plt.savefig('plot.png', dpi = 300)
plt.show()
"""
Explanation: Increase the .png file image resolution
Depending on how the .png image file is viewed: in a jupyter notebook, on the web, in a Word document or in a PowerPoint presentation, the image may look a little blurry. This is because the .png image we created has a fairly low resolution. We can change the resolution by coding:
plt.savefig('filename.png', dpi = 300)
Where dpi=300 specifies a resolution of 300 dots per inch. We can specify a higher or lower resolution than 300 dpi. A higher resolution will increase the image file size, but will look better when magnified.
End of explanation
"""
# import packages
from statistics import mean, stdev
import matplotlib.pyplot as plt
#include if using a jupyter notebook, remove if using a .py file
%matplotlib inline
# data
ABS = [18.6, 21.6 ,22, 21, 18, 20.9, 21, 19.3, 18.8, 20, 19.4, 16, 23.8, 19.3, 19.7, 19.5]
HIPS = [10.4, 4.9, 10.2, 10.5, 10.9, 12.9, 11.8, 8.4, 10, 10.6, 8.6, 9.7, 10.8, 10.7, 11, 12.4, 13.3, 11.4, 14.8, 13.5]
# find the mean using the mean() function from the statistics library
ABS_mean = mean(ABS)
HIPS_mean = mean(HIPS)
# find the standard deviation using the stdev() function from the statistics library
ABS_stdev = stdev(ABS)
HIPS_stdev = stdev(HIPS)
# build a bar plot and save it as a .png image
plt.bar(['ABS', 'HIPS'],[ABS_mean, HIPS_mean],yerr=[ABS_stdev, HIPS_stdev], capsize=10)
plt.xlabel('3D-printer Filament Material')
plt.ylabel('Tensile Strength (MPa)')
plt.title('Tensile Strength of 3-D Printed ABS and HIPS Tensile Bars')
plt.savefig('plot.png', dpi = 300)
plt.show()
"""
Explanation: The full script
A summary of the full script is below:
End of explanation
"""
|
ivannz/study_notes
|
year_14_15/spring_2015/netwrok_analysis/notebooks/assignments/networks_ha_final.ipynb
|
mit
|
G = nx.read_gml( path =
"./data/ha5/huge_100004196072232_2015_03_24_11_20_1d58b0ecdf7713656ebbf1a177e81fab.gml", relabel = False )
"""
Explanation: ToDo
Your Network Summary
Network source and preprocessing
Node/Edge attributes
Size, Order
Gorgeous network layout. Try to show that your network has some structure, play with node sizes and colors, scaling parameters, tools like Gephi may be useful here
Degree distribution, Diameter, Clustering Coefficient
Structural Analysis
Degree/Closeness/Betweenness centralities. Top nodes interpretation
Page-Rank. Comparison with centralities
Assortative Mixing according to node attributes
Node structural equivalence/similarity
Community Detection
Clique search
Best results of various community detection algorithms, both in terms of interpretation and some quality criterion. Since Networkx has no community detection algorithms, use additional modules e.g. igraph, communities, graph-tool, etc
The results should be visible on the network layout or adjacency matrix picture
<center>Structural Analysis and Visualization of Networks</center>
<center>Analysis of facebook graph</center>
<center>Student: Nazarov Ivan</center>
Summary
Network source
This graph shows friend relationships among the people in my facebook friends list. The network was obtained with the Netviz facebook app. As a purely technical step, prior to loading with the networkx procedure $\text{read_gml}(\cdot)$, the GML file was preprocessed to convert UTF-8 encoding into special HTML entities. In fact the problem seems to be rooted in the software used to crawl the facebook network.
Attributes
The nodes have a short list of attributes which are
* gender;
* number of posts on the wall;
* locale, which represents the language setting of that node's facebook page.
The network does not have any edge attributes.
End of explanation
"""
print "The network G is of the order %d. Its size is %d." % ( G.number_of_nodes( ), G.number_of_edges( ) )
"""
Explanation: The order of a network $G=(V,E)$ is $|V|$ and the size is $|E|$.
End of explanation
"""
deg = G.degree( )
fig = plt.figure( figsize = (12,8) )
axs = fig.add_subplot( 1,1,1, axisbg = 'black' )
nx.draw_networkx( G, with_labels = False, ax = axs,
cmap = plt.cm.Purples, node_color = deg.values( ), edge_color = "magenta",
nodelist = deg.keys( ), node_size = [ 100 * np.log( d + 1 ) for d in deg.values( ) ],
pos = nx.fruchterman_reingold_layout( G ), )
"""
Explanation: Visualisation
It is always good to have a nice and attractive picture in a study.
End of explanation
"""
CC = sorted( nx.connected_components( G ), key = len, reverse = True )
for i, c in enumerate( CC, 1 ):
row = ", ".join( [ G.node[ n ][ 'label' ] for n in c] )
print "%#2d (%d)\t"%(i, len(c)), ( row )[:100].strip() + (" ..." if len( row ) > 100 else "" )
"""
Explanation: Let's have a look at the connected components, since the plot suggests that the graph is not connected.
End of explanation
"""
H = G.subgraph( CC[ 0 ] )
print "The largest component is of the order %d. Its size is %d." % ( H.number_of_nodes( ), H.number_of_edges( ) )
"""
Explanation: The largest connected component represents family, my acquaintances at school ($\leq 2003$) and in university ($2003-2009$), while the second largest component consists of people I met at the Oxford Royale Summer School in 2012. The one-node components are either old acquaintances, select colleagues from work, instructors, etc.
Since the largest component is an order of magnitude larger than the next biggest, I decided to focus on it alone rather than on the whole network. In fact it covers almost $\frac{91}{121}\approx 75\%$ of the vertices and $\frac{1030}{1091} \approx 94\%$ of the edges.
End of explanation
"""
deg = H.degree( )
fig = plt.figure( figsize = (16, 6) )
axs = fig.add_subplot( 1,2,1, axisbg = 'black', title = "Master cluster", )
pos = nx.fruchterman_reingold_layout( H )
nx.draw_networkx( H, with_labels = False, ax = axs,
cmap = plt.cm.Oranges, node_color = deg.values( ), edge_color = "cyan",
nodelist = deg.keys( ), node_size = [ d * 10 for d in deg.values( ) ],
pos = pos )
## Degree distribution
v, f = np.unique( nx.degree( H ).values( ), return_counts = True)
axs = fig.add_subplot( 1,2,2, xlabel = "Degree", ylabel = "Frequency", title = "Node degree frequency" )
axs.plot( v, f, "ob" )
"""
Explanation: Let's plot the subgraph and study its degree distribution.
End of explanation
"""
from scipy.stats import rankdata
def mean_excess( data ) :
data = np.array( sorted( data, reverse = True ) )
ranks = rankdata( data, method = 'max' )
excesses = np.array( np.unique( len( data ) - ranks ), dtype = np.int )
thresholds = data[ excesses ]
mean_excess = np.cumsum( data )[ excesses ] / ( excesses + 0.0 ) - thresholds
return thresholds, mean_excess
plt.figure( figsize = ( 8, 6 ) )
u, m = mean_excess( H.degree().values() )
plt.plot( u, m, lw = 2 )
plt.title( "Mean Excess polt of node-degree" )
plt.xlabel( "Threshold" )
plt.ylabel( "Expected excess over the threshold")
"""
Explanation: Degree distribution
A useful tool for exploring the tail behaviour of a sample is the Mean Excess plot, defined as
$$M(u) = \mathbb{E}\big(\, X-u \,\big\rvert\, X\geq u \,\big)$$
of which the empirical counterpart is
$$\hat{M}(u) = {\Big(\sum_{i=1}^n 1_{x_i\geq u}\Big)^{-1}}\sum_{i=1}^n (x_i-u) 1_{x_i\geq u}$$
The key properties of $M(u)$ are
* it steadily increases for power-law tails and the steeper the slope the smaller is the exponent;
* it levels off for exponential tails (heuristically: the case when $\alpha\to \infty$ is similar to an exponential tail);
* it decays towards zero for a tail of a compactly supported distribution.
When dealing with the empirical mean-excesses one looks for the trend at large thresholds to discern the behaviour, necessarily bearing in mind that in that region the variance of $\hat{M}(u)$ grows.
End of explanation
"""
print "This subgraph's clustering coefficient is %.3f." % nx.average_clustering( H )
print "This subgraph's average shortest path length is %.3f." % nx.average_shortest_path_length( H )
print "The radius (maximal distance) is %d." % nx.radius( H )
"""
Explanation: The Mean Excess plot does seem to indicate that the node degree does not follow a scale-free distribution. Indeed, the plot levels off as it approaches the value $50$. The rightmost spike lies in the region where the variance of the estimate of the conditional expectation is extremely high, which is why this finite-sample artefact may be ignored.
Clustering tightness
The average clustering coefficient of a graph $G=(V,E)$ is defined by the following formula :
$$\bar{c} = \frac{1}{n}\sum_{x\in V}c_x$$
where $n=|V|$ and $c_x$ is the local clustering coefficient of vertex $x\in V$ defined below.
The local (triangular) clustering coefficient of a node $x\in V$ is defined as the ratio of the number of unique triangles containing $x$ to the number of unique triangles a vertex has in a complete graph of order $\delta_x$ -- the degree of $x$ in $G$.
The expression for $c_x$ is
$$c_x = \frac{1}{\delta_x (\delta_x-1)} \sum_{u\neq x} \sum_{v\neq x,u} 1_{xu} 1_{uv} 1_{vx} = \frac{2\,\#_x}{\delta_x (\delta_x-1)}$$
where $1_{ij}$ is the indicator equal to $1$ if the (undirected) edge $(i,j)\in E$ and $0$ otherwise, and $\#_x$ is the number of distinct triangles containing $x$.
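As a quick sanity check of the formula, consider a toy graph (made up for illustration, assuming networkx is imported as nx): a triangle on vertices 1, 2, 3 with a pendant edge to vertex 4.
```
T = nx.Graph( [ (1, 2), (2, 3), (3, 1), (3, 4) ] )
nx.clustering( T, 3 )       # vertex 3: one triangle, degree 3, so c_3 = 2*1/(3*2) = 1/3
nx.average_clustering( T )  # (1 + 1 + 1/3 + 0) / 4 ~ 0.583
```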
End of explanation
"""
pr = nx.pagerank_numpy( H, alpha = 0.85 )
cb = nx.centrality.betweenness_centrality( H )
cc = nx.centrality.closeness_centrality( H )
cd = nx.centrality.degree_centrality( H )
"""
Explanation: The clustering coefficient is moderately high and any two members of this component are about 2 hops away from each other on average. This means that this subgraph has a tightly knit cluster structure, almost like a small world, were it not for the light-tailed degree distribution.
Structural analysis
Centrality measures
Degree
The degree centrality measure of a node $v\in V$ in graph $G=\big(V, E\big)$ is the sum of all edges incident on it:
$$C_v = \sum_{u\in V} 1_{(v,u)\in E} = \sum_{u\in V} A_{vu} = \delta_v$$
In other words, the more 1st-tier (nearest, reachable in one hop) neighbours a vertex has, the higher its centrality is.
Betweenness
This measure assesses how important a node is in terms of the global graph connectivity:
$$C_B(v) = \sum_{s\neq v\neq t\in V} \frac{\sigma_{st}(v)}{\sigma_{st}}$$
where $\sigma_{st}(v)$ is the number of shortest paths from $s$ to $t$ passing through $v$, while $\sigma_{st}$ is the total number of paths of least length connecting $s$ and $t$.
High local centrality means that a node is in direct contact with many other nodes, whereas low centrality indicates a peripheral vertex.
Along with these local measures, we compute the closeness centrality and the PageRank ranking.
End of explanation
"""
def assortativity( G, X ) :
## represent the graph in an adjacency matrix form
A = nx.to_numpy_matrix( G, dtype = np.float, nodelist = G.nodes( ) )
## Convert x -- dictionary to a numpy vector
x = np.array( [ X[ n ] for n in G.nodes( ) ] , dtype = np.float )
## Compute the x'Ax part
xAx = np.dot( x, np.array( A.dot( x ) ).flatten( ) )
## and the x'\text{diag}(D)x part. Note that left-multiplying a vector
## by a diagonal matrix is equivalent to element-wise multiplication.
D = np.array( A.sum( axis = 1 ), dtype = np.float ).flatten( )
xDx = np.dot( x, np.multiply( D, x ) )
## numpy.average( ) actually normalizes the weights.
x_bar = np.average( x, weights = D )
D_sum = np.sum( D, dtype = np.float )
return ( xAx - D_sum * x_bar * x_bar ) / ( xDx - D_sum * x_bar * x_bar )
"""
Explanation: The mixing coefficient
The mixing coefficient for a numerical node attribute $X = \big(x_i\big)$ in an undirected graph $G$, with the adjacency matrix $A$, is defined as
$$\rho(x) = \frac{\text{cov}}{\text{var}} = \frac{\sum_{ij}A_{ij}(x_i-\bar{x})(x_j-\bar{x})}{\sum_{ij}A_{ij}(x_i-\bar{x})^2} $$
where $\bar{x} = \frac{1}{2m}\sum_i \delta_i x_i$ is the mean value of $X$ weighted by vertex degree. Note that $A$ is necessarily symmetric. This coefficient can be represented in the matrix notation as
$$\rho(x) = \frac{X'AX - 2m \bar{x}^2}{X'\text{diag}(D)X - 2m \bar{x}^2} $$
where the diagonal matrix $\text{diag}(D)$ is the matrix of vertex degrees, and the value $\bar{x}$ is the sample mean of the numerical node attribute $X$.
End of explanation
"""
print "PageRank assortativity coefficient: %.3f" % assortativity( H, nx.pagerank_numpy( H, alpha = 0.85 ) )
print "Betweenness centrality assortativity coefficient: %.3f" % assortativity( H, nx.centrality.betweenness_centrality( H ) )
print "Closenesss centrality assortativity coefficient: %.3f" % assortativity( H, nx.centrality.closeness_centrality( H ) )
print "Degree assortativity coefficient: %.3f" % assortativity( H, nx.centrality.degree_centrality( H ) )
print "Gender assortativity coefficient: %.3f" % nx.assortativity.attribute_assortativity_coefficient( H, 'sex' )
print "Agerank assortativity coefficient: %.3f" % assortativity( H, nx.get_node_attributes( H, 'agerank') )
print "Language assortativity coefficient: %.3f" % nx.assortativity.attribute_assortativity_coefficient( H, 'locale' )
print "Number of posts on the wall assortativity coefficient: %.3f" % nx.assortativity.attribute_assortativity_coefficient( H, 'wallcount' )
"""
Explanation: Let's compute the assortativity for the centralities, pagerank vector, vertex degrees and node attributes.
End of explanation
"""
## Print the upper triangle of a symmetric matrix in reverse column order
def show_symmetric_matrix( A, labels, diag = False ) :
d = 0 if diag else 1
c = len( labels ) - d
print "\t", "\t".join( c * [ "%.3s" ] ) % tuple( labels[ d: ][ ::-1 ] )
for i, l in enumerate( labels if diag else labels[ :-1 ] ) :
print ( ( "%4s\t" % l ) + "\t".join( ( c - i ) * [ "%.3f" ] ) %
tuple( A[ i,i+d: ][ ::-1 ] ) )
"""
Explanation: This component does not show segregation patterns in connectivity, as the computed coefficients indicate neither that "opposites" nor that "kindred spirits" preferentially attach. The noticeably high value for degree centrality is probably due to the component already having a tight cluster structure.
Node Rankings
It is sometimes interesting to look at a table representation of a symmetric distance matrix. The procedure below prints a matrix in a more straightforward format.
End of explanation
"""
from scipy.spatial.distance import pdist, squareform
from scipy.stats import spearmanr as rho
labels = [ 'btw', 'deg', 'cls', 'prk' ]
align = lambda dd : np.array( [ dd[ n ] for n in H.nodes( ) ], dtype = np.float )
rank_dist = squareform( pdist(
[ align( cb ), align( cd ), align( cc ), align( pr ) ],
metric = lambda a, b : rho(a,b)[0] ) )
show_symmetric_matrix( rank_dist, labels )
"""
Explanation: It is actually interesting to compare the orderings produced by different vertex-ranking algorithms. The most direct way is to analyse pairwise Spearman's $\rho$, since it compares the rank transformation of one vector of observed data to another.
End of explanation
"""
kcq = list( nx.community.k_clique_communities( H, 3 ) )
"""
Explanation: The rankings match each other very closely!
Community detection
A $k$-clique community detection method considers a set of nodes a community if its maximal clique is of order $k$, all nodes are part of at least one $k$-clique, and adjacent $k$-cliques overlap by at least $k-1$ vertices.
End of explanation
"""
import lpa
lab = lpa.semisynchronous_prec_max( H )
"""
Explanation: The label propagation algorithm initially assigns a unique label to each node, and then relabels nodes in random order until stabilization.
The new label of a node is the label carried by the largest number of its neighbours.
Code borrowed from lpa.py by Tyler Rush, which can be found at networkx-devel. The procedure is an implementation of the idea in:
* Cordasco, G., & Gargano, L. (2012). Label propagation algorithm: a semi-synchronous approach. International Journal of Social Network Mining, 1(1), 3-26.
End of explanation
"""
def mcl_iter( A, p = 2, alpha = 2, theta = 1e-8, rel_eps = 1e-4, niter = 10000 ) :
## Convert A into a transition kernel: M_{ij} is the probability of making a transition from i to j.
M = np.multiply( 1.0 / A.sum( axis = 1, dtype = np.float64 ).reshape(-1,1), A )
i = 0 ; status = -1
while i < niter :
M_prime = M.copy( )
## Expansion step: M_{ij} is the probability of reaching a vertex j from i in p hops.
M = np.linalg.matrix_power( M, p )
## Pruning: zero out transitions with very low probability, so they are almost surely unused.
M[ np.abs( M ) < theta ] = 0
## Inflation step: dampen the probabilities
M = np.power( M, alpha )
## Renormalisation step: make the matrix into a stochastic transition kernel
N = M.sum( axis = 1, dtype = np.float64 )
## If a nan is encountered, then abort
if np.any( np.isnan( N ) ) :
status = -2
break
M = np.multiply( 1.0 / N.reshape(-1,1), M )
## Convergence criterion is the L1 norm of relative divergence of transition probabilities
if np.sum( np.abs( M - M_prime ) / ( np.abs( M_prime ) + rel_eps ) ) < rel_eps :
status = 0
break
## Advance to the next iteration
i += 1
return ( M, (status, i) )
def extract_communities( M, lengths = True ) :
## It is expected that the MCL matrix encodes communities in its columns
C = list( ) ; i0 = 0
if np.any( np.isnan( M ) ) :
return C
## Find all indices of nonzero elements
r, c = np.where( np.array( M ) )
## Sort them by the column index and find the community sizes
r = r[ np.argsort( c ) ]
u = np.unique( c, return_counts = True )
if np.sum( u[ 1 ] ) > M.shape[ 1 ] :
return C
if lengths :
return u[ 1 ]
## Columns indices of nonzero entries are ordered, so we just need to
## sweep across the sizes
for s in u[ 1 ] :
## Row indices for a column with a nonzero element are the indices of
## nodes in the community.
list.append( C, r[ i0:i0+s ] )
i0 += s
return C
def make_labels( com, mapper = None ) :
dd = dict( )
for i, c in enumerate( com, 1 ) :
for k in c :
if mapper is not None :
dd[ mapper[ k ] ] = i
else :
dd[ k ] = i
return dd
"""
Explanation: Markov Cluster Algorithm (MCL).
Input: Transition matrix $T = D^{-1}A$
Output: Adjacency matrix $M^*$
1. Set $M = T$
2. repeat:
3. Expansion Step: $M = M^p$ (usually $p=2$)
4. Inflation Step: Raise every entry of $M$ to the power $\alpha$ (usually $\alpha=2$)
5. Renormalize: Normalize each row by its sum
6. Pruning: Replace entries that are close to $0$ by pure $0$
7. until $M$ converges
8. $M^* = M$
End of explanation
"""
fig = plt.figure( figsize = (12, 8) )
axs = fig.add_subplot( 1,1,1, axisbg = 'black', title = "Master cluster", )
A = nx.to_numpy_matrix( H, dtype = np.float, nodelist = nx.spectral_ordering( H ) )
C, _ = mcl_iter( A )
mcl = extract_communities( C, lengths = False)
axs.spy( A, color = "gold", markersize = 15, marker = '.' )
axs.spy( C, color = "magenta", markersize = 10, marker = '.' )
for i, c in enumerate( kcq, 1 ):
row = ", ".join( [ G.node[ n ][ 'label' ] for n in c] )
print "%#2d (%d)\t"%(i, len(c)), ( row )[:100].strip() + (" ..." if len( row ) > 100 else "" )
fig = plt.figure( figsize = (12, 8) )
axs = fig.add_subplot( 1,1,1, axisbg = 'black', title = "Master cluster: 5-clique community", )
kcq = list( nx.community.k_clique_communities( H, 5 ) )
deg = make_labels( kcq )
nx.draw_networkx( H, with_labels = False, ax = axs,
cmap = plt.cm.Reds, node_color = deg.values( ), edge_color = "cyan",
nodelist = deg.keys( ), node_size = 200, pos = pos )
for i, c in enumerate( kcq, 1 ):
row = ", ".join( [ G.node[ n ][ 'label' ] for n in c] )
print "%#2d (%d)\t"%(i, len(c)), ( row )[:100].strip() + (" ..." if len( row ) > 100 else "" )
fig = plt.figure( figsize = (12, 8) )
axs = fig.add_subplot( 1,1,1, axisbg = 'black', title = "Master cluster: 7-clique community", )
kcq = list( nx.community.k_clique_communities( H, 7 ) )
deg = make_labels( kcq )
nx.draw_networkx( H, with_labels = False, ax = axs,
cmap = plt.cm.Reds, node_color = deg.values( ), edge_color = "cyan",
nodelist = deg.keys( ), node_size = 200, pos = pos )
for i, c in enumerate( kcq, 1 ):
row = ", ".join( [ G.node[ n ][ 'label' ] for n in c] )
print "%#2d (%d)\t"%(i, len(c)), ( row )[:100].strip() + (" ..." if len( row ) > 100 else "" )
fig = plt.figure( figsize = (12, 8) )
axs = fig.add_subplot( 1,1,1, axisbg = 'black', title = "Master cluster: 4-clique communitites", )
kcq = list( nx.community.k_clique_communities( H, 4 ) )
deg = make_labels( kcq )
nx.draw_networkx( H, with_labels = False, ax = axs,
cmap = plt.cm.Reds, node_color = deg.values( ), edge_color = "cyan",
nodelist = deg.keys( ), node_size = 200, pos = pos )
for i, c in enumerate( kcq, 1 ):
row = ", ".join( [ G.node[ n ][ 'label' ] for n in c] )
print "%#2d (%d)\t"%(i, len(c)), ( row )[:100].strip() + (" ..." if len( row ) > 100 else "" )
fig = plt.figure( figsize = (12, 8) )
axs = fig.add_subplot( 1,1,1, axisbg = 'black', title = "Master cluster: label propagation", )
deg = make_labels( lab.values() )
nx.draw_networkx( H, with_labels = False, ax = axs,
cmap = plt.cm.Reds, node_color = deg.values( ), edge_color = "cyan",
nodelist = deg.keys( ), node_size = 200, pos = pos )
for i, c in enumerate( lab.values(), 1 ):
row = ", ".join( [ G.node[ n ][ 'label' ] for n in c] )
print "%#2d (%d)\t"%(i, len(c)), ( row )[:100].strip() + (" ..." if len( row ) > 100 else "" )
fig = plt.figure( figsize = (12, 8) )
axs = fig.add_subplot( 1,1,1, axisbg = 'black', title = "Master cluster: Markov Clustering", )
mcl = extract_communities( mcl_iter( nx.to_numpy_matrix( H, dtype = np.float ), p = 2, alpha = 2 )[ 0 ], lengths = False)
deg = make_labels( mcl, mapper = H.nodes() )
nx.draw_networkx( H, with_labels = False, ax = axs,
cmap = plt.cm.Reds, node_color = deg.values( ), edge_color = "cyan",
nodelist = deg.keys( ), node_size = 200, pos = pos )
for i, c in enumerate( mcl, 1 ):
row = ", ".join( [ G.node[ n ][ 'label' ] for n in c] )
print "%#2d (%d)\t"%(i, len(c)), ( row )[:100].strip() + (" ..." if len( row ) > 100 else "" )
"""
Explanation: Let's check how the Markov Clustering Algorithm fares against $k$-clique, and vertex labelling.
End of explanation
"""
|
google/jax
|
docs/jax-101/07-state.ipynb
|
apache-2.0
|
import jax
import jax.numpy as jnp
class Counter:
"""A simple counter."""
def __init__(self):
self.n = 0
def count(self) -> int:
"""Increments the counter and returns the new value."""
self.n += 1
return self.n
def reset(self):
"""Resets the counter to zero."""
self.n = 0
counter = Counter()
for _ in range(3):
print(counter.count())
"""
Explanation: Stateful Computations in JAX
Authors: Vladimir Mikulik
This section explores how JAX constrains the implementation of stateful programs.
Motivation
In machine learning, program state most often comes in the form of:
* model parameters,
* optimizer state, and
* stateful layers, such as BatchNorm.
Some JAX transformations, most notably jax.jit, impose constraints on the functions they transform. In particular, the function transformed by jax.jit must have no side-effects. This is because any such side-effects will only be executed once, when the python version of the function is run during compilation. These side-effects will not be executed by the compiled function on subsequent runs.
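As a small illustrative sketch of this behaviour (separate from the counter example below):
```python
@jax.jit
def impure_increment(x):
  print("tracing!")  # side-effect: runs only while JAX traces the function
  return x + 1

impure_increment(1)  # prints "tracing!" and returns 2
impure_increment(2)  # no print -- the cached, compiled function is reused
```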
Changing program state is one kind of side-effect. So, if we can't have side effects, how do we update model parameters, the optimizer state, and use stateful layers in our models? This colab will explain this in detail, but the short answer is: with functional programming.
A simple example: Counter
Let's start by looking at a simple stateful program: a counter.
End of explanation
"""
counter.reset()
fast_count = jax.jit(counter.count)
for _ in range(3):
print(fast_count())
"""
Explanation: The n attribute maintains the counter's state between successive calls of count. It is modified as a side effect of calling count.
Let's say we want to count fast, so we jax.jit the count method. (In this example, this wouldn't actually help speed anyway, for many reasons, but treat this as a toy model of wanting to JIT-compile the update of model parameters, where jax.jit makes an enormous difference).
End of explanation
"""
from typing import Tuple
CounterState = int
class CounterV2:
def count(self, n: CounterState) -> Tuple[int, CounterState]:
# You could just return n+1, but here we separate its role as
# the output and as the counter state for didactic purposes.
return n+1, n+1
def reset(self) -> CounterState:
return 0
counter = CounterV2()
state = counter.reset()
for _ in range(3):
value, state = counter.count(state)
print(value)
"""
Explanation: Oh no! Our counter isn't working. This is because the line
self.n += 1
in count is only called once, when JAX compiles the method call. Moreover, since the return value doesn't depend on the arguments to count, once it returns the first 1, subsequent calls to fast_count will always return 1. This won't do. So, how do we fix it?
The solution: explicit state
Part of the problem with our counter was that the returned value didn't depend on the arguments, meaning a constant was "baked into" the compiled output. But it shouldn't be a constant -- it should depend on the state. Well, then why don't we make the state into an argument?
End of explanation
"""
state = counter.reset()
fast_count = jax.jit(counter.count)
for _ in range(3):
value, state = fast_count(state)
print(value)
"""
Explanation: In this new version of Counter, we moved n to be an argument of count, and added another return value that represents the new, updated, state. To use this counter, we now need to keep track of the state explicitly. But in return, we can now safely jax.jit this counter:
End of explanation
"""
from typing import NamedTuple
class Params(NamedTuple):
weight: jnp.ndarray
bias: jnp.ndarray
def init(rng) -> Params:
"""Returns the initial model params."""
weights_key, bias_key = jax.random.split(rng)
weight = jax.random.normal(weights_key, ())
bias = jax.random.normal(bias_key, ())
return Params(weight, bias)
def loss(params: Params, x: jnp.ndarray, y: jnp.ndarray) -> jnp.ndarray:
"""Computes the least squares error of the model's predictions on x against y."""
pred = params.weight * x + params.bias
return jnp.mean((pred - y) ** 2)
LEARNING_RATE = 0.005
@jax.jit
def update(params: Params, x: jnp.ndarray, y: jnp.ndarray) -> Params:
"""Performs one SGD update step on params using the given data."""
grad = jax.grad(loss)(params, x, y)
# If we were using Adam or another stateful optimizer,
# we would also do something like
# ```
# updates, new_optimizer_state = optimizer(grad, optimizer_state)
# ```
# and then use `updates` instead of `grad` to actually update the params.
# (And we'd include `new_optimizer_state` in the output, naturally.)
new_params = jax.tree_map(
lambda param, g: param - g * LEARNING_RATE, params, grad)
return new_params
"""
Explanation: A general strategy
We can apply the same process to any stateful method to convert it into a stateless one. We took a class of the form
```python
class StatefulClass
state: State
def stateful_method(*args, **kwargs) -> Output:
```
and turned it into a class of the form
```python
class StatelessClass
def stateless_method(state: State, *args, **kwargs) -> (Output, State):
```
This is a common functional programming pattern, and, essentially, is the way that state is handled in all JAX programs.
Notice that the need for a class becomes less clear once we have rewritten it this way. We could just keep stateless_method, since the class is no longer doing any work. This is because, like the strategy we just applied, object-oriented programming (OOP) is a way to help programmers understand program state.
In our case, the CounterV2 class is nothing more than a namespace bringing all the functions that use CounterState into one location. Exercise for the reader: do you think it makes sense to keep it as a class?
Incidentally, you've already seen an example of this strategy in the JAX pseudo-randomness API, jax.random, shown in the Random Numbers section. Unlike Numpy, which manages random state using stateful classes, JAX requires the programmer to work directly with the random generator state -- the PRNGKey.
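For instance, a minimal sketch of that pattern, with the PRNG state threaded explicitly:
```python
key = jax.random.PRNGKey(0)
key, subkey = jax.random.split(key)     # derive a new subkey instead of mutating state
sample = jax.random.normal(subkey, ())  # use the subkey for this draw
```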
Simple worked example: Linear Regression
Let's apply this strategy to a simple machine learning model: linear regression via gradient descent.
Here, we only deal with one kind of state: the model parameters. But generally, you'll see many kinds of state being threaded in and out of JAX functions, like optimizer state, layer statistics for batchnorm, and others.
The function to look at carefully is update.
End of explanation
"""
import matplotlib.pyplot as plt
rng = jax.random.PRNGKey(42)
# Generate true data from y = w*x + b + noise
true_w, true_b = 2, -1
x_rng, noise_rng = jax.random.split(rng)
xs = jax.random.normal(x_rng, (128, 1))
noise = jax.random.normal(noise_rng, (128, 1)) * 0.5
ys = xs * true_w + true_b + noise
# Fit regression
params = init(rng)
for _ in range(1000):
params = update(params, xs, ys)
plt.scatter(xs, ys)
plt.plot(xs, params.weight * xs + params.bias, c='red', label='Model Prediction')
plt.legend();
"""
Explanation: Notice that we manually pipe the params in and out of the update function.
End of explanation
"""
|
rolando/scrapydo
|
notebooks/scrapydo-overview.ipynb
|
mit
|
import scrapydo
scrapydo.setup()
"""
Explanation: ScrapyDo Overview
ScrapyDo is a crochet-based blocking API for Scrapy. It allows the usage of Scrapy as a library, mainly aimed to be used in spiders prototyping and data exploration in IPython notebooks.
In this notebook we are going to show how to use scrapydo and how it helps to rapidly crawl and explore data. Our main premise is that we want to crawl the internet as a means to analyze data, not as an end in itself.
Initialization
The function setup must be called before any call to other functions.
End of explanation
"""
response = scrapydo.fetch("http://httpbin.org/get?show_env=1")
response
"""
Explanation: The fetch function and highlight helper
The fetch function returns a scrapy.Response object for a given URL.
End of explanation
"""
from scrapydo.utils import highlight
highlight(response.body, 'json')
response = scrapydo.fetch("http://httpbin.org")
highlight(response.body[:300])
highlight(response.css('p').extract())
highlight(response.headers, 'python')
"""
Explanation: The highlight function is a helper to highlight text content using the pygments module. It is very useful to inspect text content.
End of explanation
"""
# Some additional imports for our data exploration.
%matplotlib inline
import matplotlib.pylab as plt
import pandas as pd
import seaborn as sns
sns.set(context='poster', style='ticks')
"""
Explanation: The crawl function or how to do spider-less crawling
Here we are going to show to crawl an URL without defining a spider class and only using callback functions. This is very useful for quick crawling and data exploration.
End of explanation
"""
import scrapy
def parse_blog(response):
for url in response.css('ul li a::attr("href")').re(r'/\d\d\d\d/\d\d/$'):
yield scrapy.Request(response.urljoin(url), parse_titles)
def parse_titles(response):
for post_title in response.css('div.entries > ul > li a::text').extract():
yield {'title': post_title}
"""
Explanation: We replicate the example from scrapy.org by defining two callback functions to crawl the website http://blog.scrapinghub.com.
The function parse_blog(response) is going to extract the listing URLs and the function parse_titles(response) is going to extract the post titles from each listing page.
End of explanation
"""
items = scrapydo.crawl('http://blog.scrapinghub.com', parse_blog)
"""
Explanation: Once we have our callback functions for our target website, we simply call to scrapydo.crawl:
End of explanation
"""
df = pd.DataFrame(items)
df['length'] = df['title'].apply(len)
df[:5]
ax = df['length'].plot(kind='hist', bins=11)
ax2 = df['length'].plot(kind='kde', secondary_y=True, ax=ax)
ax2.set(ylabel="density")
ax.set(title="Title length distribution", xlim=(10, 80), ylabel="posts", xlabel="length");
"""
Explanation: Now that we have our data, we can start doing the fun part! Here we show the posts title length distribution.
End of explanation
"""
import os
os.environ['SCRAPY_SETTINGS_MODULE'] = 'dirbot.settings'
"""
Explanation: The run_spider function and running spiders from an existing project
The previous section showed how to do quick crawls to retrieve data. In this section we are going to show how to run spiders from existing scrapy projects, which can be useful for rapid spider prototyping as well as analysing the crawled data from a given spider.
We use a modified dirbot project, which is already accessible through the PYTHONPATH.
End of explanation
"""
import logging
logging.root.setLevel(logging.INFO)
"""
Explanation: We want to see the logging output, just as the scrapy crawl command would do. Hence we set the log level to INFO.
End of explanation
"""
from dirbot.spiders import dmoz
items = scrapydo.run_spider(dmoz.DmozSpider, settings={'CLOSESPIDER_ITEMCOUNT': 500})
"""
Explanation: The function run_spider allows to run any spider class and provide custom settings.
End of explanation
"""
highlight(items[:3], 'python')
from urlparse import urlparse
dmoz_items = pd.DataFrame(items)
dmoz_items['domain'] = dmoz_items['url'].apply(lambda url: urlparse(url).netloc.replace('www.', ''))
ax = dmoz_items.groupby('domain').apply(len).sort(inplace=False)[-10:].plot(kind='bar')
ax.set(title="Top 10 domains")
plt.setp(ax.xaxis.get_majorticklabels(), rotation=30);
"""
Explanation: In this way, there is less friction in using scrapy to mine data from the web, and we can quickly start exploring our data.
End of explanation
"""
|
mediagestalt/Adding-Context
|
Adding Context to Word Frequency Counts.ipynb
|
mit
|
# This is where the modules are imported
import nltk
from os import listdir
from os.path import splitext
from os.path import basename
from tabulate import tabulate
# These functions iterate through the directory and create a list of filenames
def list_textfiles(directory):
"Return a list of filenames ending in '.txt'"
textfiles = []
for filename in listdir(directory):
if filename.endswith(".txt"):
textfiles.append(directory + "/" + filename)
return textfiles
def remove_ext(filename):
"Removes the file extension, such as .txt"
name, extension = splitext(filename)
return name
def remove_dir(filepath):
"Removes the path from the file name"
name = basename(filepath)
return name
def get_filename(filepath):
"Removes the path and file extension from the file name"
filename = remove_ext(filepath)
name = remove_dir(filename)
return name
# These functions work on the content of the files
def read_file(filename):
"Read the contents of FILENAME and return as a string."
infile = open(filename)
contents = infile.read()
infile.close()
return contents
def count_in_list(item_to_count, list_to_search):
"Counts the number of a specified word within a list of words"
number_of_hits = 0
for item in list_to_search:
if item == item_to_count:
number_of_hits += 1
return number_of_hits
"""
Explanation: Adding Context to Word Frequency Counts
While the raw data from word frequency counts is compelling, it does little but describe quantitative features of the corpus. In order to determine if the statistics are indicative of a trend in word usage we must add value to the word frequencies. In this exercise we will produce a ratio of the occurrences of privacy to the number of words in the entire corpus. Then we will compare the occurrences of privacy to the number of individual transcripts within the corpus. This data will allow us to identify trends that are worthy of further investigation.
Finally, we will determine the number of words in the corpus as a whole and investigate the 50 most common words by creating a frequency plot. The last statistic we will generate is the type/token ratio, which is a measure of the variability of the words used in the corpus.
Part 1: Determining a ratio
To add context to our word frequency counts, we can work with the corpus in a number of different ways. One of the easiest is to compare the number of words in the entire corpus to the frequency of the word we are investigating.
Let's begin by calling on all the <span style="cursor:help;" title="a set of instructions that performs a specific task"><b>functions</b></span> we will need. Remember that the first few sentences are calling on pre-installed <i>Python</i> <span style="cursor:help;" title="packages of functions and code that serve specific purposes"><b>modules</b></span>, and anything with a def at the beginning is a custom function built specifically for these exercises. The text in red describes the purpose of the function.
End of explanation
"""
filenames = []
for files in list_textfiles('../Counting Word Frequencies/data'):
files = get_filename(files)
filenames.append(files)
corpus = []
for filename in list_textfiles('../Counting Word Frequencies/data'):
text = read_file(filename)
words = text.split()
clean = [w.lower() for w in words if w.isalpha()]
corpus.append(clean)
"""
Explanation: In the next piece of code we will cycle through our directory again: first assigning readable names to our files and storing them as a list in the variable filenames; then we will convert the text to lowercase, remove the punctuation, split it into a list of word tokens, and assign the words in each file to a list in the variable corpus.
End of explanation
"""
for words, names in zip(corpus, filenames):
print("Instances of the word \'privacy\' in", names, ":", count_in_list("privacy", words))
"""
Explanation: Here we recreate our list from the last exercise, counting the instances of the word privacy in each file.
End of explanation
"""
for files, names in zip(corpus, filenames):
print("There are", len(files), "words in", names)
"""
Explanation: Next we use the len function to count the total number of words in each file.
End of explanation
"""
print("Ratio of instances of privacy to total number of words in the corpus:")
for words, names in zip(corpus, filenames):
print('{:.6f}'.format(float(count_in_list("privacy", words))/(float(len(words)))),":",names)
"""
Explanation: Now we can calculate the ratio of the word privacy to the total number of words in the file. To accomplish this we simply divide the two numbers.
End of explanation
"""
raw = []
for i in range(len(corpus)):
raw.append(count_in_list("privacy", corpus[i]))
ratio = []
for i in range(len(corpus)):
ratio.append('{:.3f}'.format((float(count_in_list("privacy", corpus[i]))/(float(len(corpus[i])))) * 100))
table = zip(filenames, raw, ratio)
"""
Explanation: Now our descriptive statistics concerning word frequencies have added value. We can see that there has indeed been a steady increase in the frequency of the use of the word privacy in our corpus. When we investigate the yearly usage, we can see that the frequency almost doubled between 2008 and 2009, and that there was a dramatic increase between 2012 and 2014. This is also apparent in the difference between the 39th and the 40th sittings of Parliament.
Let's package all of the data together so it can be displayed as a table or exported to a CSV file. First we will write our values to a list: raw contains the raw frequencies, and ratio contains the ratios. Then we will create a <span style="cursor:help;" title="a type of list where the values are permanent"><b>tuple</b></span> that contains the filename variable and includes the corresponding raw and ratio variables. Here we'll generate the ratio as a percentage.
End of explanation
"""
print(tabulate(table, headers = ["Filename", "Raw", "Ratio %"], floatfmt=".3f", numalign="left"))
"""
Explanation: Using the tabulate module, we will display our tuple as a table.
End of explanation
"""
import csv
with open('privacyFreqTable.csv','wb') as f:
w = csv.writer(f)
w.writerows(table)
"""
Explanation: And finally, we will write the values to a CSV file called privacyFreqTable.
End of explanation
"""
corpus_1 = []
for filename in list_textfiles('../Counting Word Frequencies/data'):
text = read_file(filename)
words = text.split(" OFFICIAL REPORT (HANSARD)")
corpus_1.append(words)
"""
Explanation: Part 2: Counting the number of transcripts
Another way we can provide context is to process the corpus in a different way. Instead of splitting the data by word, we will split it in larger chunks pertaining to each individual transcript. Each transcript corresponds to a unique debate but starts with exactly the same formatting, making the files easy to split. The text below shows the beginning of a transcript. The first words are OFFICIAL REPORT (HANSARD).
<img src="hansardText.png">
Here we will pass the files to another variable, called corpus_1. Instead of removing capitalization and punctuation, all we will do is split the files at every occurrence of OFFICIAL REPORT (HANSARD).
End of explanation
"""
for files, names in zip(corpus_1, filenames):
print("There are", len(files), "files in", names)
"""
Explanation: Now, we can count the number of files in each dataset. This is also an important activity for error-checking. While it is easy to trust the numerical output of the code when it works successfully, we must always be sure to check that the code is actually performing in exactly the way we want it to. In this case, these numbers can be cross-referenced with the original XML data, where each transcript exists as its own file. A quick check of the directory shows that the numbers are correct.
End of explanation
"""
for names, files, words in zip(filenames, corpus_1, corpus):
print("In", names, "there were", len(files), "debates. The word privacy was said", \
count_in_list('privacy', words), "times.")
"""
Explanation: Here is a screenshot of some of the raw data. We can see that there are <u>97</u> files in 2006, <u>117</u> in 2007 and <u>93</u> in 2008. The rest of the data is also correct.
<img src="filecount.png">
Now we can compare the number of occurrences of privacy with the number of debates occurring in each dataset.
End of explanation
"""
corpus_3 = []
for filename in list_textfiles('../Counting Word Frequencies/data2'):
text = read_file(filename)
words = text.split()
clean = [w.lower() for w in words if w.isalpha()]
corpus_3.append(clean)
"""
Explanation: These numbers confirm our earlier results. There is a clear indication that the usage of the term privacy is increasing, with major changes occurring between the years 2008 and 2009, as well as between 2012 and 2014. This trend is also clearly observable between the 39th and 40th sittings of Parliament.
Part 3: Looking at the corpus as a whole
While chunking the corpus into pieces can help us understand the distribution or dispersion of words throughout the corpus, it's valuable to look at the corpus as a whole. Here we will create a third corpus variable corpus_3 that only contains the files named 39, 40, and 41. Note the new directory named data2. We only need these files; if we used all of the files we would literally duplicate the results.
End of explanation
"""
large = list(sum(corpus_3, []))
"""
Explanation: Now we will combine the three lists into one large list and assign it to the variable large.
End of explanation
"""
print("There are", count_in_list('privacy', large), "occurences of the word 'privacy' and a total of", \
len(large), "words.")
print("The ratio of instances of privacy to total number of words in the corpus is:", \
'{:.6f}'.format(float(count_in_list("privacy", large))/(float(len(large)))), "or", \
'{:.3f}'.format((float(count_in_list("privacy", large))/(float(len(large)))) * 100),"%")
"""
Explanation: We can use the same calculations to determine the total number of occurrences of privacy, as well as the total number of words in the corpus. We can also calculate the total ratio of privacy to the total number of words.
End of explanation
"""
print("There are", (len(set(large))), "unique words in the Hansard corpus.")
"""
Explanation: Another type of word frequency statistic we can generate is a type/token ratio. The types are the total number of unique words in the corpus, while the tokens are the total number of words. The type/token ratio is used to determine the variability of the language used in the text. The higher the ratio, the more complex the text will be. First we'll determine the total number of types, using <i>Python's</i> set function.
End of explanation
"""
print("The type/token ratio is:", ('{:.6f}'.format(len(set(large))/(float(len(large))))), "or",\
'{:.3f}'.format(len(set(large))/(float(len(large)))*100),"%")
"""
Explanation: Now we can divide the types by the tokens to determine the ratio.
End of explanation
"""
text = nltk.Text(large)
fd = nltk.FreqDist(text)
"""
Explanation: Finally, we will use the NLTK module to create a graph that shows the top 50 most frequent words in the Hansard corpus. Although privacy will not appear in the graph, it's always interesting to see what types of words are most common, and what their distribution is. NLTK will be introduced in more detail in the next section featuring concordance outputs, but here all we need to know is that we assign our variable large to the NLTK function Text in order to work with the corpus data. From there we can determine the frequency distribution for the whole text.
End of explanation
"""
%matplotlib inline
fd.plot(50,cumulative=False)
"""
Explanation: Here we will assign the frequency distribution to the plot function to produce a graph. While it's a little hard to read, the most commonly used word in the Hansard corpus is the, with a frequency just over 400,000 occurrences. The next most frequent word is to, which only has a frequency of about 225,000 occurrences, almost half of the first most common word. The first 10 most frequent words appear with a much greater frequency than any of the other words in the corpus.
End of explanation
"""
fd.hapaxes()
"""
Explanation: Another feature of the NLTK frequency distribution function is the generation of a list of hapaxes. These are words that appear only once in the entire corpus. While not meaningful for this study, it's an interesting way to explore the data.
End of explanation
"""
|
dataspecialiste/sagacite
|
DSE220x-MLFundamentals/Week-1/NN_spine/Nearest_neighbor_spine.ipynb
|
mit
|
import numpy as np
"""
Explanation: Nearest neighbor for spine injury classification
In this homework notebook we use nearest neighbor classification to classify back injuries for patients in a hospital, based on measurements of the shape and orientation of their pelvis and spine.
The data set contains information from 310 patients. For each patient, there are six measurements (the x) and a label (the y). The label has 3 possible values, ’NO’ (normal), ’DH’ (herniated disk), or ’SL’ (spondylolisthesis).
Note: Before attempting this homework, please go through the <font color="magenta">Nearest neighbor for handwritten digit recognition</font> notebook.
1. Setup notebook
We import all necessary packages for the homework. Notice that we do NOT import any of the sklearn packages. This is because we want you to implement a nearest neighbor classifier manually, as in the <font color="magenta">Nearest neighbor for handwritten digit recognition</font> notebook.
End of explanation
"""
# Load data set and code labels as 0 = ’NO’, 1 = ’DH’, 2 = ’SL’
labels = [b'NO', b'DH', b'SL']
data = np.loadtxt('column_3C.dat', converters={6: lambda s: labels.index(s)} )
# Separate features from labels
x = data[:,0:6]
y = data[:,6]
# Divide into training and test set
training_indices = list(range(0,20)) + list(range(40,188)) + list(range(230,310))
test_indices = list(range(20,40)) + list(range(188,230))
trainx = x[training_indices,:]
trainy = y[training_indices]
testx = x[test_indices,:]
testy = y[test_indices]
"""
Explanation: We now load the dataset. We divide the data into a training set of 248 patients and a separate test set of 62 patients. The following arrays are created:
trainx : The training data's features, one point per row.
trainy : The training data's labels.
testx : The test data's features, one point per row.
testy : The test data's labels.
We will use the training set (trainx and trainy), with nearest neighbor classification, to predict labels for the test data (testx). We will then compare these predictions with the correct labels, testy.
Notice that we code the three labels as 0. = ’NO’, 1. = ’DH’, 2. = ’SL’.
End of explanation
"""
# Modify this Cell
def NN_L2(trainx, trainy, testx):
# inputs: trainx, trainy, testx <-- as defined above
# output: an np.array of the predicted values for testy
### BEGIN SOLUTION
### END SOLUTION
"""
Explanation: 2. Nearest neighbor classification with L2 distance
In this exercise we will build a nearest neighbor classifier based on L2 (Euclidean) distance.
<font color="magenta">For you to do:</font> Write a function, NN_L2, which takes as input the training data (trainx and trainy) and the test points (testx) and predicts labels for these test points using 1-NN classification. These labels should be returned in a numpy array with one entry per test point. For NN_L2, the L2 norm should be used as the distance metric.
<font style="color:blue"> Code</font>
```python
# test function
testy_L2 = NN_L2(trainx, trainy, testx)
print( type( testy_L2) )
print( len(testy_L2) )
print( testy_L2[40:50] )
```
<font style="color:magenta"> Output</font>
<class 'numpy.ndarray'>
62
[ 2. 2. 1. 0. 0. 2. 0. 0. 0. 0.]
End of explanation
"""
testy_L2 = NN_L2(trainx, trainy, testx)
assert( type( testy_L2).__name__ == 'ndarray' )
assert( len(testy_L2) == 62 )
assert( np.all( testy_L2[50:60] == [ 0., 0., 0., 0., 2., 0., 2., 0., 0., 0.] ) )
assert( np.all( testy_L2[0:10] == [ 0., 0., 0., 1., 1., 0., 1., 0., 0., 1.] ) )
"""
Explanation: After you are done, run the cell below to check your function. If an error is triggered, you should go back and revise your function.
End of explanation
"""
# Modify this Cell
def NN_L1(trainx, trainy, testx):
# inputs: trainx, trainy, testx <-- as defined above
# output: an np.array of the predicted values for testy
### BEGIN SOLUTION
### END SOLUTION
"""
Explanation: 3. Nearest neighbor classification with L1 distance
We now compute nearest neighbors using the L1 distance (sometimes called Manhattan Distance).
<font color="magenta">For you to do:</font> Write a function, NN_L1, which again takes as input the arrays trainx, trainy, and testx, and predicts labels for the test points using 1-nearest neighbor classification. For NN_L1, the L1 distance metric should be used. As before, the predicted labels should be returned in a numpy array with one entry per test point.
Notice that NN_L1 and NN_L2 may well produce different predictions on the test set.
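As a sketch, only the distance computation changes relative to the L2 version; for example (again using a separate name so it does not stand in for your own answer):
```python
def NN_L1_sketch(trainx, trainy, testx):
    # L1 (Manhattan) distance from every test point to every training point
    dists = np.sum(np.abs(testx[:, None, :] - trainx[None, :, :]), axis=2)
    return trainy[np.argmin(dists, axis=1)]
```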
<font style="color:blue"> Code</font>
```python
# test function
testy_L2 = NN_L2(trainx, trainy, testx)
testy_L1 = NN_L1(trainx, trainy, testx)
print( type( testy_L1) )
print( len(testy_L1) )
print( testy_L1[40:50] )
print( all(testy_L1 == testy_L2) )
```
<font style="color:magenta"> Output</font>
<class 'numpy.ndarray'>
62
[ 2. 2. 0. 0. 0. 0. 0. 0. 0. 0.]
False
End of explanation
"""
testy_L1 = NN_L1(trainx, trainy, testx)
testy_L2 = NN_L2(trainx, trainy, testx)
assert( type( testy_L1).__name__ == 'ndarray' )
assert( len(testy_L1) == 62 )
assert( not all(testy_L1 == testy_L2) )
assert( all(testy_L1[50:60]== [ 0., 2., 1., 0., 2., 0., 0., 0., 0., 0.]) )
assert( all( testy_L1[0:10] == [ 0., 0., 0., 0., 1., 0., 1., 0., 0., 1.]) )
"""
Explanation: Again, use the following cell to check your code.
End of explanation
"""
def error_rate(testy, testy_fit):
return float(sum(testy!=testy_fit))/len(testy)
print("Error rate of NN_L1: ", error_rate(testy,testy_L1) )
print("Error rate of NN_L2: ", error_rate(testy,testy_L2) )
"""
Explanation: 4. Test errors and the confusion matrix
Let's see if the L1 and L2 distance functions yield different error rates for nearest neighbor classification of the test data.
End of explanation
"""
# Modify this cell
def confusion(testy,testy_fit):
# inputs: the correct labels, the fitted NN labels
# output: a 3x3 np.array representing the confusion matrix as above
### BEGIN SOLUTION
### END SOLUTION
"""
Explanation: We will now look a bit more deeply into the specific types of errors made by nearest neighbor classification, by constructing the <font color="magenta">confusion matrix</font>.
Since there are three labels, the confusion matrix is a 3x3 matrix whose rows correspond to the true label and whose columns correspond to the predicted label. For example, the entry at row DH, column SL, contains the number of test points whose correct label was DH but which were classified as SL.
<img style="width:200px" src="confusion_matrix.png">
Write a function, confusion, which takes as input the true labels for the test set (that is, testy) as well as the predicted labels and returns the confusion matrix. The confusion matrix should be a np.array of shape (3,3) .
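One possible sketch, assuming the labels are coded 0., 1., 2. as above (rows are true labels, columns are predictions):
```python
def confusion_sketch(testy, testy_fit):
    conf = np.zeros((3, 3))
    # Count each (true, predicted) pair
    for true_label, pred_label in zip(testy, testy_fit):
        conf[int(true_label), int(pred_label)] += 1
    return conf
```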
<font style="color:blue"> Code</font>
```python
L2_neo = confusion(testy, testy_L2)
print( type(L2_neo) )
print( L2_neo.shape )
print( L2_neo )
```
<font style="color:magenta"> Output</font>
<class 'numpy.ndarray'>
(3, 3)
[[ 0. 9. 2.]
[ 0. 0. 0.]
[ 3. 0. 0.]]
End of explanation
"""
# Test Function
L1_neo = confusion(testy, testy_L1)
assert( type(L1_neo).__name__ == 'ndarray' )
assert( L1_neo.shape == (3,3) )
assert( np.all(L1_neo == [[ 0., 2., 2.],[ 10., 0., 0.],[ 0., 0., 0.]]) )
L2_neo = confusion(testy, testy_L2)
assert( np.all(L2_neo == [[ 0., 1., 2.],[ 10., 0., 0.],[ 0., 0., 0.]]) )
"""
Explanation: Now check your code by running the following cell.
End of explanation
"""
|
gshguru/uwseds
|
Homework1/analysis/Homework1 - Analysis.ipynb
|
mit
|
%matplotlib inline
import matplotlib
import matplotlib.pyplot as plt
import seaborn
seaborn.set()
matplotlib.rcParams['figure.figsize'] = (15, 8)
import numpy as np
import pandas as pd
data = pd.read_csv("../data/4xy5-26gy.csv", parse_dates=['date'], index_col=['date'])
data.head()
"""
Explanation: Homework 1: Data basics
Due: 1/12/2016
Problem:
Obtain the CSV (comma separated variable) file containing the counts of bicycles crossing the Fremont Bridge since 2012 (as described here).
Create a project directory with subdirectories for data and analysis, and create a README file.
Download the data from here. Put the it into the data directory.
Create an iPython Notebook to analyze these data.
In the notebook:
Isolate the 2015 data, creating fields for date, hour, and count;
use python matplotlib to plot the counts by hour;
compute the hourly average counts;
determine what is the busiest hour of the day.
End of explanation
"""
data['datetime'] = data.index.copy()
data.head()
data['year'] = data['datetime'].apply(lambda x: x.year)
data.head()
data = data[data['year'] == 2015]
data = data.drop('year', axis=1)
data['date'] = data['datetime'].apply(lambda x: x.date())
data.head()
data['hour'] = data['datetime'].apply(lambda x: x.time().hour)
data.head()
date_hour_data = data.groupby(['date','hour']).sum()
date_hour_data.head()
"""
Explanation: 1. Isolate the 2015 data, creating fields for date, hour, and count;
End of explanation
"""
hour_data = data.groupby('hour').sum()
ax = hour_data.plot(kind='bar',title='Rides/Hour', width=0.8)
ax.set_ylabel("Bicycle counts")
"""
Explanation: 2. use python matplotlib to plot the counts by hour
End of explanation
"""
hour_mean = data.groupby('hour').mean()
hour_mean
"""
Explanation: 3. compute the hourly average counts
End of explanation
"""
hour_mean.sort_values('fremont_bridge_nb', ascending=False).head(5)
hour_mean.sort_values('fremont_bridge_sb', ascending=False).head(5)
"""
Explanation: 4. Determine what is the busiest hour of the day
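As an alternative to sorting, idxmax reads the busiest hour off the hourly means directly (a short sketch using the column names from the cells above):
```python
# Hour with the largest mean count in each direction
print(hour_mean['fremont_bridge_nb'].idxmax())
print(hour_mean['fremont_bridge_sb'].idxmax())
```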
End of explanation
"""
|
kfollette/AST337-Fall2017
|
Labs/Lab10/Lab10.ipynb
|
mit
|
# The standard fare, plus a few extra packages:
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import astropy.io.fits as fits
import os.path
%matplotlib inline
# Newer packages:
from astropy.stats import mad_std
from astropy.stats import sigma_clip
from photutils.utils import calc_total_error
import astropy.stats as stat
from photutils import aperture_photometry, CircularAperture, CircularAnnulus, DAOStarFinder
"""
Explanation: Lab 10 - Calibrating Photometry and Creating a CMD
Names:
Overview
In the previous lab, you extracted aperture photometry for all of the stars in a given image, and experimented with different parameters to understand how identifying stars and measuring their fluxes can change based on how you choose various parameters. This week, we want to use that information to do the following:
Calculate the instrumental magnitudes of the standard stars and M52 stars, based on your flux measurements.
Determine the zeropoint: the difference between the instrumental magnitude of a standard star and its known, calibrated magnitude value in a given photometric system.
Use the zeropoint to calibrate the photometry of the cluster stars.
Estimate the errors on our measured fluxes, and propagate these errors into our final magnitude calculations.
Plot a V vs. V-R color-magnitude diagram for M52!
Data for this Lab: M52 Images from Homework 10
In this lab, you will work with the aligned V-band and R-band images of M52 that you created for Homework 10. Start by downloading your two final aligned M52 FITS files into this directory.
10.1 - Identifying the Standard Star
We've provided reduced, aligned frames of the standard star for the M52 dataset, SA 114637.
The first step is to identify the standard star in the provided reduced images. Use iObserve to look up the star and get a DSS image roughly the same size as the Smith detector. You may have to flip the image N-S or E-W to match the orientation of the Smith images.
(1) Include a screenshot of the DSS image from iObserve below.
(2) Decide which of the stars in the Smith images (either V or R band -- they are aligned to each other) is the standard star. Check your answer with one other neighboring group. Place a circle region around the star in the Smith image and put a screenshot below.
(3) What are the x-y coordinates of the standard star in the Smith image?
Answer:
10.2 - Aperture Photometry on the Standard Star and M52
Following the work from the previous lab, you will extract the photometry for each of the final reduced images.
End of explanation
"""
def bg_error_estimate(fitsfile):
"""
Edit this docstring
"""
fitsdata = fits.getdata(fitsfile)
hdr = fits.getheader(fitsfile)
# What is happening in the next step? Read the docstring for sigma_clip.
# Answer:
filtered_data = sigma_clip(fitsdata, sigma=3.,copy=False)
# Summarize the following steps:
#
#
#
bkg_values_nan = filtered_data.filled(fill_value=np.nan)
bkg_error = np.sqrt(bkg_values_nan)
bkg_error[np.isnan(bkg_error)] = np.nanmedian(bkg_error)
print("Writing the background-only error image: ", fitsfile.split('.')[0]+"_bgerror.fit")
fits.writeto(fitsfile.split('.')[0]+"_bgerror.fit", bkg_error, hdr, overwrite=True)
effective_gain = 1.4 # electrons per ADU
error_image = calc_total_error(fitsdata, bkg_error, effective_gain)
print("Writing the total error image: ", fitsfile.split('.')[0]+"_error.fit")
fits.writeto(fitsfile.split('.')[0]+"_error.fit", error_image, hdr, overwrite=True)
return error_image
"""
Explanation: Creating Photometry Functions
We would like our photometry module to have two functions:
(1) A function to extract the stars in a given image,
and
(2) A function to perform photometry on that image, with adjustable parameters, which provides errors on the measurements.
Measuring Background Error
The first step is to use a new function (provided below) to estimate the background level. Comment this function and add a docstring, and we will apply it shortly.
End of explanation
"""
# Star extraction function -- this function can also be used to return the x and y positions to the notebook for later use:
# target_filter_xpos, target_filter_ypos = starExtractor("image.fit", nsigma_value=#, fwhm_value=#)
def starExtractor(fitsfile, nsigma_value, fwhm_value):
"""
This is an incomplete function! Asterisks denote a step where you need to complete the code.
Also, replace this with your docstring, including how to use the function.
"""
# First, check if the region file exists yet, so it doesn't get overwritten
regionfile = fitsfile.split(".")[0] + ".reg"
if os.path.exists(regionfile) == True:
print(regionfile, "already exists in this directory. Rename or remove the .reg file and run again.")
return
# *** Read in the data from the fits file ***
image =
# *** Measure the median absolute standard deviation of the image: ***
bkg_sigma =
# *** Define the parameters for DAOStarFinder ***
daofind =
# Apply DAOStarFinder to the image
sources = daofind(image)
nstars = len(sources)
print("Number of stars found in ",fitsfile,":", nstars)
# Define arrays of x-position and y-position
xpos = np.array(sources['xcentroid'])
ypos = np.array(sources['ycentroid'])
# Write the positions to a .reg file based on the input file name
if os.path.exists(regionfile) == False:
f = open(regionfile, 'w')
for i in range(0,len(xpos)):
f.write('circle '+str(xpos[i])+' '+str(ypos[i])+' '+str(fwhm_value)+'\n')
f.close()
print("Wrote ", regionfile)
return xpos, ypos # Return the x and y positions of each star as variables
# Photometry function, which returns a table of photometry values for a list of stars
# This function can be used as
# target_phot_table = measurePhotometry(file, star_xpos, star_ypos, aperture_radius, sky_inner, sky_outer, error_array)
def measurePhotometry(fitsfile, star_xpos, star_ypos, aperture_radius, sky_inner, sky_outer, error_array):
"""
Add a docstring here. Add comments at the # below
"""
# *** Read in the data from the fits file:
image =
starapertures = CircularAperture((star_xpos, star_ypos),r = aperture_radius)
skyannuli = CircularAnnulus((star_xpos, star_ypos), r_in = sky_inner, r_out = sky_outer)
phot_apers = [starapertures, skyannuli]
# What is new about the way we're calling aperture_photometry?
# *** Add descriptive comments here
phot_table = aperture_photometry(image, phot_apers, error=error_array)
# Calculate mean background in annulus and subtract from aperture flux
bkg_mean = phot_table['aperture_sum_1'] / skyannuli.area()
bkg_starap_sum = bkg_mean * starapertures.area()
final_sum = phot_table['aperture_sum_0']-bkg_starap_sum
phot_table['bg_subtracted_star_counts'] = final_sum
# *** Add descriptive comments here.
bkg_mean_err = phot_table['aperture_sum_err_1'] / skyannuli.area()
bkg_sum_err = bkg_mean_err * starapertures.area()
# *** Add descriptive comments here.
phot_table['bg_sub_star_cts_err'] = np.sqrt((phot_table['aperture_sum_err_0']**2)+(bkg_sum_err**2))
return phot_table
"""
Explanation: (1): What is the purpose of the calc_total_error step? How does it relate to the CCD equation? (The docstring and the section on Error Estimation at https://photutils.readthedocs.io/en/stable/aperture.html are helpful resources here.)
Answer:
Part (a): Creating functions for a photometry module
In the following two cells are incomplete functions that you will edit and run. You may need to refer to Lab 9 to remember how to complete various steps. Anywhere you see asterisks is a place you will need to edit. Comment as necessary -- some places have prompts for you to comment specifically.
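For reference only (the point of the exercise is to fill in the blanks yourself), the starred steps of starExtractor could look something like the sketch below; a threshold of nsigma_value * bkg_sigma is one common choice for DAOStarFinder, not the only one:
```python
def starExtractor_sketch(fitsfile, nsigma_value, fwhm_value):
    # Read the image data from the FITS file
    image = fits.getdata(fitsfile)
    # Robust estimate of the background noise level
    bkg_sigma = mad_std(image)
    # Configure DAOStarFinder with the requested FWHM and detection threshold
    daofind = DAOStarFinder(fwhm=fwhm_value, threshold=nsigma_value * bkg_sigma)
    sources = daofind(image)
    return np.array(sources['xcentroid']), np.array(sources['ycentroid'])
```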
End of explanation
"""
# Measure the background of the image
std_V_bgerror = bg_error_estimate("std114637_V_stack.fit")
"""
Explanation: Part (b): Extracting photometry for the standard star images
(1) Take a look at the standard star images. The data for SA 114637 are not the highest quality. When examining the reduced images, also think back to the conditions during our first observing night at Smith. What factors might be affecting image quality and how?
Answer:
In the cells below, we will complete the following steps for the reduced standard star V-band image:
* Measure the background of the image
* Extract the star positions
* Use the star positions and background error to measure the photometry
End of explanation
"""
# Extract the star positions. Replace ?? with values for the extraction parameters that capture the stars of interest.
std_V_xpos, std_V_ypos = starExtractor("std114637_V_stack.fit", nsigma_value=??, fwhm_value=??)
"""
Explanation: (2): Open the resulting image, std114637_Vstack_error.fit, in ds9. What do you notice about the image? Describe quantitatively.
Answer:
End of explanation
"""
# Measure photometry for the V band image. Replace ?? with reasonable values
std_V_phottable = measurePhotometry("std114637_V_stack.fit", star_xpos=std_V_xpos, star_ypos=std_V_ypos, \
aperture_radius=??, sky_inner=??, sky_outer=??, error_array=std_V_bgerror)
"""
Explanation: (3): Open the standard star image in ds9 and load the regions it created. Was your standard star identified correctly? If not, why/what did you have to change? (Pause and check at this point.)
Answer:
End of explanation
"""
# Print a single row from the array with the standard star only
std_V_phottable[??]
"""
Explanation: (4): Check the resulting photometry table below. What index is your standard star? (Refer back to the earlier portion of the lab where you measured the standard star position in the image)
Answer:
End of explanation
"""
# Measure the background of the R image
# Measure photometry for the R band image.
# NOTE: Use std_V_xpos and std_V_ypos to extract photometry for the same stars in the same locations & order!
# Print a single row from the array with the standard star only
std_R_phottable[??]
"""
Explanation: In the following cells, repeat the above procedure for the standard star image, this time in R band.
Important Note:
Now that we have the positions in V band, we can skip the extraction step in the other filter (since the images are aligned to each other!)
End of explanation
"""
columns = ['id','xcenter', 'ycenter','Vflux','Vfluxerr','Rflux','Rfluxerr']
std_fluxtable = pd.DataFrame(
{'id' : std_V_phottable['id'],
'xcenter' : std_V_phottable['xcenter'],
'ycenter' : std_V_phottable['ycenter'],
'Vflux' : std_V_phottable['bg_subtracted_star_counts'],
'Vfluxerr': std_V_phottable['bg_sub_star_cts_err'],
'Rflux' : std_R_phottable['bg_subtracted_star_counts'],
'Rfluxerr': std_R_phottable['bg_sub_star_cts_err']}, columns=columns)
# Below, check the dataframe to ensure that the combination worked:
std_fluxtable.head()
"""
Explanation: We can now stitch together the results into a streamlined pandas dataframe, by defining the dataframe index and column labels of interest:
End of explanation
"""
# M52 V-band analysis -- add any blank cells as needed
# Measure the background of the image
M52_V_bgerror =
# Extract the star positions and save them to new variables.
# Do you have too few stars? Too many? Check the quality of the extraction.
M52_V_xpos, M52_V_ypos =
"""
Explanation: Part (c): Extracting photometry for the M52 cluster stars
In the cells below, you will repeat the process with the actual cluster images you made for M52 in both V and R bands. Adjust the function inputs (e.g., star extraction) and the photometry parameters accordingly, and we will also estimate the combined errors in the final image and save these results as FITS files.
In the last cell, create a single pandas flux dataframe for the M52 cluster image (M52_fluxtable, just like the std_fluxtable). We will use these to calibrate the photometry in the next section.
End of explanation
"""
# Measure photometry
M52_V_phottable =
# Check the M52 photometry table
M52_V_phottable
# Follow the same approach for the R band images, using the M52 V band star locations.
# Add blank cells below as needed.
M52_R_bgerror =
M52_R_phottable =
# Finally, combine the photometry into a single pandas dataframe for M52.
M52_fluxtable =
# Check the contents of the new dataframe here
"""
Explanation: Once you have decided on extraction parameters, include a screenshot here of the ds9 image of M52 with the regions overlaid.
[screenshot]
End of explanation
"""
# Your code here for the standard star dataframe
# Check that new columns were added
std_fluxtable.head()
# Your code here for the M52 dataframe
# Check that new columns were added
M52_fluxtable.head()
"""
Explanation: 10.3 - Calculating Instrumental Magnitudes and Measuring Zeropoints
Note that when calculating the zeropoint, units are in flux per second, so it's critically important that we divide by the exposure time of the image.
(1): What would happen if we simply compared the measured fluxes without performing this scaling for exposure time first? By what factor would our estimates of the cluster star magnitudes be incorrect?
Answer:
Recall the relation for instrumental magnitude:
$m_{inst} = -2.5 \log_{10}(\textit{flux in 1 second})$
Part (a): Update the pandas dataframes for both the standard star and M52 images, by retrieving the exposure times from the FITS headers and adding new columns, "Vflux_1sec" and "Rflux_1sec", and "Vflux_1sec_err" and "Rflux_1sec_err":
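A minimal sketch of one way to do this for the standard star table is below; the EXPTIME header keyword and the R-band file name are assumptions (check your own headers and file names), and the same pattern applies to the M52 table with its own exposure times:
```python
# Exposure times from the FITS headers (EXPTIME is an assumed keyword -- check your headers)
V_exptime = fits.getheader("std114637_V_stack.fit")['EXPTIME']
R_exptime = fits.getheader("std114637_R_stack.fit")['EXPTIME']  # assumed file name

# Scale the fluxes and their errors to counts per second
std_fluxtable['Vflux_1sec'] = std_fluxtable['Vflux'] / V_exptime
std_fluxtable['Vflux_1sec_err'] = std_fluxtable['Vfluxerr'] / V_exptime
std_fluxtable['Rflux_1sec'] = std_fluxtable['Rflux'] / R_exptime
std_fluxtable['Rflux_1sec_err'] = std_fluxtable['Rfluxerr'] / R_exptime
```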
End of explanation
"""
# Your code here
"""
Explanation: Part (b): Calculate the instrumental magnitudes (label them "V_inst" and "R_inst"), and also add these as new columns to your ever-growing standard star and M52 dataframes.
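For example, a sketch for the standard star table (repeat for the M52 dataframe):
```python
# Instrumental magnitude from the flux per second
std_fluxtable['V_inst'] = -2.5 * np.log10(std_fluxtable['Vflux_1sec'])
std_fluxtable['R_inst'] = -2.5 * np.log10(std_fluxtable['Rflux_1sec'])
```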
End of explanation
"""
# Your code here
# Check the tables again in the following cells
# Print the row in the standard star table corresponding to the standard star
"""
Explanation: Part (c): Propagate errors on the fluxes into errors on the instrumental magnitudes:
We discussed in lecture that the uncertainty propagation for log values goes as follows:
If $x = k * \log(a)$, where $k$ is a constant value, then we can evaluate the uncertainty on $x$, that is $\sigma_{x}$, as:
$\sigma_{x} = k * 0.434 \frac{\sigma_{a}}{a}$
Add these to your two pandas dataframes as new columns,
"Vinst_err", "Rinst_err"
End of explanation
"""
magzp_V =
magzp_V_error =
magzp_R =
magzp_R_error =
print("Zeropoint in V: ", magzp_V, "+/-", magzp_V_error)
print("Zeropoint in R: ", magzp_R, "+/-", magzp_R_error)
"""
Explanation: Part(d): Look up the actual magnitude values for SA 114637 using the Simbad Astronomical Database:
http://simbad.u-strasbg.fr/simbad/sim-fid
(2) What are the V and R band magnitudes for this standard star, and what are their uncertainties? (bracketed values)
Answer:
Now we can use these values for the standard star to determine the zeropoints, as follows:
$m_{calibrated} = m_{inst} + zeropoint$
So to determine the zeropoints, we will need to propagate uncertainties. (Writing zeropoint as magzp):
$magzp \pm \sigma_{magzp}$ = ($m_{calib} \pm \sigma_{m_{calib}}$) - ($m_{inst} \pm \sigma_{m_{inst}}$ )
(3) How do you calculate the value of $\sigma_{magzp}$?
Answer:
First, you'll need to locate the row coresponding to the standard star in the standard star image dataframe.
(4) What are the V and R band instrumental magnitudes and uncertainties of the standard star?
Answer:
Finally, in the cell below calculate the zeropoints for each band. Before proceeding to 10.4, check your method of calculating the uncertainty on each zeropoint from the errors on the instrumental and calibrated magnitudes. After you've estimated the zeropoints, pause here and check your values with us and other groups.
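One possible structure for that cell is sketched below; V_known, V_known_err, R_known, and R_known_err are placeholders for the Simbad values you looked up, and std_row is a placeholder for the index of the standard star in std_fluxtable:
```python
std_row = 0  # placeholder: replace with the index of the standard star in your table
magzp_V = V_known - std_fluxtable.loc[std_row, 'V_inst']
# Errors on a difference add in quadrature
magzp_V_error = np.sqrt(V_known_err**2 + std_fluxtable.loc[std_row, 'Vinst_err']**2)
magzp_R = R_known - std_fluxtable.loc[std_row, 'R_inst']
magzp_R_error = np.sqrt(R_known_err**2 + std_fluxtable.loc[std_row, 'Rinst_err']**2)
```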
End of explanation
"""
# Define your function here
def zpcalc(magzp, magzp_err, filtername, dataframe):
"""
Explanation: 10.4 - Calibrate Cluster Photometry using the Zeropoint Offsets
Now that we have the zeropoint values in hand, we can efficiently calibrate all of the fluxes in the cluster dataframe. Write a quick function below that takes in a zeropoint value, its error, and its corresponding filter, then operates on the series in the pandas dataframe and adds new columns for the final calibrated magnitudes and their uncertainties: Vmag, Vmag_err, Rmag, and Rmag_err.
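A minimal sketch of such a function, assuming the instrumental-magnitude columns are named V_inst / R_inst and their errors Vinst_err / Rinst_err as above:
```python
def zpcalc_sketch(magzp, magzp_err, filtername, dataframe):
    # Calibrated magnitude: m_cal = m_inst + zeropoint
    dataframe[filtername + 'mag'] = dataframe[filtername + '_inst'] + magzp
    # Errors on a sum add in quadrature
    dataframe[filtername + 'mag_err'] = np.sqrt(dataframe[filtername + 'inst_err']**2 + magzp_err**2)
```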
End of explanation
"""
# Apply function to dataframes
zpcalc(magzp_V, magzp_V_error, "V", M52_fluxtable)
zpcalc(magzp_R, magzp_R_error, "R", M52_fluxtable)
# Look at dataframe snippet
M52_fluxtable.head()
"""
Explanation: Apply your function to finalize your M52 data table, and show a portion of the dataframe below:
End of explanation
"""
# Add V-R column and V-R error column
# Finally, save both calibrated dataframes (standard and M52) here as .csv files;
# These can later be read into Excel, Google Sheets, back into pandas, etc. for future use
std_fluxtable.to_csv('SA114637_photometry.csv')
M52_fluxtable.to_csv('M52_photometry.csv')
"""
Explanation: The final step for our photometry table is to calculate a color, namely V-R. Add this quantity as a final column, with its error propagated from the errors on V and R (added in quadrature). Then we will write the table to a file and save it!
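For example (a sketch; the V-R and V-R_err column names are just one choice):
```python
M52_fluxtable['V-R'] = M52_fluxtable['Vmag'] - M52_fluxtable['Rmag']
# Error on a difference: quadrature sum of the individual errors
M52_fluxtable['V-R_err'] = np.sqrt(M52_fluxtable['Vmag_err']**2 + M52_fluxtable['Rmag_err']**2)
```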
End of explanation
"""
# your CMD here:
plt.figure(figsize=(10,10))
"""
Explanation: Congratulations! You have converted your raw FITS datasets into usable science products, and have measured and calibrated the photometry to a magnitude system that is both universally recognized by other astronomers and includes reasonable error estimates.
One thing we have not done is compare our estimated calibrated photometry for M52 with the known literature values -- this is something you will explore shortly.
Let's see what the data look like!
10.5 - Plotting the Color-Magnitude Diagram
To plot data with errorbars, we'll use a slightly different plotting method in matplotlib, called plt.errorbar. It can be used as follows:
plt.errorbar(x_data, y_data, xerr = x_errors, yerr = y_errors, marker = 'o', linestyle='None')
In the cell below, plot the V vs. V-R color-magnitude diagram for M52, adjusting the axes as necessary, and adding all relevant labels, etc.:
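A sketch of what that cell might contain, using the column names assumed above (remember that brighter stars have smaller magnitudes, so the y-axis should be inverted):
```python
plt.figure(figsize=(10,10))
plt.errorbar(M52_fluxtable['V-R'], M52_fluxtable['Vmag'],
             xerr=M52_fluxtable['V-R_err'], yerr=M52_fluxtable['Vmag_err'],
             marker='o', linestyle='None')
plt.xlabel('V - R')
plt.ylabel('V')
plt.title('M52 color-magnitude diagram')
plt.gca().invert_yaxis()
```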
End of explanation
"""
|
hunterherrin/phys202-2015-work
|
assignments/midterm/AlgorithmsEx03.ipynb
|
mit
|
%matplotlib inline
from matplotlib import pyplot as plt
import numpy as np
from IPython.html.widgets import interact
"""
Explanation: Algorithms Exercise 3
Imports
End of explanation
"""
o='ahjshd'
list(o)
x,y=letter_prob(list(o))
dict(zip(x,y))
def letter_prob(data):
letter_dictionary={}
for i in data:
if i not in letter_dictionary:
letter_dictionary[i]=1
else:
letter_dictionary[i]=letter_dictionary[i]+1
x=list(letter_dictionary)
y=list(letter_dictionary.values())
for i in range(len(x)):
y[i]=y[i]/(len(data))
return x,y
def char_probs(s):
"""Find the probabilities of the unique characters in the string s.
Parameters
----------
s : str
A string of characters.
Returns
-------
probs : dict
A dictionary whose keys are the unique characters in s and whose values
are the probabilities of those characters.
"""
S=list(s)
letter2, prob2 =letter_prob(S)
ans=dict(zip(letter2,prob2))
return ans
test1 = char_probs('aaaa')
assert np.allclose(test1['a'], 1.0)
test2 = char_probs('aabb')
assert np.allclose(test2['a'], 0.5)
assert np.allclose(test2['b'], 0.5)
test3 = char_probs('abcd')
assert np.allclose(test3['a'], 0.25)
assert np.allclose(test3['b'], 0.25)
assert np.allclose(test3['c'], 0.25)
assert np.allclose(test3['d'], 0.25)
"""
Explanation: Character counting and entropy
Write a function char_probs that takes a string and computes the probabilities of each character in the string:
First do a character count and store the result in a dictionary.
Then divide each character counts by the total number of character to compute the normalized probabilties.
Return the dictionary of characters (keys) and probabilities (values).
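A more compact alternative (a sketch using collections.Counter from the standard library; char_probs_counter is just an illustrative name) would be:
```python
from collections import Counter

def char_probs_counter(s):
    counts = Counter(s)
    total = len(s)
    return {ch: n / total for ch, n in counts.items()}
```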
End of explanation
"""
entropy({'a': 0.5, 'b': 0.5})
def entropy(d):
    """Compute the entropy of a dict d whose values are probabilities."""
    # Convert the probabilities (the dict values) to a NumPy array
    p = np.array(list(d.values()))
    # H = -sum(p * log2(p)), computed with NumPy (no explicit loops)
    H = -np.sum(p * np.log2(p))
    return H
assert np.allclose(entropy({'a': 0.5, 'b': 0.5}), 1.0)
assert np.allclose(entropy({'a': 1.0}), 0.0)
interact?
"""
Explanation: The entropy is a quantiative measure of the disorder of a probability distribution. It is used extensively in Physics, Statistics, Machine Learning, Computer Science and Information Science. Given a set of probabilities $P_i$, the entropy is defined as:
$$H = - \Sigma_i P_i \log_2(P_i)$$
In this expression $\log_2$ is the base 2 log (np.log2), which is commonly used in information science. In Physics the natural log is often used in the definition of entropy.
Write a funtion entropy that computes the entropy of a probability distribution. The probability distribution will be passed as a Python dict: the values in the dict will be the probabilities.
To compute the entropy, you should:
First convert the values (probabilities) of the dict to a Numpy array of probabilities.
Then use other Numpy functions (np.log2, etc.) to compute the entropy.
Don't use any for or while loops in your code.
End of explanation
"""
d = interact(lambda s: entropy(char_probs(s)), s='')
assert True # use this for grading the pi digits histogram
"""
Explanation: Use IPython's interact function to create a user interface that allows you to type a string into a text box and see the entropy of the character probabilities of the string.
End of explanation
"""
|
TakayukiSakai/tensorflow
|
tensorflow/examples/udacity/3_regularization.ipynb
|
apache-2.0
|
# These are all the modules we'll be using later. Make sure you can import them
# before proceeding further.
from __future__ import print_function
import numpy as np
import tensorflow as tf
from six.moves import cPickle as pickle
"""
Explanation: Deep Learning
Assignment 3
Previously in 2_fullyconnected.ipynb, you trained a logistic regression and a neural network model.
The goal of this assignment is to explore regularization techniques.
End of explanation
"""
pickle_file = 'notMNIST.pickle'
with open(pickle_file, 'rb') as f:
save = pickle.load(f)
train_dataset = save['train_dataset']
train_labels = save['train_labels']
valid_dataset = save['valid_dataset']
valid_labels = save['valid_labels']
test_dataset = save['test_dataset']
test_labels = save['test_labels']
del save # hint to help gc free up memory
print('Training set', train_dataset.shape, train_labels.shape)
print('Validation set', valid_dataset.shape, valid_labels.shape)
print('Test set', test_dataset.shape, test_labels.shape)
"""
Explanation: First reload the data we generated in 1_notmnist.ipynb.
End of explanation
"""
image_size = 28
num_labels = 10
def reformat(dataset, labels):
dataset = dataset.reshape((-1, image_size * image_size)).astype(np.float32)
# Map 2 to [0.0, 1.0, 0.0 ...], 3 to [0.0, 0.0, 1.0 ...]
labels = (np.arange(num_labels) == labels[:,None]).astype(np.float32)
return dataset, labels
train_dataset, train_labels = reformat(train_dataset, train_labels)
valid_dataset, valid_labels = reformat(valid_dataset, valid_labels)
test_dataset, test_labels = reformat(test_dataset, test_labels)
print('Training set', train_dataset.shape, train_labels.shape)
print('Validation set', valid_dataset.shape, valid_labels.shape)
print('Test set', test_dataset.shape, test_labels.shape)
def acc(predictions, labels):
return (100.0 * np.sum(np.argmax(predictions, 1) == np.argmax(labels, 1))
/ predictions.shape[0])
"""
Explanation: Reformat into a shape that's more adapted to the models we're going to train:
- data as a flat matrix,
- labels as float 1-hot encodings.
End of explanation
"""
batch_size = 128
graph = tf.Graph()
with graph.as_default():
# Input data. For the training data, we use a placeholder that will be fed
# at run time with a training minibatch.
tf_train_dataset = tf.placeholder(tf.float32,
shape=(batch_size, image_size * image_size))
tf_train_labels = tf.placeholder(tf.float32, shape=(batch_size, num_labels))
tf_valid_dataset = tf.constant(valid_dataset)
tf_test_dataset = tf.constant(test_dataset)
# Variables.
weights = tf.Variable(
tf.truncated_normal([image_size * image_size, num_labels]))
biases = tf.Variable(tf.zeros([num_labels]))
# Training computation.
logits = tf.matmul(tf_train_dataset, weights) + biases
loss = tf.reduce_mean(
tf.nn.softmax_cross_entropy_with_logits(logits, tf_train_labels))
# Optimizer.
optimizer = tf.train.GradientDescentOptimizer(0.5).minimize(loss + 5e-4 * tf.nn.l2_loss(weights))
# Predictions for the training, validation, and test data.
train_prediction = tf.nn.softmax(logits)
valid_prediction = tf.nn.softmax(
tf.matmul(tf_valid_dataset, weights) + biases)
test_prediction = tf.nn.softmax(tf.matmul(tf_test_dataset, weights) + biases)
num_steps = 10001
with tf.Session(graph=graph) as session:
tf.initialize_all_variables().run()
print("Initialized")
for step in range(num_steps):
# Pick an offset within the training data, which has been randomized.
# Note: we could use better randomization across epochs.
offset = (step * batch_size) % (train_labels.shape[0] - batch_size)
# Generate a minibatch.
batch_data = train_dataset[offset:(offset + batch_size), :]
batch_labels = train_labels[offset:(offset + batch_size), :]
# Prepare a dictionary telling the session where to feed the minibatch.
# The key of the dictionary is the placeholder node of the graph to be fed,
# and the value is the numpy array to feed to it.
feed_dict = {tf_train_dataset : batch_data, tf_train_labels : batch_labels}
_, l, predictions = session.run(
[optimizer, loss, train_prediction], feed_dict=feed_dict)
if (step % 1000 == 0):
print("Minibatch loss at step %d: %f" % (step, l))
print("Minibatch accuracy: %.1f%%" % acc(predictions, batch_labels))
print("Validation accuracy: %.1f%%" % acc(
valid_prediction.eval(), valid_labels))
print("Test accuracy: %.1f%%" % acc(test_prediction.eval(), test_labels))
"""
Explanation: Problem 1
Introduce and tune L2 regularization for both logistic and neural network models. Remember that L2 amounts to adding a penalty on the norm of the weights to the loss. In TensorFlow, you can compute the L2 loss for a tensor t using nn.l2_loss(t). The right amount of regularization should improve your validation / test accuracy.
Logistic Regression
End of explanation
"""
batch_size = 128
num_hidden = 2048
graph = tf.Graph()
with graph.as_default():
# Input data. For the training data, we use a placeholder that will be fed
# at run time with a training minibatch.
x = tf.placeholder(tf.float32, shape=(None, image_size * image_size))
y_ = tf.placeholder(tf.float32, shape=(None, num_labels))
keep_prob = tf.placeholder('float')
# Variables. (Input -> Hidden)
w_1 = tf.Variable(
tf.truncated_normal([image_size * image_size, num_hidden]))
b_1 = tf.Variable(tf.zeros([num_hidden]))
# Training computation. (Input -> Hidden)
  h_1 = tf.nn.dropout(tf.nn.relu(tf.matmul(x, w_1) + b_1), keep_prob)  # use the keep_prob placeholder so dropout is disabled at evaluation
# Variables. (Hidden -> Output)
w_2 = tf.Variable(
tf.truncated_normal([num_hidden, num_labels]))
b_2 = tf.Variable(tf.zeros([num_labels]))
# Training computation. (Hidden -> Output)
h_2 = tf.matmul(h_1, w_2) + b_2
# Loss
loss = tf.reduce_mean(
tf.nn.softmax_cross_entropy_with_logits(h_2, y_))
# Optimizer.
optimizer = tf.train.AdagradOptimizer(1e-3).minimize(loss + 5e-4 * (tf.nn.l2_loss(w_1) + tf.nn.l2_loss(w_2)))
# Predictions for the training, validation, and test data.
prediction = tf.nn.softmax(h_2)
correct_prediction = tf.equal(tf.argmax(prediction,1), tf.argmax(y_,1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, 'float'))
num_steps = 30001
with tf.Session(graph=graph) as session:
tf.initialize_all_variables().run()
print("Initialized")
for step in range(num_steps):
# Pick an offset within the training data, which has been randomized.
# Note: we could use better randomization across epochs.
offset = (step * batch_size) % (train_labels.shape[0] - batch_size)
# Generate a minibatch.
batch_data = train_dataset[offset:(offset + batch_size), :]
batch_labels = train_labels[offset:(offset + batch_size), :]
# Prepare a dictionary telling the session where to feed the minibatch.
# The key of the dictionary is the placeholder node of the graph to be fed,
# and the value is the numpy array to feed to it.
feed_dict = {x: batch_data, y_: batch_labels, keep_prob: 0.5}
_, l = session.run(
[optimizer, loss], feed_dict=feed_dict)
if (step % 2000 == 0):
print("Minibatch loss at step %d: %f" % (step, l))
print("Validation accuracy: %.1f%%" % (accuracy.eval(feed_dict={x: valid_dataset, y_: valid_labels, keep_prob: 1.0}) * 100))
print("Test accuracy: %.1f%%" % (accuracy.eval(feed_dict={x: test_dataset, y_: test_labels, keep_prob: 1.0}) * 100))
"""
Explanation: Neural Network (1 hidden layer)
End of explanation
"""
num_steps = 30001
train_data_size = 1000
part_train_dataset = train_dataset[:train_data_size, :]
part_train_labels = train_labels[:train_data_size]
with tf.Session(graph=graph) as session:
tf.initialize_all_variables().run()
print("Initialized")
for step in range(num_steps):
# Pick an offset within the training data, which has been randomized.
# Note: we could use better randomization across epochs.
offset = (step * batch_size) % (part_train_labels.shape[0] - batch_size)
# Generate a minibatch.
batch_data = part_train_dataset[offset:(offset + batch_size), :]
batch_labels = part_train_labels[offset:(offset + batch_size), :]
# Prepare a dictionary telling the session where to feed the minibatch.
# The key of the dictionary is the placeholder node of the graph to be fed,
# and the value is the numpy array to feed to it.
feed_dict = {x: batch_data, y_: batch_labels, keep_prob: 0.5}
_, l = session.run(
[optimizer, loss], feed_dict=feed_dict)
if (step % 2000 == 0):
print("Minibatch loss at step %d: %f" % (step, l))
print("Validation accuracy: %.1f%%" % (accuracy.eval(feed_dict={x: valid_dataset, y_: valid_labels, keep_prob: 1.0}) * 100))
print("Test accuracy: %.1f%%" % (accuracy.eval(feed_dict={x: test_dataset, y_: test_labels, keep_prob: 1.0}) * 100))
"""
Explanation: Problem 2
Let's demonstrate an extreme case of overfitting. Restrict your training data to just a few batches. What happens?
Obviously, the model is overfitted.
End of explanation
"""
batch_size = 128
num_hidden_1 = 2048
num_hidden_2 = 1024
graph = tf.Graph()
with graph.as_default():
# Input data. For the training data, we use a placeholder that will be fed
# at run time with a training minibatch.
x = tf.placeholder(tf.float32, shape=(None, image_size * image_size))
y_ = tf.placeholder(tf.float32, shape=(None, num_labels))
keep_prob = tf.placeholder('float')
# Input -> Hidden1
w_1 = tf.Variable(
tf.truncated_normal([image_size * image_size, num_hidden_1]))
b_1 = tf.Variable(tf.zeros([num_hidden_1]))
h_1 = tf.nn.dropout(tf.nn.relu(tf.matmul(x, w_1) + b_1), keep_prob)
# Hidden1 -> Hidden2
w_2 = tf.Variable(
tf.truncated_normal([num_hidden_1, num_hidden_2]))
b_2 = tf.Variable(tf.zeros([num_hidden_2]))
h_2 = tf.nn.dropout(tf.nn.relu(tf.matmul(h_1, w_2) + b_2), keep_prob)
# Hidden2 -> Output
w_3 = tf.Variable(
tf.truncated_normal([num_hidden_2, num_labels]))
b_3 = tf.Variable(tf.zeros([num_labels]))
h_3 = tf.matmul(h_2, w_3) + b_3
# Loss
loss = tf.reduce_mean(
tf.nn.softmax_cross_entropy_with_logits(h_3, y_))
# Optimizer.
optimizer = tf.train.AdagradOptimizer(1e-3).minimize(loss + 1e-5 * (tf.nn.l2_loss(w_1) + tf.nn.l2_loss(w_2) + tf.nn.l2_loss(w_3)))
# Predictions for the training, validation, and test data.
prediction = tf.nn.softmax(h_3)
correct_prediction = tf.equal(tf.argmax(prediction,1), tf.argmax(y_,1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, 'float'))
num_steps = 30001
with tf.Session(graph=graph) as session:
tf.initialize_all_variables().run()
print("Initialized")
for step in range(num_steps):
# Pick an offset within the training data, which has been randomized.
# Note: we could use better randomization across epochs.
offset = (step * batch_size) % (train_labels.shape[0] - batch_size)
# Generate a minibatch.
batch_data = train_dataset[offset:(offset + batch_size), :]
batch_labels = train_labels[offset:(offset + batch_size), :]
# Prepare a dictionary telling the session where to feed the minibatch.
# The key of the dictionary is the placeholder node of the graph to be fed,
# and the value is the numpy array to feed to it.
feed_dict = {x: batch_data, y_: batch_labels, keep_prob: 0.5}
_, l = session.run(
[optimizer, loss], feed_dict=feed_dict)
if (step % 2000 == 0):
print("Minibatch loss at step %d: %f" % (step, l))
print("Validation accuracy: %.1f%%" % (accuracy.eval(feed_dict={x: valid_dataset, y_: valid_labels, keep_prob: 1.0}) * 100))
print("Test accuracy: %.1f%%" % (accuracy.eval(feed_dict={x: test_dataset, y_: test_labels, keep_prob: 1.0}) * 100))
"""
Explanation: Problem 3
Introduce Dropout on the hidden layer of the neural network. Remember: Dropout should only be introduced during training, not evaluation, otherwise your evaluation results would be stochastic as well. TensorFlow provides nn.dropout() for that, but you have to make sure it's only inserted during training.
What happens to our extreme overfitting case?
Problem 4
Try to get the best performance you can using a multi-layer model! The best reported test accuracy using a deep network is 97.1%.
One avenue you can explore is to add multiple layers.
Another one is to use learning rate decay:
global_step = tf.Variable(0) # count the number of steps taken.
learning_rate = tf.train.exponential_decay(0.5, global_step, ...)
optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(loss, global_step=global_step)
2 hidden layers (2048, 1024)
dropout on both hidden layers
L2 regularization
End of explanation
"""
|
DTOcean/dtocean-core
|
notebooks/DTOcean Installation Module Example.ipynb
|
gpl-3.0
|
%matplotlib inline
from IPython.display import display, HTML
import matplotlib.pyplot as plt
plt.rcParams['figure.figsize'] = (14.0, 8.0)
import numpy as np
from dtocean_core import start_logging
from dtocean_core.core import Core
from dtocean_core.menu import ModuleMenu, ProjectMenu, ThemeMenu
from dtocean_core.pipeline import Tree
def html_list(x):
message = "<ul>"
for name in x:
message += "<li>{}</li>".format(name)
message += "</ul>"
return message
def html_dict(x):
message = "<ul>"
    for name, status in x.items():
message += "<li>{}: <b>{}</b></li>".format(name, status)
message += "</ul>"
return message
# Bring up the logger
start_logging()
"""
Explanation: DTOcean Installation Module Example
Note, this example assumes the Installation Module has been installed
End of explanation
"""
new_core = Core()
project_menu = ProjectMenu()
module_menu = ModuleMenu()
theme_menu = ThemeMenu()
pipe_tree = Tree()
"""
Explanation: Create the core, menus and pipeline tree
The core object carries all the system information and is operated on by the other classes
End of explanation
"""
project_title = "DTOcean"
new_project = project_menu.new_project(new_core, project_title)
"""
Explanation: Create a new project
End of explanation
"""
options_branch = pipe_tree.get_branch(new_core, new_project, "System Type Selection")
variable_id = "device.system_type"
my_var = options_branch.get_input_variable(new_core, new_project, variable_id)
my_var.set_raw_interface(new_core, "Wave Floating")
my_var.read(new_core, new_project)
"""
Explanation: Set the device type
End of explanation
"""
project_menu.initiate_pipeline(new_core, new_project)
"""
Explanation: Initiate the pipeline
This step will be important when the database is incorporated into the system as it will affect the operation of the pipeline.
End of explanation
"""
names = module_menu.get_available(new_core, new_project)
message = html_list(names)
HTML(message)
"""
Explanation: Discover available modules
End of explanation
"""
module_name = 'Installation'
module_menu.activate(new_core, new_project, module_name)
"""
Explanation: Activate a module
Note that the order of activation is important and that we can't deactivate yet!
End of explanation
"""
names = theme_menu.get_available(new_core, new_project)
message = html_list(names)
HTML(message)
theme_menu.activate(new_core, new_project, "Economics")
"""
Explanation: Activate the Economics themes
End of explanation
"""
installation_branch = pipe_tree.get_branch(new_core, new_project, 'Installation')
input_status = installation_branch.get_input_status(new_core, new_project)
message = html_dict(input_status)
HTML(message)
"""
Explanation: Check the status of the module inputs
End of explanation
"""
project_menu.initiate_dataflow(new_core, new_project)
"""
Explanation: Initiate the dataflow
This indicates that the filtering and module / theme selections are complete
End of explanation
"""
%run test_data/inputs_wp5.py
installation_branch.read_test_data(new_core,
new_project,
"test_data/inputs_wp5.pkl")
"""
Explanation: Load test data
Prepare the test data for loading. The test_data directory of the source code should be copied to the directory in which the notebook is running. When the python file is run, a pickle file containing a dictionary of inputs is generated.
End of explanation
"""
theme_name = "Economics"
eco_branch = pipe_tree.get_branch(new_core, new_project, "Economics")
input_status = eco_branch.get_input_status(new_core, new_project)
message = html_dict(input_status)
HTML(message)
"""
Explanation: Prepare the Economics Theme
End of explanation
"""
discount_rate = 0.05
device_cost = 1000000
project_lifetime = 20
new_var = eco_branch.get_input_variable(new_core, new_project,
"project.discount_rate")
new_var.set_raw_interface(new_core, discount_rate)
new_var.read(new_core, new_project)
new_var = eco_branch.get_input_variable(new_core, new_project,
"device.system_cost")
new_var.set_raw_interface(new_core, device_cost)
new_var.read(new_core, new_project)
new_var = eco_branch.get_input_variable(new_core, new_project,
"project.lifetime")
new_var.set_raw_interface(new_core, project_lifetime)
new_var.read(new_core, new_project)
"""
Explanation: Set the discount rate and cost estimates and project lifetime
End of explanation
"""
can_execute = module_menu.is_executable(new_core, new_project, module_name)
display(can_execute)
input_status = installation_branch.get_input_status(new_core, new_project)
message = html_dict(input_status)
HTML(message)
"""
Explanation: Check if the module can be executed
End of explanation
"""
module_menu.execute_current(new_core, new_project)
"""
Explanation: Execute the current module
The "current" module refers to the next module to be executed in the chain (pipeline) of modules. This command will only execute that module and another will be used for executing all of the modules at once.
Note, any data supplied by the module will be automatically copied into the active data state.
End of explanation
"""
output_status = installation_branch.get_output_status(new_core, new_project)
message = html_dict(output_status)
HTML(message)
economics_data = new_core.get_data_value(new_project, "project.device_phase_installation_costs")
economics_data
economics_data_breakdown = new_core.get_data_value(new_project, "project.device_phase_installation_cost_breakdown")
economics_data_breakdown
port = new_core.get_data_value(new_project, "project.port")
port
comp_cost = new_core.get_data_value(new_project, "project.electrical_phase_installation_costs")
comp_cost
comp_time = new_core.get_data_value(new_project, "project.electrical_phase_installation_times")
comp_time
economics_data_breakdown = new_core.get_data_value(new_project, "project.electrical_phase_installation_time_breakdown")
economics_data_breakdown
comp_cost = new_core.get_data_value(new_project, "project.mooring_phase_installation_costs")
comp_cost
comp_time = new_core.get_data_value(new_project, "project.mooring_phase_installation_times")
comp_time
economics_data_breakdown = new_core.get_data_value(new_project, "project.mooring_phase_installation_time_breakdown")
economics_data_breakdown
device_cost_breakdown = new_core.get_data_value(new_project, "project.device_phase_cost_class_breakdown")
electrical_cost_breakdown = new_core.get_data_value(new_project, "project.electrical_phase_installation_cost_breakdown")
mooring_cost_breakdown = new_core.get_data_value(new_project, "project.mooring_phase_installation_cost_breakdown")
economics_data_breakdown = new_core.get_data_value(new_project, "project.installation_phase_cost_breakdown")
economics_data_breakdown
economics_data_breakdown = new_core.get_data_value(new_project, "project.installation_cost_class_breakdown")
economics_data_breakdown
device_time_breakdown = new_core.get_data_value(new_project, "project.device_phase_time_class_breakdown")
device_time_breakdown
economics_data_breakdown = new_core.get_data_value(new_project, "project.installation_phase_time_breakdown")
economics_data_breakdown
economics_data_breakdown = new_core.get_data_value(new_project, "project.installation_time_class_breakdown")
economics_data_breakdown
economics_data_breakdown = new_core.get_data_value(new_project, "project.installation_economics_data")
economics_data_breakdown
output_status = eco_branch.get_output_status(new_core, new_project)
message = html_dict(output_status)
HTML(message)
economics_data_breakdown = new_core.get_data_value(new_project, "project.capex_breakdown")
economics_data_breakdown
economics_data_breakdown = new_core.get_data_value(new_project, "project.capex_total")
economics_data_breakdown
"""
Explanation: Examine the results
End of explanation
"""
|
evangelistalab/forte
|
tutorials/Tutorial_01.03_forte_sparse.ipynb
|
lgpl-3.0
|
import math
import forte
from IPython.display import display, Math, Latex
def latex(obj):
"""Call the latex() function on an object and display the returned value in LaTeX"""
display(Math(obj.latex()))
"""
Explanation: Forte Tutorial 1.03: Forte's sparse operator class
Forte exposes several functions to create and manipulate general second quantized operators and wave functions.
In this tutorial we will look at simple examples that illustrate how these classes work.
Preliminaries
Here we define a useful function to display equations in LaTeX format
End of explanation
"""
print(forte.det('22+-'))
print(forte.det('22ba'))
print(forte.det('ABBA'))
"""
Explanation: In a previous tutorial we looked at how to define determinants in forte. Here we are going to use the utility function forte.det(), which creates a determinant from a string representation of the determinant. The occupation of each orbital is specified by the symbols 2 (doubly occupied), + (single alpha electron), - (single beta electron), 0 (empty).
Here are some examples.
End of explanation
"""
c = 1./ math.sqrt(2.0)
psi = forte.StateVector({ forte.det('20'): c, forte.det('00') : c})
print(psi)
"""
Explanation: Depending on the size if the Determinant class, these commands will return a 64 bit or longer representation of the determinants.
The StateVector class
Sparse collections of determinants can be manipulated using the StateVector class. The simplest way to create a StateVector object is by passing a dictionary of determinants -> double. For example, here we create a superposition of a determinant with two electrons and one that has no electrons, both with equal coefficients normalized to one
$$
|\Psi\rangle = \frac{1}{\sqrt{2}}\left( |20\rangle + |00\rangle \right)
$$
End of explanation
"""
print(psi.str(2))
"""
Explanation: An alternative way to print this wave function is by calling the str method on the StateVector object. The argument 2 here indicates that we want to show the occupation numbers of only the first two orbitals.
End of explanation
"""
op = forte.SparseOperator(antihermitian=False)
op.add_term_from_str('[1a+ 0a-]',1.0)
latex(op)
op = forte.SparseOperator(antihermitian=True)
op.add_term_from_str('[0a-]',0.5)
latex(op)
"""
Explanation: The SparseOperator class
The SparseOperator class can handle operators of the form
$$
\hat{O} = \sum_{pqrs\cdots} t_{pq\cdots}^{rs\cdots} \hat{a}^\dagger_p \hat{a}^\dagger_q \cdots \hat{a}_s \hat{a}_r
$$
where each individual term in the summation can be an arbitrary order operator.
However, the amplitudes are assumed to be real numbers.
At creation, the user can specify if this operator should be anti-Hermitian, that is if each term should be paired with minus its Hermitian conjugate
$$
\hat{O} = \sum_{pqrs\cdots} t_{pq\cdots}^{rs\cdots} \left( \hat{a}^\dagger_p \hat{a}^\dagger_q \cdots \hat{a}_s \hat{a}_r
- \hat{a}^\dagger_r \hat{a}^\dagger_s \cdots \hat{a}_q \hat{a}_p \right)
$$
Creating SparseOperator objects
After creation, a SparseOperator object is empty
```python
op = forte.SparseOperator()
latex(op)
# displays nothing
```
The simplest way to populate a SparseOperator is by adding one term at a time using the add_term_from_str function.
A generic operator
$$
\hat{q}_1 \hat{q}_2 \cdots, \quad \text{ with } \hat{q}_i \in \{ \hat{a}_p, \hat{a}^\dagger_p \}
$$
can be specified using the following syntax
add_term_from_str('[<orbital_1><spin_1><type_1> <orbital_2><spin_2><type_2> ...]', amplitude)
where
orbital_i: int
spin_i: 'a' (alpha) or 'b' (beta)
type_i: '+' (creation) or '-' (annihilation)
For example, the operator $\hat{a}^\dagger_{1_\alpha} \hat{a}_{0_\alpha}$ is encoded as [1a+ 0a-]. The following code generates the operators $\hat{a}^\dagger_{1_\alpha} \hat{a}_{0_\alpha}$ and $\frac{1}{2} (\hat{a}_{0_\alpha} - \hat{a}^\dagger_{0_\alpha})$
End of explanation
"""
# beta annihilation operators appear to the left of alpha annihilation
# within each group, orbital indices decrease going from left to right
op = forte.SparseOperator(antihermitian=False)
op.add_term_from_str('[1b- 0b- 1a- 0a-]',1.0)
latex(op)
# beta creation operators appear to the right of alpha annihilation
# within each group, orbitals increase going from left to right
op = forte.SparseOperator(antihermitian=False)
op.add_term_from_str('[0a+ 1a+ 0b+ 1b+]',1.0)
latex(op)
# creation operators appear to the left of annihilation operators
op = forte.SparseOperator(antihermitian=False)
op.add_term_from_str('[2a+ 3a+ 2b+ 3b+ 1b- 0b- 1a- 0a-]',1.0)
latex(op)
"""
Explanation: Ordering of operators in the SparseOperator object
<div class="alert alert-block alert-warning">
Note that `add_term_from_str` <b>assumes that the operators will match a specific order!</b>
This canonical order is defined as
$$
(\alpha \text{ creation}) (\beta \text{ creation}) (\beta \text{ annihilation}) (\alpha \text{ annihilation})
$$
with the creation (annihilation) operators ordered within each group in increasing (decreasing) order.
The following operator satisfies the canonical order:
$$
+\;\hat{a}_{2 \alpha}^\dagger\hat{a}_{3 \alpha}^\dagger\hat{a}_{2 \beta}^\dagger\hat{a}_{3 \beta}^\dagger\hat{a}_{1 \beta}\hat{a}_{0 \beta}\hat{a}_{1 \alpha}\hat{a}_{0 \alpha}
$$
</div>
If you want to work with operators that do not follow this ordering, for example, $\hat{a}_{1 \alpha}\hat{a}^\dagger_{0 \alpha}$, you will need to work out an equivalent representation, for example, $\hat{a}_{0 \alpha}\hat{a}^\dagger_{0 \alpha} = 1 - \hat{a}^\dagger_{0 \alpha}\hat{a}_{0 \alpha}$.
These examples illustrate valid operators in canonical order
End of explanation
"""
op = forte.SparseOperator(antihermitian=False)
try:
op.add_term_from_str('[0b- 1b- 1a- 0a-]',1.0)
except Exception as e:
print(f'RuntimeError: {e}')
"""
Explanation: When the operator passed is out of order, an exception is thrown. For example, the following code
```python
op = forte.SparseOperator(antihermitian=False)
op.add_term_from_str('[0b- 1b- 1a- 0a-]',1.0)
latex(op)
```
leads to the following RuntimeError
End of explanation
"""
# the operators [0a- 0b- 1a- 1b-] are reordered and the final sign is -1.
op = forte.SparseOperator(antihermitian=False)
op.add_term_from_str('[0a- 0b- 1a- 1b-]',1.0,allow_reordering=True)
latex(op)
# The operator [0a- 0b- 1a- 1b-] (see above) is equivalent to -[1a- 1b- 0b- 0a-].
op = forte.SparseOperator(antihermitian=False)
op.add_term_from_str('[1a- 1b- 0b- 0a-]',-1.0,allow_reordering=True)
latex(op)
# Another example that illustrates the reordering of operators
op = forte.SparseOperator(antihermitian=False)
op.add_term_from_str('[0a- 0b- 1a- 1b- 2a+ 2b+ 3a+ 3b+]',1.0,allow_reordering=True)
latex(op)
"""
Explanation: This error can be overridden. However, this is recommended only if you understand what happens when you do so. The function add_term_from_str has an extra option that allows it to reorder the operators to the canonical form. The final operator is multiplied by a sign factor that corresponds to the parity of the permutation that connects the initial and final ordering. This code illustrates how this reordering happens
End of explanation
"""
op = forte.SparseOperator(antihermitian=False)
try:
op = forte.SparseOperator(antihermitian=False)
op.add_term_from_str('[0b- 0b-]',1.0)
except Exception as e:
print(f'RuntimeError: {e}')
"""
Explanation: An exception is also thrown if an operator is repeated. For example, the following code
python
op = forte.SparseOperator(antihermitian=False)
op.add_term_from_str('[0b- 0b-]',1.0)
gives the following RuntimeError
End of explanation
"""
op = forte.SparseOperator(antihermitian=False)
op.add_term_from_str('[1a+ 0a-]',0.3)
op.add_term_from_str('[1b+ 0b-]',0.3)
op.add_term_from_str('[1a+ 1b+ 0b- 0a-]',0.1)
latex(op)
"""
Explanation: Specifying a full operator with the SparseOperator class
To form a full operator we can just keep adding terms to a SparseOperator object. For example
End of explanation
"""
op = forte.SparseOperator()
op.add_term([(False,True,0),(True,True,1)],1.0)
op.str()
latex(op)
"""
Explanation: Another way to create an operator is via the function add_term by providing a list of tuples that specifies the second quantized operators and the corresponding amplitude. This is useful when building operators with a large number of terms. Note that this function uses a different convention than add_term_from_str for expressing the ordering of the operators. Here we specify the operator (in reversed order)
$$
\cdots \hat{q}_2 \hat{q}_1, \quad \text{ with } \hat{q}_i \in \{ \hat{a}_p, \hat{a}^\dagger_p \}
$$
with the following syntax
add_term([(type_1, spin_1, orb_1), (type_2, spin_2, orb_2), ...], amplitude)
where
type_i: bool (true = creation, false = annihilation)
spin_i: bool (true = alpha, false = beta)
orb_i: int
For example, the operator $\hat{a}^\dagger_{1_\alpha}\hat{a}_{0_\alpha}$ is generated in this way
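As an additional sketch (not part of the original), a two-body term like the one built earlier with add_term_from_str('[1a+ 1b+ 0b- 0a-]', 0.1) could be assembled with add_term by listing the tuples right-to-left:
```python
# Sketch: the same operator as add_term_from_str('[1a+ 1b+ 0b- 0a-]', 0.1),
# with the tuples given in reversed (right-to-left) order
op = forte.SparseOperator()
op.add_term([(False, True, 0),    # a_{0 alpha}   (rightmost operator)
             (False, False, 0),   # a_{0 beta}
             (True, False, 1),    # a+_{1 beta}
             (True, True, 1)],    # a+_{1 alpha}  (leftmost operator)
            0.1)
latex(op)
```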
End of explanation
"""
op = forte.SparseOperator(antihermitian=False)
op.add_term_from_str('[]',0.1)
op.add_term_from_str('[1a+ 0a-]',0.3)
op.add_term_from_str('[1b+ 0b-]',0.3)
psi = forte.StateVector({ forte.det('2'): 1.0})
new_psi = forte.apply_operator(op,psi)
print(new_psi.str(3))
"""
Explanation: Applying a SparseOperator to a StateVector
To apply an operator to a state vector you can use the forte.apply_operator(op,psi) function. This function takes an operator (op) and a state (psi), and returns the state |new_psi> = op |psi>. For example, the following creates a CIS wave function using the operator
$$
\hat{T} = 0.1 + 0.3 \left(\hat{a}_{1 \alpha}^\dagger\hat{a}_{0 \alpha} + \hat{a}_{1 \beta}^\dagger\hat{a}_{0 \beta} \right)
$$
where the first term is just a scalar
End of explanation
"""
psi = forte.StateVector({ forte.det('2'): 1.0})
exp_op = forte.SparseExp()
new_psi = exp_op.compute(op,psi)
print(new_psi.str(3))
"""
Explanation: Exponential operator
To apply the exponential operator $\exp(\hat{T})$ we can use the SparseExp class. This class provides the method compute, which takes as arguments the operator and the state
End of explanation
"""
new_psi2 = exp_op.compute(op,new_psi,scaling_factor=-1.0)
print(new_psi2.str(3))
"""
Explanation: There are several variables that control the behavior of compute. For example, to compute the inverse, we can just apply $\exp(-\hat{T})$
End of explanation
"""
psi = forte.StateVector({ forte.det('2'): 1.0})
new_psi = exp_op.compute(op,psi,algorithm='onthefly')
print(new_psi.str(3))
"""
Explanation: By default compute uses a caching algorithm that reuses information from previous applications of the exponential. A memory-light algorithm can also be invoked
End of explanation
"""
psi = forte.StateVector({ forte.det('2'): 1.0})
new_psi = exp_op.compute(op,psi,algorithm='onthefly',maxk=1)
print(new_psi.str(3))
"""
Explanation: We can also control other parameters, like the order of the Taylor expansion used to approximate $\exp(\hat{T})$ (maxk) and a threshold used to screen terms (screen_thresh). For example, to apply $1 + \hat{T}$ we can call
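As a hedged sketch (screen_thresh is only named above; the value used here is an illustrative assumption, not taken from the original):
```python
# Sketch: pass a tighter screening threshold to drop small contributions
new_psi = exp_op.compute(op, psi, screen_thresh=1e-12)
print(new_psi.str(3))
```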
End of explanation
"""
op = forte.SparseOperator(antihermitian=True)
op.add_term_from_str('[1a+ 0a-]',0.3)
op.add_term_from_str('[1b+ 0b-]',0.3)
psi = forte.StateVector({ forte.det('2'): 1.0})
factexp_op = forte.SparseFactExp()
new_psi = factexp_op.compute(op,psi)
print(new_psi.str(3))
"""
Explanation: Note that the most efficient algorithm to compute the exponential of an operator via SparseExp
assumes that the function is always called with the same operator.
For example, if op1 and op2 are two different SparseOperator objects, the following code will give
an incorrect result
python
exp_op = forte.SparseExp()
psi1 = exp_op.compute(op1,psi0)
psi2 = exp_op.compute(op2,psi1)
However, if we ask the SparseExp class to use an on-the-fly algorithm via the following code
python
exp_op = forte.SparseExp()
psi1 = exp_op.compute(op1,psi0,algorithm='onthefly')
psi2 = exp_op.compute(op2,psi1,algorithm='onthefly')
then the result will be correct.
Factorized exponential of an anti-Hermitian operator
Another useful operator is the factorized exponential of an operator $\hat{T}$. If $\hat{T}$ is a sum of operators
$$
\hat{T} = \sum_\mu t_\mu \hat{\kappa}_\mu
$$
the factorized exponential is defined as
$$
\exp_\mathrm{f}(\hat{T}) = \prod_\mu \exp(t_\mu \hat{\kappa}_\mu)
$$
This operation is implemented in the class SparseFactExp for the case of anti-Hermitian operators, that is, when $(\hat{T})^\dagger = - \hat{T}$.
This class provides the method compute which takes as arguments the operator and the state. Here is a simple example:
End of explanation
"""
starting_psi = factexp_op.compute(op,new_psi,inverse=True)
print(starting_psi.str(3))
"""
Explanation: To compute the inverse of the factorized exponential, just pass the option inverse=True to compute():
End of explanation
"""
|
utds/workshops
|
workshop_1/Speed_Dating_EDA.ipynb
|
mit
|
#First let's import the necessary modules
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
import os
from IPython.display import display, HTML
pd.set_option('display.max_columns', 500)
#Specifying the Data Path
cwd = os.getcwd()
file_path = os.path.join(cwd, 'cleaned_speed_dating.csv')
"""
Explanation: Speed Dating Data Exploratory Data Analysis
End of explanation
"""
#Reading the CSV, as easy as this! df stands for dataframe
df=pd.read_csv(file_path)
df.head()
df.info()
display(df.describe())
"""
Explanation: Here, we explore some basic dataset details.
End of explanation
"""
corr = df.corr()
display(corr)
corr_tmp = corr.copy()
shape = corr_tmp.values.shape
#Mask the upper triangle, including the diagonal (self-correlations), with -99 so those entries sort last
bad_indices = np.triu_indices(shape[0])
corr_tmp.values[bad_indices] = -99
#Sort all correlations in descending order and recover the (row, column) index pairs
arg_corr = np.dstack(np.unravel_index(np.argsort(-corr_tmp.values.ravel()), shape))[0]
#Drop the masked upper-triangular entries, which sort to the end because of the -99 sentinel
arg_corr = arg_corr[:-len(bad_indices[0])]
print('Highest Correlation Coefficient\n')
for pair in corr_tmp.columns[arg_corr]:
print(pair, ' Correlation = ', corr[pair[0]][pair[1]])
print('Lowest Correlation Coefficient\n')
"""
Explanation: Correlation
We calculate the correlation matrix of the dataframe.
End of explanation
"""
from mpl_toolkits.mplot3d import Axes3D
def plotTwoFeatures(f1, f2):
fig = plt.figure()
plt.plot(df[f1], df[f2], 'o')
plt.xlabel(f1)
plt.ylabel(f2)
#Hard to see for the first two plots, since these features take only integer values
plotTwoFeatures('art', 'museums')
plotTwoFeatures('intel_partner', 'sinc_partner')
plotTwoFeatures('amb_want', 'attr_want')
# Examine the proportion of acceptance by gender
female_df = df.loc[df['gender'] == 0]
male_df = df.loc[df['gender'] == 1]
yes_female_df = female_df.query('dec == 1')
no_female_df = female_df.query('dec == 0')
print('Proportion of acceptance by females is {}'.format(float(yes_female_df.shape[0])/float(female_df.shape[0])))
yes_male_df =male_df.query('dec == 1')
no_male_df = male_df.query('dec == 0')
print('Proportion of acceptance by males is {}'.format(float(yes_male_df.shape[0])/float(male_df.shape[0])))
def plot_feature(feature):
fig = plt.figure(figsize=(8, 8))
fig.subplots_adjust(hspace=0.5)
ax = plt.subplot(211)
ax.set_title('Female')
ax.set_xlabel(feature)
x = no_female_df[feature].values
y = yes_female_df[feature].values
plt.hist([x, y], label=['No','Yes'], bins=11)
ax.legend()
ax = plt.subplot(212)
ax.set_title('Male')
ax.set_xlabel(feature)
x = no_male_df[feature].values
y = yes_male_df[feature].values
plt.hist([x, y], label=['No','Yes'], bins=11)
ax.legend()
plt.show()
#What do the distributions of certain features look like for 'yes' versus 'no' decisions?
plot_feature('attr_partner')
plot_feature('intel_partner')
plot_feature('shar_partner')
"""
Explanation: Look at the correlations above. Are there any results that you weren't expecting?
Data Visualization
End of explanation
"""
|
rgerkin/sciunit
|
docs/chapter2.ipynb
|
mit
|
import sciunit
"""
Explanation: SciUnit is a framework for validating scientific models by creating experimental-data-driven unit tests.
Chapter 2. Writing a model and test in SciUnit from scratch
(or back to Chapter 1)
End of explanation
"""
class ProducesNumber(sciunit.Capability):
"""An example capability for producing some generic number."""
def produce_number(self):
"""The implementation of this method should return a number."""
raise NotImplementedError("Must implement produce_number.")
"""
Explanation: SciUnit works by making models declare and implement capabilities that tests use to interact with those models.
Each capability is a subclass of sciunit.Capability, and contains one or more unimplemented methods. Here we define a simple capability through which a model can return a single number.
End of explanation
"""
from sciunit.capabilities import ProducesNumber # One of many potential model capabilities.
class ConstModel(sciunit.Model,
ProducesNumber):
"""A model that always produces a constant number as output."""
def __init__(self, constant, name=None):
self.constant = constant
super(ConstModel, self).__init__(name=name, constant=constant)
def produce_number(self):
return self.constant
"""
Explanation: SciUnit models subclass sciunit.Model as well as each sciunit.Capability they aim to implement.
Here we create a trivial model class that is instantiated with a single constant.
End of explanation
"""
const_model_37 = ConstModel(37, name="Constant Model 37")
"""
Explanation: A model we want to test is always an instance (with specific model arguments) of a more generic model class.
Here we create an instance of ConstModel that will always produce the number 37 and give it a name.
End of explanation
"""
from sciunit.scores import BooleanScore # One of several SciUnit score types.
class EqualsTest(sciunit.Test):
"""Tests if the model predicts
the same number as the observation."""
required_capabilities = (ProducesNumber,) # The one capability required for a model to take this test.
score_type = BooleanScore # This test's 'judge' method will return a BooleanScore.
def generate_prediction(self, model):
return model.produce_number() # The model has this method if it inherits from the 'ProducesNumber' capability.
def compute_score(self, observation, prediction):
score = self.score_type(observation == prediction) # Returns a BooleanScore.
score.description = 'Passing score if the prediction equals the observation'
return score
"""
Explanation: A SciUnit test class must contain:
the capabilities a model requires to take the test.
the type of score that it will return
an implementation of generate_prediction, which will use the model's capabilities to get some values out of the model.
an implementation of compute_score, which uses the provided observation and the generated prediction to compute a sciunit Score.
End of explanation
"""
equals_37_test = EqualsTest(37, name='=37')
"""
Explanation: A SciUnit test is a specific instance of a test class, parameterized by the observation (i.e. the empirical data that the model aims to recapitulate).
Here we create a test instance parameterized by the observation 37.
End of explanation
"""
score = equals_37_test.judge(const_model_37)
"""
Explanation: Every test has a judge method which executes the test and returns a score for the provided model.
Here we judge the model we just created using the test we just created. The judge method does a lot of things behind the scenes:
1. It checks to make sure that your model expresses each capability required to take the test. It doesn't check to see if they are implemented correctly (how could it know?) but it does check to make sure the model at least claims (through inheritance) to express each capability. The required capabilities are none other than those in the test's required_capabilities attribute. Since ProducesNumber is the only required capability, and the ConstModel class inherits from the corresponding capability class, that check passes.
2. It calls the test's generate_prediction method, which uses the model's capabilities to make the model return some quantity of interest, in this case a characteristic number.
3. It calls the test's compute_score method, which compares the observation the test was instantiated with against the prediction returned in the previous step. This comparison of quantities is cast into a score (in this case, a BooleanScore), bound to some model output of interest (in this case, the number produces by the model), and that score object is returned.
4. The score returned is checked to make sure it is of the type promised in the class definition, i.e. that a BooleanScore is returned if a BooleanScore is listed in the score_type attribute of the test.
5. The score is bound to the test that returned it, the model that took the test, and the prediction and observation that were used to compute it.
End of explanation
"""
score
"""
Explanation: A score is an object containing information about the result of the test, and the provenance of that result.
Printing the score just prints a representation of its value (for a BooleanScore, True has the representation 'Pass')
End of explanation
"""
score.summarize()
"""
Explanation: We can also summarize the score in its entirety, printing information about the associated model and test.
End of explanation
"""
score.describe()
"""
Explanation: How was that score computed again?
End of explanation
"""
equals_1_test = EqualsTest(1, name='=1') # Test that model output equals 1.
equals_2_test = EqualsTest(2, name='=2') # Test that model output equals 2.
equals_suite = sciunit.TestSuite("Equals test suite", [equals_1_test, equals_2_test, equals_37_test])
"""
Explanation: Several logically related tests can be grouped using a TestSuite.
These can be instances of the same test class (instantiated with different observations) or instances of different test classes. Any tests that you think belong together can be part of a TestSuite. A test can be a part of many different suites at once.
End of explanation
"""
score_matrix = equals_suite.judge(const_model_37)
score_matrix.view()
"""
Explanation: Now we can test our model using this TestSuite, and display the results.
End of explanation
"""
const_model_1 = ConstModel(1, name='Constant Model 1')
const_model_2 = ConstModel(2, name='Constant Model 2')
score_matrix = equals_suite.judge([const_model_1, const_model_2, const_model_37])
score_matrix.view()
"""
Explanation: We can create more models and subject those to the test suite to get a more extensive score matrix.
End of explanation
"""
score_matrix[equals_1_test].view()
"""
Explanation: We can also examine the results only for one of the tests in the suite.
End of explanation
"""
score_matrix[const_model_2].view()
"""
Explanation: Or examine the results only for one of the models.
End of explanation
"""
|
mne-tools/mne-tools.github.io
|
0.22/_downloads/f5772cd483591ac49331a1b66e9b292b/plot_fix_bem_in_blender.ipynb
|
bsd-3-clause
|
# Authors: Marijn van Vliet <w.m.vanvliet@gmail.com>
# Ezequiel Mikulan <e.mikulan@gmail.com>
#
# License: BSD (3-clause)
import os
import os.path as op
import shutil
import mne
data_path = mne.datasets.sample.data_path()
subjects_dir = op.join(data_path, 'subjects')
bem_dir = op.join(subjects_dir, 'sample', 'bem')
"""
Explanation: Editing BEM surfaces in Blender
Sometimes when creating a BEM model the surfaces need manual correction because
of a series of problems that can arise (e.g. intersection between surfaces).
Here, we will see how this can be achieved by exporting the surfaces to the 3D
modeling program Blender <https://blender.org>_, editing them, and
re-importing them.
This tutorial is based on https://github.com/ezemikulan/blender_freesurfer by
Ezequiel Mikulan.
End of explanation
"""
# Put the converted surfaces in a separate 'conv' folder
conv_dir = op.join(subjects_dir, 'sample', 'conv')
os.makedirs(conv_dir, exist_ok=True)
# Load the inner skull surface and create a problem
coords, faces = mne.read_surface(op.join(bem_dir, 'inner_skull.surf'))
coords[0] *= 1.1 # Move the first vertex outside the skull
# Write the inner skull surface as an .obj file that can be imported by
# Blender.
mne.write_surface(op.join(conv_dir, 'inner_skull.obj'), coords, faces,
overwrite=True)
# Also convert the outer skull surface.
coords, faces = mne.read_surface(op.join(bem_dir, 'outer_skull.surf'))
mne.write_surface(op.join(conv_dir, 'outer_skull.obj'), coords, faces,
overwrite=True)
"""
Explanation: Exporting surfaces to Blender
In this tutorial, we are working with the MNE-Sample set, for which the
surfaces have no issues. To demonstrate how to fix problematic surfaces, we
are going to manually place one of the inner-skull vertices outside the
outer-skull mesh.
We then convert the surfaces to .obj
<https://en.wikipedia.org/wiki/Wavefront_.obj_file>_ files and create a new
folder called conv inside the FreeSurfer subject folder to keep them in.
End of explanation
"""
coords, faces = mne.read_surface(op.join(conv_dir, 'inner_skull.obj'))
coords[0] /= 1.1 # Move the first vertex back inside the skull
mne.write_surface(op.join(conv_dir, 'inner_skull_fixed.obj'), coords, faces,
overwrite=True)
"""
Explanation: Editing in Blender
We can now open Blender and import the surfaces. Go to File > Import >
Wavefront (.obj). Navigate to the conv folder and select the file you
want to import. Make sure to select the Keep Vert Order option. You can
also select the Y Forward option to load the axes in the correct direction
(RAS):
<img src="file://../../_static/blender_import_obj/blender_import_obj1.jpg" width="800" alt="Importing .obj files in Blender">
For convenience, you can save these settings by pressing the + button
next to Operator Presets.
Repeat the procedure for all surfaces you want to import (e.g. inner_skull
and outer_skull).
You can now edit the surfaces any way you like. See the
Beginner Blender Tutorial Series
<https://www.youtube.com/playlist?list=PLxLGgWrla12dEW5mjO09kR2_TzPqDTXdw>
to learn how to use Blender. Specifically, part 2
<http://www.youtube.com/watch?v=RaT-uG5wgUw&t=5m30s> will teach you how to
use the basic editing tools you need to fix the surface.
<img src="file://../../_static/blender_import_obj/blender_import_obj2.jpg" width="800" alt="Editing surfaces in Blender">
Using the fixed surfaces in MNE-Python
In Blender, you can export a surface as an .obj file by selecting it and go
to File > Export > Wavefront (.obj). You need to again select the Y
Forward option and check the Keep Vertex Order box.
<img src="file://../../_static/blender_import_obj/blender_import_obj3.jpg" width="200" alt="Exporting .obj files in Blender">
Each surface needs to be exported as a separate file. We recommend saving
them in the conv folder and ending the file name with _fixed.obj,
although this is not strictly necessary.
In order to be able to run this tutorial script top to bottom, we here
simulate the edits you did manually in Blender using Python code:
End of explanation
"""
# Read the fixed surface
coords, faces = mne.read_surface(op.join(conv_dir, 'inner_skull_fixed.obj'))
# Backup the original surface
shutil.copy(op.join(bem_dir, 'inner_skull.surf'),
op.join(bem_dir, 'inner_skull_orig.surf'))
# Overwrite the original surface with the fixed version
mne.write_surface(op.join(bem_dir, 'inner_skull.surf'), coords, faces,
overwrite=True)
"""
Explanation: Back in Python, you can read the fixed .obj files and save them as
FreeSurfer .surf files. For the :func:mne.make_bem_model function to find
them, they need to be saved using their original names in the surf
folder, e.g. surf/inner_skull.surf. Be sure to first backup the original
surfaces in case you make a mistake!
End of explanation
"""
|
klavinslab/coral
|
docs/tutorial/sequences.ipynb
|
mit
|
import coral as cor
"""
Explanation: Sequences
sequence.DNA
coral.DNA is the core data structure of coral. If you are already familiar with core python data structures, it mostly acts like a container similar to lists or strings, but also provides further object-oriented methods for DNA-specific tasks, like reverse complementation. Most design functions in coral return a coral.DNA object or something that contains a coral.DNA object (like coral.Primer). In addition, there are related coral.RNA and coral.Peptide objects for representing RNA and peptide sequences and methods for converting between them.
To get started with coral.DNA, import coral:
End of explanation
"""
example_dna = cor.DNA('atgagtaaaggagaagaacttttcactgga')
display(example_dna)
"""
Explanation: Your first sequence
Let's jump right into things. Let's make a sequence that's the first 30 bases of gfp from A. victoria. To initialize a sequence, you feed it a string of DNA characters.
End of explanation
"""
# Extract the first three bases
display(example_dna[0:3])
# Extract the last seven bases
display(example_dna[-7:])
# Reverse a sequence
display(example_dna[::-1])
# Grab every other base starting at index 0
display(example_dna[::2])
# Is the sequence 'AT' in our sequence? How about 'ATT'?
print "'AT' is in our sequence: {}.".format("AT" in example_dna)
print "'ATT' is in our sequence: {}.".format("ATT" in example_dna)
"""
Explanation: A few things just happened behind the scenes. First, the input was checked to make sure it's DNA (A, T, G, and C). For now, it supports only unambiguous letters - no N, Y, R, etc. Second, the internal representation is converted to an uppercase string - this way, DNA is displayed uniformly and functional elements (like annealing and overhang regions of primers) can be delineated using case. If you input a non-DNA sequence, a ValueError is raised.
For the most part, a sequence.DNA instance acts like a python container and many string-like operations work.
End of explanation
"""
example_dna.reverse_complement()
"""
Explanation: Several other common special methods and operators are defined for sequences - you can concatenate DNA (so long as it isn't circular) using +, repeat linear sequences using * with an integer, check for equality with == and != (note: features, not just sequences, must be identical), check the length with len(dna_object), etc.
Simple sequences - methods
In addition to slicing, sequence.DNA provides methods for common molecular manipulations. For example, reverse complementing a sequence is a single call:
End of explanation
"""
example_dna.copy()
# Incorrect way (editing shared + mutable sequence):
example_dna = cor.DNA('atgagtaaaggagaagaacttttcactgga')
variant_list = []
for i, base in enumerate(example_dna):
variant = example_dna
variant.top[i] = 'A'
variant.bottom[i] = 'T'
variant_list.append(variant)
print [str(x) for x in variant_list]
print
# Correct way (copy mutable sequence, then edit):
example_dna = cor.DNA('atgagtaaaggagaagaacttttcactgga')
variant_list = []
for i, base in enumerate(example_dna):
variant = example_dna.copy()
variant.top[i] = 'A'
variant.bottom[i] = 'T'
variant_list.append(variant)
print [str(x) for x in variant_list]
"""
Explanation: An extremely important method is the .copy() method. It may seem redundant to have an entire function for copying a sequence - why not just assign a sequence.DNA object to a new variable? As in most high-level languages, python does not actually copy entire objects in memory when assignment happens - it just adds another reference to the same data. The short of it is that the very common operation of generating a lot of new variants to a sequence, or copying a sequence, requires the use of a .copy() method. For example, if you want to generate a new list of variants where an 'a' is substituted one at a time at each part of the sequence, using .copy() returns the correct result (the first example) while directly accessing example_dna has horrible consequences (the edits build up, as they all modify the same piece of data sequentially):
End of explanation
"""
revcomp_dna = example_dna.reverse_complement()
display(example_dna)
display(revcomp_dna)
"""
Explanation: An important fact about sequence.DNA methods and slicing is that none of the operations modify the object directly (they don't mutate their parent) - if we look at example_dna, it has not been reverse-complemented itself. Running example_dna.reverse_complement() outputs a new sequence, so if you want to keep the result you need to assign it to a variable:
End of explanation
"""
# The top strand - a simple python string in the 5' -> 3' orientation.
example_dna.top
# The bottom strand - another python string, also in the 5' -> 3' orientation.
example_dna.bottom
# Sequences are double stranded, or 'ds' by default.
# This is a directly accessible attribute, not a method, so () is not required.
example_dna.ds
# DNA can be linear or circular - check the boolean `circular` attribute.
example_dna.circular
# You can switch between topologies using the .circularize and .linearize methods.
# Circular DNA has different properties:
# 1) it can't be concatenated to
# 2) sequence searches using .locate will search over the current origin (e.g. from -10 to +10 for a 20-base sequence).
circular_dna = example_dna.circularize()
circular_dna.circular
# Linearization is more complex - you can choose the index at which to linearize a circular sequence.
# This simulates a precise double stranded break at the index of your choosing.
# The following example shows the difference between linearizing at index 0 (default) versus index 2
# (python 0-indexes, so index 2 = 3rd base, i.e. 'g' in 'atg')
print circular_dna.linearize()
print
print circular_dna.linearize(2)
# Sometimes you just want to rotate the sequence around - i.e. switch the top and bottom strands.
# For this, use the .flip() method
example_dna.flip()
"""
Explanation: You also have direct access to important attributes of a sequence.DNA object. The following are examples of how to get important sequences or information about a sequence.
End of explanation
"""
|
GoogleCloudPlatform/training-data-analyst
|
courses/machine_learning/deepdive2/text_classification/solutions/rnn_encoder_decoder.ipynb
|
apache-2.0
|
pip freeze | grep nltk || pip install nltk
import os
import pickle
import sys
import nltk
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
import tensorflow as tf
from tensorflow.keras.layers import (
Dense,
Embedding,
GRU,
Input,
)
from tensorflow.keras.models import (
load_model,
Model,
)
import utils_preproc
print(tf.__version__)
SEED = 0
MODEL_PATH = 'translate_models/baseline'
DATA_URL = 'http://storage.googleapis.com/download.tensorflow.org/data/spa-eng.zip'
LOAD_CHECKPOINT = False
tf.random.set_seed(SEED)
"""
Explanation: Simple RNN Encode-Decoder for Translation
Learning Objectives
1. Learn how to create a tf.data.Dataset for seq2seq problems
1. Learn how to train an encoder-decoder model in Keras
1. Learn how to save the encoder and the decoder as separate models
1. Learn how to piece together the trained encoder and decoder into a translation function
1. Learn how to use the BLEU score to evaluate a translation model
Introduction
In this lab we'll build a translation model from Spanish to English using a RNN encoder-decoder model architecture.
We will start by creating train and eval datasets (using the tf.data.Dataset API) that are typical for seq2seq problems. Then we will use the Keras functional API to train an RNN encoder-decoder model, which we will save as two separate models, the encoder and the decoder. Using these two separate pieces we will implement the translation function.
Finally, we'll benchmark our results using the industry-standard BLEU score.
End of explanation
"""
path_to_zip = tf.keras.utils.get_file(
'spa-eng.zip', origin=DATA_URL, extract=True)
path_to_file = os.path.join(
os.path.dirname(path_to_zip),
"spa-eng/spa.txt"
)
print("Translation data stored at:", path_to_file)
data = pd.read_csv(
path_to_file, sep='\t', header=None, names=['english', 'spanish'])
data.sample(3)
"""
Explanation: Downloading the Data
We'll use a language dataset provided by http://www.manythings.org/anki/. The dataset contains Spanish-English translation pairs in the format:
May I borrow this book? ¿Puedo tomar prestado este libro?
The dataset is a curated list of 120K translation pairs from http://tatoeba.org/, a platform for community contributed translations by native speakers.
End of explanation
"""
raw = [
"No estamos comiendo.",
"Está llegando el invierno.",
"El invierno se acerca.",
"Tom no comio nada.",
"Su pierna mala le impidió ganar la carrera.",
"Su respuesta es erronea.",
"¿Qué tal si damos un paseo después del almuerzo?"
]
processed = [utils_preproc.preprocess_sentence(s) for s in raw]
processed
"""
Explanation: From the utils_preproc package we have written for you,
we will use the following functions to pre-process our dataset of sentence pairs.
Sentence Preprocessing
The utils_preproc.preprocess_sentence() method does the following:
1. Converts sentence to lower case
2. Adds a space between punctuation and words
3. Replaces tokens that aren't a-z or punctuation with space
4. Adds <start> and <end> tokens
For example:
End of explanation
"""
integerized, tokenizer = utils_preproc.tokenize(processed)
integerized
"""
Explanation: Sentence Integerizing
The utils_preproc.tokenize() method does the following:
Splits each sentence into a token list
Maps each token to an integer
Pads to length of longest sentence
It returns an instance of a Keras Tokenizer
containing the token-integer mapping along with the integerized sentences:
End of explanation
"""
tokenizer.sequences_to_texts(integerized)
"""
Explanation: The outputted tokenizer can be used to get back the actual words
from the integers representing them:
End of explanation
"""
def load_and_preprocess(path, num_examples):
with open(path_to_file, 'r') as fp:
lines = fp.read().strip().split('\n')
# TODO 1a
sentence_pairs = [
[utils_preproc.preprocess_sentence(sent) for sent in line.split('\t')]
for line in lines[:num_examples]
]
return zip(*sentence_pairs)
en, sp = load_and_preprocess(path_to_file, num_examples=10)
print(en[-1])
print(sp[-1])
"""
Explanation: Creating the tf.data.Dataset
load_and_preprocess
Let's first implement a function that will read the raw sentence-pair file
and preprocess the sentences with utils_preproc.preprocess_sentence.
The load_and_preprocess function takes as input
- the path where the sentence-pair file is located
- the number of examples one wants to read in
It returns a tuple whose first component contains the english
preprocessed sentences, while the second component contains the
spanish ones:
End of explanation
"""
def load_and_integerize(path, num_examples=None):
targ_lang, inp_lang = load_and_preprocess(path, num_examples)
# TODO 1b
input_tensor, inp_lang_tokenizer = utils_preproc.tokenize(inp_lang)
target_tensor, targ_lang_tokenizer = utils_preproc.tokenize(targ_lang)
return input_tensor, target_tensor, inp_lang_tokenizer, targ_lang_tokenizer
"""
Explanation: load_and_integerize
Using utils_preproc.tokenize, let us now implement the function load_and_integerize that takes as input the data path along with the number of examples we want to read in and returns the following tuple:
python
(input_tensor, target_tensor, inp_lang_tokenizer, targ_lang_tokenizer)
where
input_tensor is an integer tensor of shape (num_examples, max_length_inp) containing the integerized versions of the source language sentences
target_tensor is an integer tensor of shape (num_examples, max_length_targ) containing the integerized versions of the target language sentences
inp_lang_tokenizer is the source language tokenizer
targ_lang_tokenizer is the target language tokenizer
End of explanation
"""
TEST_PROP = 0.2
NUM_EXAMPLES = 30000
"""
Explanation: Train and eval splits
We'll split this data 80/20 into train and validation, and we'll use only the first 30K examples, since we'll be training on a single GPU.
Let us set variable for that:
End of explanation
"""
input_tensor, target_tensor, inp_lang, targ_lang = load_and_integerize(
path_to_file, NUM_EXAMPLES)
"""
Explanation: Now let's load and integerize the sentence pairs and store the tokenizers for the source and the target language into the inp_lang and targ_lang variables respectively:
End of explanation
"""
max_length_targ = target_tensor.shape[1]
max_length_inp = input_tensor.shape[1]
"""
Explanation: Let us store the maximal sentence length of both languages into two variables:
End of explanation
"""
splits = train_test_split(
input_tensor, target_tensor, test_size=TEST_PROP, random_state=SEED)
input_tensor_train = splits[0]
input_tensor_val = splits[1]
target_tensor_train = splits[2]
target_tensor_val = splits[3]
"""
Explanation: We are now using scikit-learn train_test_split to create our splits:
End of explanation
"""
(len(input_tensor_train), len(target_tensor_train),
len(input_tensor_val), len(target_tensor_val))
"""
Explanation: Let's make sure the number of examples in each split looks good:
End of explanation
"""
print("Input Language; int to word mapping")
print(input_tensor_train[0])
print(utils_preproc.int2word(inp_lang, input_tensor_train[0]), '\n')
print("Target Language; int to word mapping")
print(target_tensor_train[0])
print(utils_preproc.int2word(targ_lang, target_tensor_train[0]))
"""
Explanation: The utils_preproc.int2word function allows you to transform the integerized sentences back into words. Note that the <start> token is always encoded as 1, while the <end> token is always encoded as 0:
End of explanation
"""
def create_dataset(encoder_input, decoder_input):
# TODO 1c
# shift ahead by 1
target = tf.roll(decoder_input, -1, 1)
# replace last column with 0s
zeros = tf.zeros([target.shape[0], 1], dtype=tf.int32)
target = tf.concat((target[:, :-1], zeros), axis=-1)
dataset = tf.data.Dataset.from_tensor_slices(
((encoder_input, decoder_input), target))
return dataset
"""
Explanation: Create tf.data dataset for train and eval
Below we implement the create_dataset function that takes as input
* encoder_input which is an integer tensor of shape (num_examples, max_length_inp) containing the integerized versions of the source language sentences
* decoder_input which is an integer tensor of shape (num_examples, max_length_targ)containing the integerized versions of the target language sentences
It returns a tf.data.Dataset containing examples for the form
python
((source_sentence, target_sentence), shifted_target_sentence)
where source_sentence and target_sentence are the integerized versions of the source-target language pairs and shifted_target_sentence is the same as target_sentence but with the indices shifted by 1.
Remark: In the training code, source_sentence (resp. target_sentence) will be fed as the encoder (resp. decoder) input, while shifted_target_sentence will be used to compute the cross-entropy loss by comparing the decoder output with the shifted target sentences.
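For intuition, here is a small illustration (a toy tensor, not part of the original lab) of the shift performed inside create_dataset:
```python
# Toy example of the shift: [[1, 5, 7, 0]] -> [[5, 7, 0, 0]]
toy_decoder_input = tf.constant([[1, 5, 7, 0]], dtype=tf.int32)
toy_target = tf.roll(toy_decoder_input, -1, 1)                # [[5, 7, 0, 1]]
zeros = tf.zeros([toy_target.shape[0], 1], dtype=tf.int32)
toy_target = tf.concat((toy_target[:, :-1], zeros), axis=-1)  # [[5, 7, 0, 0]]
print(toy_target)
```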
End of explanation
"""
BUFFER_SIZE = len(input_tensor_train)
BATCH_SIZE = 64
train_dataset = create_dataset(
input_tensor_train, target_tensor_train).shuffle(
BUFFER_SIZE).repeat().batch(BATCH_SIZE, drop_remainder=True)
eval_dataset = create_dataset(
input_tensor_val, target_tensor_val).batch(
BATCH_SIZE, drop_remainder=True)
"""
Explanation: Let's now create the actual train and eval dataset using the function above:
End of explanation
"""
EMBEDDING_DIM = 256
HIDDEN_UNITS = 1024
INPUT_VOCAB_SIZE = len(inp_lang.word_index) + 1
TARGET_VOCAB_SIZE = len(targ_lang.word_index) + 1
"""
Explanation: Training the RNN encoder-decoder model
We use an encoder-decoder architecture, however we embed our words into a latent space prior to feeding them into the RNN.
End of explanation
"""
encoder_inputs = Input(shape=(None,), name="encoder_input")
# TODO 2a
encoder_inputs_embedded = Embedding(
input_dim=INPUT_VOCAB_SIZE,
output_dim=EMBEDDING_DIM,
input_length=max_length_inp)(encoder_inputs)
encoder_rnn = GRU(
units=HIDDEN_UNITS,
return_sequences=True,
return_state=True,
recurrent_initializer='glorot_uniform')
encoder_outputs, encoder_state = encoder_rnn(encoder_inputs_embedded)
"""
Explanation: Let's implement the encoder network with Keras functional API. It will
* start with an Input layer that will consume the source language integerized sentences
* then feed them to an Embedding layer of EMBEDDING_DIM dimensions
* which in turn will pass the embeddings to a GRU recurrent layer with HIDDEN_UNITS
The output of the encoder will be the encoder_outputs and the encoder_state.
End of explanation
"""
decoder_inputs = Input(shape=(None,), name="decoder_input")
# TODO 2b
decoder_inputs_embedded = Embedding(
input_dim=TARGET_VOCAB_SIZE,
output_dim=EMBEDDING_DIM,
input_length=max_length_targ)(decoder_inputs)
decoder_rnn = GRU(
units=HIDDEN_UNITS,
return_sequences=True,
return_state=True,
recurrent_initializer='glorot_uniform')
decoder_outputs, decoder_state = decoder_rnn(
decoder_inputs_embedded, initial_state=encoder_state)
"""
Explanation: We now implement the decoder network, which is very similar to the encoder network.
It will
* start with an Input layer that will consume the source language integerized sentences
* then feed that input to an Embedding layer of EMBEDDING_DIM dimensions
* which in turn will pass the embeddings to a GRU recurrent layer with HIDDEN_UNITS
Important: The main difference from the encoder is that the recurrent GRU layer will take as input not only the decoder input embeddings, but also the encoder_state as outputted by the encoder above. This is where the two networks are linked!
The output of the decoder will be the decoder_outputs and the decoder_state.
End of explanation
"""
decoder_dense = Dense(TARGET_VOCAB_SIZE, activation='softmax')
predictions = decoder_dense(decoder_outputs)
"""
Explanation: The last part of the encoder-decoder architecture is a softmax Dense layer that will create the next word probability vector or next word predictions from the decoder_output:
End of explanation
"""
# TODO 2c
model = Model(inputs=[encoder_inputs, decoder_inputs], outputs=predictions)
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy')
model.summary()
"""
Explanation: To be able to train the encoder-decoder network defined above, we now need to create a trainable Keras Model by specifying which are the inputs and the outputs of our problem. They should correspond exactly to the types of inputs and outputs in our train and eval tf.data.Dataset, since that's what will be fed to the inputs and outputs we declare while instantiating the Keras Model.
While compiling our model, we should make sure that the loss is the sparse_categorical_crossentropy so that we can compare the true word indices for the target language as outputted by our train tf.data.Dataset with the next word predictions vector as outputted by the decoder:
End of explanation
"""
STEPS_PER_EPOCH = len(input_tensor_train)//BATCH_SIZE
EPOCHS = 1
history = model.fit(
train_dataset,
steps_per_epoch=STEPS_PER_EPOCH,
validation_data=eval_dataset,
epochs=EPOCHS
)
"""
Explanation: Let's now train the model!
End of explanation
"""
if LOAD_CHECKPOINT:
encoder_model = load_model(os.path.join(MODEL_PATH, 'encoder_model.h5'))
decoder_model = load_model(os.path.join(MODEL_PATH, 'decoder_model.h5'))
else:
# TODO 3a
encoder_model = Model(inputs=encoder_inputs, outputs=encoder_state)
decoder_state_input = Input(shape=(HIDDEN_UNITS,), name="decoder_state_input")
# Reuses weights from the decoder_rnn layer
decoder_outputs, decoder_state = decoder_rnn(
decoder_inputs_embedded, initial_state=decoder_state_input)
# Reuses weights from the decoder_dense layer
predictions = decoder_dense(decoder_outputs)
decoder_model = Model(
inputs=[decoder_inputs, decoder_state_input],
outputs=[predictions, decoder_state]
)
"""
Explanation: Implementing the translation (or decoding) function
We can't just use model.predict(), because we don't know all the inputs we used during training. We only know the encoder_input (source language) but not the decoder_input (target language), which is what we want to predict (i.e., the translation of the source language)!
We do however know the first token of the decoder input, which is the <start> token. So using this plus the state of the encoder RNN, we can predict the next token. We will then use that token to be the second token of decoder input, and continue like this until we predict the <end> token, or we reach some defined max length.
So, the strategy now is to split our trained network into two independent Keras models:
an encoder model with signature encoder_inputs -> encoder_state
a decoder model with signature [decoder_inputs, decoder_state_input] -> [predictions, decoder_state]
This way, we will be able to encode the source language sentence into the vector encoder_state using the encoder and feed it to the decoder model along with the <start> token at step 1.
Given that input, the decoder will produce the first word of the translation, by sampling from the predictions vector (for simplicity, our sampling strategy here will be to take the next word to be the one whose index has the maximum probability in the predictions vector) along with a new state vector, the decoder_state.
At this point, we can feed the predicted first word along with the new decoder_state back into the decoder to predict the second word of the translation.
This process can be continued until the decoder produces the <end> token or the maximum decoding length is reached.
This is how we will implement our translation (or decoding) function, but let us first extract a separate encoder and a separate decoder from our trained encoder-decoder model.
Remark: If we have already trained and saved the models (i.e., LOAD_CHECKPOINT is True) we will just load the models; otherwise, we extract them from the trained network above by explicitly creating the encoder and decoder Keras Models with the signatures we want.
End of explanation
"""
def decode_sequences(input_seqs, output_tokenizer, max_decode_length=50):
"""
Arguments:
input_seqs: int tensor of shape (BATCH_SIZE, SEQ_LEN)
output_tokenizer: Tokenizer used to convert from int to words
Returns translated sentences
"""
# Encode the input as state vectors.
states_value = encoder_model.predict(input_seqs)
# Populate the first character of target sequence with the start character.
batch_size = input_seqs.shape[0]
target_seq = tf.ones([batch_size, 1])
decoded_sentences = [[] for _ in range(batch_size)]
# TODO 4: Sampling loop
for i in range(max_decode_length):
output_tokens, decoder_state = decoder_model.predict(
[target_seq, states_value])
# Sample a token
sampled_token_index = np.argmax(output_tokens[:, -1, :], axis=-1)
tokens = utils_preproc.int2word(output_tokenizer, sampled_token_index)
for j in range(batch_size):
decoded_sentences[j].append(tokens[j])
# Update the target sequence (of length 1).
target_seq = tf.expand_dims(tf.constant(sampled_token_index), axis=-1)
# Update states
states_value = decoder_state
return decoded_sentences
"""
Explanation: Now that we have a separate encoder and a separate decoder, let's implement a translation function, to which we will give the generic name of decode_sequences (to stress that this procedure is general to all seq2seq problems).
decode_sequences will take as input
* input_seqs which is the integerized source language sentence tensor that the encoder can consume
* output_tokenizer which is the target language tokenizer we will need to extract back words from the predicted word integers
* max_decode_length which is the length after which we stop decoding if the <end> token has not been predicted
Note: Now that the encoder and decoder have been turned into Keras models, to feed them their input, we need to use the .predict method.
End of explanation
"""
sentences = [
"No estamos comiendo.",
"Está llegando el invierno.",
"El invierno se acerca.",
"Tom no comio nada.",
"Su pierna mala le impidió ganar la carrera.",
"Su respuesta es erronea.",
"¿Qué tal si damos un paseo después del almuerzo?"
]
reference_translations = [
"We're not eating.",
"Winter is coming.",
"Winter is coming.",
"Tom ate nothing.",
"His bad leg prevented him from winning the race.",
"Your answer is wrong.",
"How about going for a walk after lunch?"
]
machine_translations = decode_sequences(
utils_preproc.preprocess(sentences, inp_lang),
targ_lang,
max_length_targ
)
for i in range(len(sentences)):
print('-')
print('INPUT:')
print(sentences[i])
print('REFERENCE TRANSLATION:')
print(reference_translations[i])
print('MACHINE TRANSLATION:')
print(machine_translations[i])
"""
Explanation: Now we're ready to predict!
End of explanation
"""
if not LOAD_CHECKPOINT:
os.makedirs(MODEL_PATH, exist_ok=True)
# TODO 3b
model.save(os.path.join(MODEL_PATH, 'model.h5'))
encoder_model.save(os.path.join(MODEL_PATH, 'encoder_model.h5'))
decoder_model.save(os.path.join(MODEL_PATH, 'decoder_model.h5'))
with open(os.path.join(MODEL_PATH, 'encoder_tokenizer.pkl'), 'wb') as fp:
pickle.dump(inp_lang, fp)
with open(os.path.join(MODEL_PATH, 'decoder_tokenizer.pkl'), 'wb') as fp:
pickle.dump(targ_lang, fp)
"""
Explanation: Checkpoint Model
Now let us save the full training encoder-decoder model, as well as the separate encoder and decoder models, to disk for later reuse:
End of explanation
"""
def bleu_1(reference, candidate):
reference = list(filter(lambda x: x != '', reference)) # remove padding
candidate = list(filter(lambda x: x != '', candidate)) # remove padding
smoothing_function = nltk.translate.bleu_score.SmoothingFunction().method1
return nltk.translate.bleu_score.sentence_bleu(
reference, candidate, (1,), smoothing_function)
def bleu_4(reference, candidate):
reference = list(filter(lambda x: x != '', reference)) # remove padding
candidate = list(filter(lambda x: x != '', candidate)) # remove padding
smoothing_function = nltk.translate.bleu_score.SmoothingFunction().method1
return nltk.translate.bleu_score.sentence_bleu(
reference, candidate, (.25, .25, .25, .25), smoothing_function)
"""
Explanation: Evaluation Metric (BLEU)
Unlike say, image classification, there is no one right answer for a machine translation. However our current loss metric, cross entropy, only gives credit when the machine translation matches the exact same word in the same order as the reference translation.
Many attempts have been made to develop a better metric for natural language evaluation. The most popular currently is Bilingual Evaluation Understudy (BLEU).
It is quick and inexpensive to calculate.
It allows flexibility for the ordering of words and phrases.
It is easy to understand.
It is language independent.
It correlates highly with human evaluation.
It has been widely adopted.
The score is from 0 to 1, where 1 is an exact match.
It works by counting matching n-grams between the machine and reference texts, regardless of order. BLEU-4 counts matching n-grams from 1 to 4 (1-gram, 2-gram, 3-gram and 4-gram). It is common to report both BLEU-1 and BLEU-4
It still is imperfect, since it gives no credit to synonyms and so human evaluation is still best when feasible. However BLEU is commonly considered the best among bad options for an automated metric.
The NLTK framework has an implementation that we will use.
We can't calculate BLEU during training, because at that time the correct decoder input is used. Instead we'll calculate it now.
For more info: https://machinelearningmastery.com/calculate-bleu-score-for-text-python/
End of explanation
"""
%%time
num_examples = len(input_tensor_val)
bleu_1_total = 0
bleu_4_total = 0
for idx in range(num_examples):
# TODO 5
reference_sentence = utils_preproc.int2word(
targ_lang, target_tensor_val[idx][1:])
decoded_sentence = decode_sequences(
input_tensor_val[idx:idx+1], targ_lang, max_length_targ)[0]
bleu_1_total += bleu_1(reference_sentence, decoded_sentence)
bleu_4_total += bleu_4(reference_sentence, decoded_sentence)
print('BLEU 1: {}'.format(bleu_1_total/num_examples))
print('BLEU 4: {}'.format(bleu_4_total/num_examples))
"""
Explanation: Let's now average the bleu_1 and bleu_4 scores for all the sentence pairs in the eval set. The next cell takes some time to run, the bulk of which is decoding the 6000 sentences in the validation set. Please wait until it completes.
End of explanation
"""
|
d00d/quantNotebooks
|
Notebooks/quantopian_research_public/notebooks/lectures/Leverage/notebook.ipynb
|
unlicense
|
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from __future__ import division
capital_base = 100000
r_p = 0.05 # Aggregate performance of assets in the portfolio
r_no_lvg = capital_base * r_p
print 'Portfolio returns without leverage: {0}'.format(r_no_lvg)
"""
Explanation: Leverage
by Maxwell Margenot and Delaney Granizo-Mackenzie
Part of the Quantopian Lecture Series:
www.quantopian.com/lectures
github.com/quantopian/research_public
Notebook released under the Creative Commons Attribution 4.0 License.
What is leverage?
Leverage is borrowing money, then investing that money into some trading strategy so as to effectively multiply your initial capital base by some amount.
More Specifically
Leverage is reinvesting debt to gain a greater return on an investment. We include debt in our asset portfolio as a financial instrument that pays one large cash flow upfront (the loan) and gradually pays negative cash flows out over time. The size of these negative cash flows is determined by the interest rate on our debt. The large upfront cashflow allows us to supplement our capital base. In this way we use our capital and our leverage together to purchase the assets necessary to execute our trading strategy.
Why would you do this?
If you are confident in a strategy and believe it to be low risk, you can put more money than you currently have into that strategy in an effort to multiply your returns. You of course have to have confidence that the returns on your strategy will exceed the interest rate on your debt.
Risk Adjusted Returns
We'll talk about this more later, but risk adjusted return is expressed in the Sharpe Ratio (excess returns/risk). A strategy with a high Sharpe Ratio may not have good absolute returns, say $2\%$ annually, but if the Sharpe Ratio is high the risk will also be correspondingly low. Multiplying the capital base multiplies both the risk and returns of the strategy, keeping the Sharpe Ratio the same. See lower in the notebook for more.
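As a quick numerical sketch of that claim (hypothetical returns, ignoring borrowing costs), scaling a return series by a leverage factor scales the mean and the volatility by the same amount, leaving their ratio unchanged:
```python
# Hypothetical return series; leverage scales mean and volatility by the same factor,
# so the (mean / volatility) ratio is unchanged (borrowing costs ignored)
returns = np.array([0.01, -0.02, 0.03, 0.005, -0.01])
lev = 2.0
print returns.mean() / returns.std()
print (lev * returns).mean() / (lev * returns).std()
```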
How do I use leverage?
In the context of algorithmic trading we are specifically interested in margin and trading on margin. Trading on margin is a type of leverage as it involves taking out a loan from your broker and adding it to your capital base in order to increase the returns of your trading strategy. Since you are borrowing money to invest, you ideally only trade on margin when the returns of your strategy are greater than the interest that you pay on that debt. At many points in the execution of a trading strategy, you may attempt to make trades that would exceed your current capital. At this point, the broker checks if you are authorized to trade on margin (borrow money) and, if so, lends you the money necessary to execute the trade. Each person’s margin account will have different terms depending on their broker, size of account, risk of strategies, and other factors.
If you have a profitable strategy, using leverage can prop up the amount of money that you make overall by padding the money that you are working with. The involvement of the broker is an important factor to consider when constructing algorithmic trading strategies because your trading strategy will borrow automatically as needed when you need more money to cover a position. You may want to limit how much leverage your strategy can take on so that you are not borrowing more than you are comfortable with.
We measure the current leverage of a portfolio by examining the leverage ratio. The leverage ratio of an algorithm is calculated as the sum of your debt and your capital base divided by your capital base. We limit the amount of leverage that our strategy uses by limiting the leverage ratio.
$$ \text{Leverage Ratio} = \frac{\text{Debt} + \text{Capital Base}}{\text{Capital Base}}$$
Let's look at a very simple example of how introducing leverage can affect a portfolio. Consider a single period model, consisting of today and tomorrow, in which we receive our returns tomorrow.
End of explanation
"""
debt = 100000
r_lvg = (capital_base + debt) * r_p
r_lvg_pct = r_lvg / capital_base
# Returns are calculated over the initial capital base
# Think of the debt as an asset purchased and added to the portfolio
lvg_ratio = (debt + capital_base) / capital_base
print 'Portfolio returns with leverage: {0}'.format(r_lvg)
print 'Percentage returns with {1}x leverage: {0}'.format(r_lvg_pct, lvg_ratio)
"""
Explanation: This is what portfolio returns look like without leverage. Let's add some debt, leveraging the portfolio, and see how the returns change.
End of explanation
"""
capital_base = 100000
debt = 50000
i = 0.02
r_p = 0.05
int_pmt = i * debt
r_lvg = (capital_base + debt) * r_p
r_total = r_lvg - int_pmt
r_pct_lvg = r_total / capital_base
lvg_ratio = (capital_base + debt) / capital_base
print 'Portfolio returns with leverage and interest: {0}'.format(r_total)
print 'Percentage returns with {1}x leverage and {2}% interest: {0}'.format(r_pct_lvg, lvg_ratio, i * 100)
"""
Explanation: This is the ideal situation, that someone would lend you money without asking for anything in return. It results in double the effective additive returns of an unlevered strategy, which is just delightful. However, we know that in the real world there is no way that this would actually happen. Let's consider what happens when we add in the effects of an interest payment in our one-period model.
End of explanation
"""
bt = get_backtest('57e297562a42c9103c11a920')
recorded_vars = bt.recorded_vars
leverage = recorded_vars['leverage']
daily_performance = bt.daily_performance
daily_returns = daily_performance['returns']
plt.plot(leverage);
plt.title("Leverage Ratio of a Trading Algorithm Over Time")
plt.xlabel('Date');
plt.ylabel('Leverage Ratio');
plt.legend();
"""
Explanation: That makes a lot more sense. It would be unreasonable for us to assume that we can add someone else's money to our portfolio without some sort of repayment schedule. Our returns are not as high as they were in the levered portfolio with no interest, but we are still gaining a greater amount of wealth by using leverage, despite the interest rates. As long as we have a reliable strategy that can make sufficient returns to offset the cost of debt we will be able to benefit from levering a portfolio.
Our additive returns have increased over our unlevered strategy, but overall we are gaining a lower percentage return. This is not entirely a bad thing, as with a larger amount of money to trade on we are able to add more overall value to our portfolio. However, if we are not careful with how we manage leverage, we could potentially end up spending all of our profits trying to pay off the interest that we accrued to make them in the first place.
This single-period model is only a small piece of the story. Loans are rarely, if ever, paid off in one period. Payments are spread out over the life of a loan, ensuring that you do not simply get the money for free. In this context, to properly earn a profit using leverage, we obviously have to be making more money than we are paying out.
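As a rough multi-period sketch of that trade-off (the per-period return and interest rate below are hypothetical):
# Hypothetical per-period rates over a year of monthly periods
periods = 12
r_p = 0.01              # assumed per-period portfolio return
i = 0.003               # assumed per-period interest rate on the debt
capital_base = 100000
debt = 50000
levered_profit = (capital_base + debt) * r_p * periods
interest_paid = debt * i * periods
unlevered_profit = capital_base * r_p * periods
print('Levered profit after interest: {0}'.format(levered_profit - interest_paid))
print('Unlevered profit:              {0}'.format(unlevered_profit))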
How do I get leverage?
Naturally, borrowing money to do anything will incur interest payments and additional fees. When trading with leverage, or on margin, these loans will come from your broker. Many brokers are loath to part with their cash without a good reason. Using leverage with high-volatility strategies can be dangerous unless you have a high tolerance for risk. Even if you lose money, you still have to pay the broker!
Leverage in an algorithm
Handling leverage gets significantly more complicated when we are dealing with an algorithm. Every time an algorithm rebalances its portfolio or makes a trade, there is a possibility of affecting the leverage ratio. If there isn't enough cash on hand to cover its positions, it will need to borrow more. Conversely, it may end up using less cash overall for the next set of trades, decreasing the leverage ratio.
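A hedged sketch of what a leverage guard might look like inside a Quantopian/zipline-style rebalance function (context.account.leverage and order_target_percent follow that API; the target-weights dictionary is an assumption for illustration):
MAX_LEVERAGE = 1.1  # assumed cap
def rebalance(context, data):
    # Skip placing new orders while the account is over the chosen leverage cap
    if context.account.leverage > MAX_LEVERAGE:
        return
    # Otherwise move toward the (assumed) precomputed target weights
    for asset, weight in context.target_weights.items():
        order_target_percent(asset, weight)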
This is a backtest from our template long-short algorithm attached to our long-short equity lecture over the year 2015 (Note that an upgrade to this template algorithm is coming soon).
End of explanation
"""
plt.plot(daily_returns);
plt.xlabel('Date');
plt.ylabel('Daily Returns');
plt.legend();
print "Mean return: ", daily_returns.mean()
print "Volatility: ", daily_returns.std()
"""
Explanation: Here is the leverage ratio of this algorithm plotted over time. Notice how it jumps around quite frequently. The ratio is below $1$ when it is not using all of its base capital and it spikes above $1$ whenever it makes a trade on margin. The algorithm associated with this leverage ratio is a long-short equity algorithm based on a combination of fundamental factors. For an overview of how a long-short equity strategy works, please see the lectures page.
A key feature of this sort of strategy is that it can trade hundreds, sometimes even thousands of equities at once. As such, we run the risk of incurring some fairly large rebalancing costs, depending on how frequently we rebalance. This algorithm specifically rebalances on a monthly basis. As we can see on the above graph of the leverage ratio, a lot of the largest changes occur around the start of each month.
To see how the rebalancing structure and maximum leverage can affect the leverage ratio of the algorithm when it is executed, go into the template and modify these parameters. Changing the type of algorithm will also drastically affect how it uses leverage. Feel free to experiment.
Things to try:
1. Change the timing of the rebalancing between daily, monthly, and weekly.
2. Modify the amount of leverage that the portfolio is allowed to take on
3. Restrict the universe that the algorithm trades within by applying more filters to your trading universe.
4. Instead of making all portfolio weights equally-weighted, use a portfolio optimization scheme like Markowitz or Mean Absolute Deviation portfolio optimization.
Here are the associated returns of this algorithm. The volatility of these returns is important to take into account when examining the leverage of a strategy.
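As a rough follow-on sketch, the daily statistics can be annualized (assuming roughly 252 trading days per year and reusing the daily_returns series pulled from the backtest above):
import numpy as np
trading_days = 252  # assumed trading days per year
ann_return = daily_returns.mean() * trading_days
ann_vol = daily_returns.std() * np.sqrt(trading_days)
print('Annualized return:     {0}'.format(ann_return))
print('Annualized volatility: {0}'.format(ann_vol))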
End of explanation
"""
# Note these are all expected returns. You need to validate that
# your strategy will continue to produce these returns and volatility using other methods.
# Just measuring historically is a very poor predictor of future performance.
# We'll for now assume you've already validated that you can expect to keep seeing numbers similar to these.
strat_A_ann_return = 0.22
strat_A_ann_vol = 0.15
strat_B_ann_return = 0.05
strat_B_ann_vol = 0.02
# We'll assume a risk free rate of 0.02
risk_free_rate = 0.02
print('Strategy A Sharpe: %s' % ((strat_A_ann_return - risk_free_rate) / strat_A_ann_vol))
print('Strategy B Sharpe: %s' % ((strat_B_ann_return - risk_free_rate) / strat_B_ann_vol))
# Add in leverage to B
leverage = 3
# Expressed in returns
print('Strategy B Sharpe: %s' % (
    (strat_B_ann_return * leverage - risk_free_rate * leverage) / (strat_B_ann_vol * leverage)
))
print('Strategy B Levered Annual Returns: %s' % (
    (strat_B_ann_return * leverage - risk_free_rate * leverage)
))
"""
Explanation: Using leverage can be dangerous when you are dealing with a more volatile strategy. Because we are trading with borrowed money, we are on the hook to return it, and we have to make sure that the broker gets their money back before we take our profit. If we end up facing a margin call, we have to pony up more funds if we want to hold our positions. Monitoring your strategy's volatility and ensuring that you only take on palatable amounts of debt are key aspects of determining the quality of your trading strategy.
Risk-Adjusted Returns
Comparing returns of different investment opportunities without taking risk into account is meaningless. Some return streams may be higher than others, but this may be due to the risks taken on rather than any merit in the strategy itself. Taking on higher risk should in theory lead to higher returns, but then how do we judge the quality of these returns for the amount of risk we have to handle? This is where risk-adjusted returns and methods of risk-adjustment come into play. If we adjust several return streams for risk then we can consider them on equal footing, independent of the risk. This allows us to effectively compare and determine which return streams are the best for a given desired risk profile.
One of the most prominent risk-adjusted measures is the Sharpe Ratio, defined as follows.
$$ \text{Sharpe Ratio} = \frac{r_p - r_f}{\sigma_p} $$
The Sharpe Ratio essentially normalizes the returns of a portfolio, giving us a metric that we can use as a measure of quality relative to other revenue streams. A higher Sharpe Ratio indicates that you are getting more return relative to the risk that your strategy is taking on.
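A one-line helper mirroring the formula above (annualized return and volatility are assumed as inputs):
def sharpe_ratio(annual_return, annual_vol, risk_free_rate):
    # (r_p - r_f) / sigma_p
    return (annual_return - risk_free_rate) / annual_vol
# e.g. sharpe_ratio(0.22, 0.15, 0.02) is roughly 1.33, matching strategy A in the example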
Compare Strategies by Sharpe Ratio and then Lever as Needed
In general you want to compare the Sharpe Ratio of two strategies you may be interested in. Pick the strategy with the better Sharpe Ratio and then use leverage to multiply the returns up to where you want them. Assuming constraints like capital capacity don't kick in, you can add more money through leverage and bring a $2\%$ per year strategy up to a $10\%$ per year strategy while maintaining the same Sharpe, or invest a fraction of your available capital to bring a $20\%$ per year strategy's risk down to acceptable levels.
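As a rough sketch of the arithmetic (the target below is hypothetical): the levered excess return scales as leverage times (return minus risk-free rate), so the leverage needed to reach a target excess return is simply their ratio.
# Hypothetical target: lever a 5% strategy (2% risk-free) up to a 10% excess return
strat_return = 0.05
risk_free_rate = 0.02
target_excess_return = 0.10
required_leverage = target_excess_return / (strat_return - risk_free_rate)
print('Required leverage: {0:.2f}x'.format(required_leverage))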
Example
We'll show a simple example using real numbers.
End of explanation
"""
|
theandygross/HIV_Methylation
|
Benchmarks/Cell_Composition_Bechmark.ipynb
|
mit
|
import os
if os.getcwd().endswith('Benchmarks'):
os.chdir('..')
"""
Explanation: Exploration of Cell Composition
We know that cell composition is a key confounder when looking at changes in the methylome and how they relate to HIV-infected patients. It is well understood that individuals with HIV have lower CD4 counts; for this reason we have fairly comprehensive blood work for these cases. Unfortunately we do not have the same blood work for the controls. For this reason we are using the estimateCellCounts function in the R minfi package to get cell composition for all of our samples.
The estimateCellCounts function uses differentially methylated probes across different flow-sorted cell populations to estimate blood composition via mixture modeling. For this we need arrays measuring methylation of each cell population we want to estimate. While we do not have the exact same cell-types reported in our blood composition report, we can benchmark this method against those that we do have.
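As a toy illustration of the underlying idea only (this is not the minfi implementation; the reference profiles and mixture below are simulated): given methylation profiles for sorted cell types and a mixed sample, mixing proportions can be recovered with non-negative least squares.
import numpy as np
from scipy.optimize import nnls
n_probes, n_cell_types = 500, 6
reference = np.random.rand(n_probes, n_cell_types)       # simulated beta values per sorted cell type
true_props = np.random.dirichlet(np.ones(n_cell_types))  # hidden mixing proportions
mixed_sample = reference.dot(true_props) + np.random.normal(0, 0.01, n_probes)
est, _ = nnls(reference, mixed_sample)
est = est / est.sum()                                    # renormalize so the proportions sum to 1
print(np.round(true_props, 3))
print(np.round(est, 3))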
End of explanation
"""
import NotebookImport
from HIV_Age_Advancement import *
labs['CD4 Absolute'].hist()
"""
Explanation: Imports and helper functions from Imports notebook.
End of explanation
"""
duration = duration.ix[pred_c.index]
screen_feature(duration=='Control', kruskal_pandas, cell_counts.T, align=False)
fig, axs = subplots(2,1, figsize=(6,9), sharex=True)
o = ['Control','HIV Short','HIV Long']
violin_plot_pandas(duration, cell_counts.CD4T, order=o, ax=axs[0])
violin_plot_pandas(duration, cell_counts.CD8T, order=o, ax=axs[1])
for ax in axs:
prettify_ax(ax)
ax.set_xlabel('')
axs[0].set_ylabel('CD4T Cell Percentage')
axs[1].set_ylabel('CD8T Cell Percentage')
fig.tight_layout()
fig.savefig(FIGDIR + 'tcells.png', dpi=300)
((cell_counts.CD8T + .01) / (cell_counts.CD4T + .01)).groupby(duration=='Control').mean()
violin_plot_pandas(duration, np.log2((cell_counts.CD4T + .01) / (cell_counts.CD8T + .01)),
order=['Control','HIV Short','HIV Long'])
"""
Explanation: Cell Composition from estimateCellCounts
Looking for changes in cell composition with HIV
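A hedged sketch of the group comparison being run here, written directly against scipy (cell_counts and duration come from earlier in this notebook; it is analogous to the screen_feature call with kruskal_pandas, which screens all of the cell types at once):
from scipy.stats import kruskal
groups = [cell_counts.CD4T[duration == g].dropna() for g in ['Control', 'HIV Short', 'HIV Long']]
stat, p = kruskal(*groups)
print('Kruskal-Wallis H = {0:.2f}, p = {1:.3g}'.format(stat, p))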
End of explanation
"""
cell_types = ['Neutrophil %','Lymphocyte %','Monocyte %','Eosinophil %','Basophil %']
labs[cell_types].sum(1).order()
"""
Explanation: Cell Counts from the Labs
End of explanation
"""
keepers = labs.index.difference(['RG065','RG175','RG279','RA182','RM285'])
keepers = keepers.intersection(duration.index)
l3 = labs[cell_types].div(labs[cell_types].sum(1), axis=0) * 100
l3 = l3.ix[keepers]
reg_cd4 = linear_regression(labs['CD4 Absolute'].ix[keepers],
cell_counts.CD4T)
reg_cd8 = linear_regression(labs['CD8 Absolute'].ix[keepers],
cell_counts.CD8T)
import seaborn as sns
sns.set_context("paper", font_scale=1.5, rc={"lines.linewidth": 2.5})
sns.set_style("white")
fig, axs2 = subplots(2,3, figsize=(12,8))
axs = axs2[0]
ll = {'color':'black','alpha':.7, 'ls':'-'}
plot_regression((l3['Neutrophil %'] + l3['Eosinophil %'] + l3['Basophil %']).dropna(),
cell_counts.Gran.ix[keepers] * 100, ax=axs[0],
line_args=[ll,{'alpha':0}],
s=30, color='grey', alpha=1, edgecolor='black')
axs[0].set_xlabel('Estimated granulocytes (%)')
axs[0].set_ylabel('Measured granulocytes (%)');
plot_regression(l3['Lymphocyte %'].ix[keepers],
cell_counts[['Bcell','NK','CD4T','CD8T']].sum(1) * 100,
ax=axs[1], line_args=[ll,{'alpha':0}],
s=30, color='grey', alpha=1, edgecolor='black')
axs[1].set_xlabel('Estimated lymphocytes (%)')
axs[1].set_ylabel('Measured lymphocytes (%)');
plot_regression(l3['Monocyte %'].ix[keepers], cell_counts['Mono'] * 100,
ax=axs[2], line_args=[ll,{'alpha':0}],
s=30, color='grey', alpha=1, edgecolor='black')
axs[2].set_xlabel('Estimated monocytes (%)')
axs[2].set_ylabel('Measured monocytes (%)')
axs = axs2[1]
plot_regression((labs['CD4 %'].ix[keepers] * l3['Lymphocyte %']) / 100.,
cell_counts.CD4T * 100,
ax=axs[0], line_args=[ll,{'alpha':0}],
s=30, color='grey', alpha=1, edgecolor='black')
axs[0].set_xlabel('Estimated CD4 T cells (%)')
axs[0].set_ylabel('Measured CD4 T cells (%)')
plot_regression((labs['CD8 %'].ix[keepers] * l3['Lymphocyte %']) / 100.,
cell_counts.CD8T * 100,
ax=axs[1], line_args=[ll,{'alpha':0}],
s=30, color='grey', alpha=1, edgecolor='black')
axs[1].set_xlabel('Estimated CD8 T cells (%)')
axs[1].set_ylabel('Measured CD8 T cells (%)')
plot_regression(np.log2(labs['CD4/CD8 ratio'].ix[keepers]),
np.log2(((cell_counts.CD4T + .01) / (cell_counts.CD8T + .01))),
ax=axs[2], line_args=[ll,{'alpha':0}],
s=30, color='grey', alpha=1, edgecolor='black')
axs[2].set_ylabel('Log2 CD4/CD8 (estimated)')
axs[2].set_xlabel('Log2 CD4/CD8 (measured)')
axs[2].set_xbound(-5.5,3)
axs[2].set_ybound(-7.5,3)
letters = list(map(chr, range(97, 123)))[:6]
for i,ax in enumerate(axs2.flatten()):
ax.text(-0.1, 1.1, letters[i], transform=ax.transAxes,
fontsize=20, fontweight='bold', va='top', ha='right')
prettify_ax(ax)
fig.tight_layout()
fig.savefig(FIGDIR + 'figS1.pdf', transparent=False)
c1 = (labs['CD4 %'].ix[keepers] * l3['Lymphocyte %']) / 100.
c2 = cell_counts.CD4T * 100
fig, ax = subplots(figsize=(2.5,2.5))
series_scatter((c1).ix[ti(c2 < 2)], cell_counts.NK * 100, ax=ax)
prettify_ax(ax)
ax.set_xlabel('CD4 (measured)')
ax.set_ylabel('NK (estimated)')
fig.tight_layout()
fig.savefig(FIGDIR + 'inset.png', dpi=300)
"""
Explanation: Dropping five patients because they don't look kosher
Renormalizing cell percentages because some don't sum to 100%
End of explanation
"""
betas = pd.read_hdf(HDFS_DIR + 'dx_methylation.h5', 'betas')
betas = betas['s2'].ix[:, duration.index]
betas = betas.groupby(level=0).first()
flow_sorted_data = pd.read_hdf(HDFS_DIR + 'methylation_annotation.h5','flow_sorted_data')
cell_type = pd.read_hdf(HDFS_DIR + 'methylation_annotation.h5', 'label_map')
n2 = flow_sorted_data.groupby(cell_type, axis=1).median()
avg = n2[cell_counts.columns].dot(cell_counts.T)
b1 = logit_adj(betas).T.corrwith(cell_counts.CD8T)
b2 = logit_adj(df_hiv).ix[:, betas.columns].T.corrwith(cell_counts.CD8T)
ss = avg.std(1)
b1 = b1.groupby(level=0).first()
b2 = b2.groupby(level=0).first()
idx = b1.abs() > .01
(b1[idx].abs() < b2[idx].abs()).value_counts()
from scipy.stats import binom_test
binom_test((b1.abs() < b2.abs()).value_counts())
fig, axs = subplots(1,2, figsize=(9,3))
ss.clip_upper(.05).hist(bins=30, ax=axs[0])
axs[0].set_xticks([0,.01,.02,.03,.04,.05])
axs[0].set_xticklabels([0,.01,.02,.03,.04,'0.05+'])
axs[0].set_xlabel('adjustment magnitude')
axs[0].set_ylabel('# of patients')
axs[1].hexbin(*match_series(b1.abs(), b2.abs()), gridsize=50,
bins='log')
axs[1].set_aspect(1)
axs[1].plot([0,.9],[0,.9], color='grey', lw=4, ls='--', dash_capstyle='round')
axs[1].set_xlabel('CD8T correlation \n(pre-adjustment)')
axs[1].set_ylabel('CD8T correlation \n(post-adjustment)')
for ax in axs:
prettify_ax(ax)
fig.tight_layout()
flow_sorted_data.T.groupby(cell_type == 'CD8T').mean().diff().ix[1].order().ix[500:].head(10)
p = 'cg19163395'
diff = avg.ix[p].ix[duration.index] - avg.ix[p].median()
diff.hist()
series_scatter(diff, cell_counts.CD8T)
fig, axs = subplots(1,2, figsize=(9,3))
cell_counts.CD8T.ix[duration.index].hist(color='grey', ax=axs[0])
axs[0].set_xlabel('% CD8T (estimated)')
axs[0].set_xticks([0,.1,.2,.3])
axs[0].set_ylabel('# of patients')
box_plot_pandas(cell_type[cell_type.isin(cell_counts.columns)],
flow_sorted_data.ix[p], ax=axs[1],
order=['CD8T','NK','Bcell','CD4T','Gran','Mono'])
axs[0].set_xlim(-.02,.4)
for ax in axs:
prettify_ax(ax)
fig.tight_layout()
fig.savefig(FIGDIR + 'Adjust_ab.pdf')
fig, axs = subplots(1,2, figsize=(9,3), sharey=True)
series_scatter(cell_counts.CD8T, betas.ix[p].ix[diff.index],
ax=axs[0], color=colors[1], alpha=.7, s=15)
series_scatter(cell_counts.CD8T, betas.ix[p].ix[diff.index] - diff,
ax=axs[1], color=colors[1], alpha=.7, s=15)
for ax in axs:
prettify_ax(ax)
ax.set_xlabel('% CD8T (estimated)')
ax.set_xticks([0,.1,.2,.3])
ax.set_xlim(-.02,.4)
fig.tight_layout()
fig.savefig(FIGDIR + 'Adjust_cd.pdf')
fig, axs = subplots(1,2, figsize=(9,4))
ss.clip_upper(.05).hist(bins=30, ax=axs[0])
axs[0].set_xticks([0,.01,.02,.03,.04,.05])
axs[0].set_xticklabels([0,.01,.02,.03,.04,'0.05+'])
axs[0].set_xlabel('adjustment magnitude')
axs[0].set_ylabel('# of probes')
axs[1].hexbin(*match_series(b1.abs(), b2.abs()), gridsize=50,
bins='log')
axs[1].set_aspect(1)
axs[1].plot([0,.9],[0,.9], color='grey', lw=4, ls='--', dash_capstyle='round')
axs[1].set_xlabel('CD8T correlation \n(pre-adjustment)')
axs[1].set_ylabel('CD8T correlation \n(post-adjustment)')
for ax in axs:
prettify_ax(ax)
fig.tight_layout()
fig.savefig(FIGDIR + 'Adjust_ef.pdf')
fig, axs = subplots(1,2, figsize=(4,3), sharey=True)
hiv = (duration=='Control').map({True:'HIV+',False:'HIV-'})
hiv.name = ''
box_plot_pandas(hiv, betas.ix[p], ax=axs[0])
box_plot_pandas(hiv, (betas - diff).ix[p], ax=axs[1])
for ax in axs:
prettify_ax(ax)
ax.set_xticklabels(['HIV-','HIV+'])
fig.tight_layout()
from Benchmarks.Model_Comparison_MF import *
pts = mc_adj_c.index
rr = screen_feature(age.ix[pts], pearson_pandas, cell_counts.T, align=False)
rr
pts = mc_adj_c.index
rr = screen_feature(mc_adj_c.ix[pts] - age, pearson_pandas, cell_counts.T, align=False)
rr
fig, ax = subplots(figsize=(5,3))
rr.rho.plot(kind='bar', ax=ax)
ax.set_ylabel('correlation with age')
ax.set_xlim(-.5, 5.5)
prettify_ax(ax)
fig, ax = subplots(figsize=(5,3))
rr.rho.plot(kind='bar', ax=ax)
ax.set_ylabel('correlation with age')
ax.set_xlim(-.5, 5.5)
prettify_ax(ax)
"""
Explanation: Mockup of Cell-Composition Adjustment
Real adjustment is done in BMIQ_Normalization notebook.
End of explanation
"""
|
ES-DOC/esdoc-jupyterhub
|
notebooks/messy-consortium/cmip6/models/sandbox-1/toplevel.ipynb
|
gpl-3.0
|
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'messy-consortium', 'sandbox-1', 'toplevel')
"""
Explanation: ES-DOC CMIP6 Model Properties - Toplevel
MIP Era: CMIP6
Institute: MESSY-CONSORTIUM
Source ID: SANDBOX-1
Sub-Topics: Radiative Forcings.
Properties: 85 (42 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:10
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
"""
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Authors
Set document authors
End of explanation
"""
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Contributors
Specify document contributors
End of explanation
"""
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
"""
Explanation: Document Publication
Specify document publication status
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Flux Correction
3. Key Properties --> Genealogy
4. Key Properties --> Software Properties
5. Key Properties --> Coupling
6. Key Properties --> Tuning Applied
7. Key Properties --> Conservation --> Heat
8. Key Properties --> Conservation --> Fresh Water
9. Key Properties --> Conservation --> Salt
10. Key Properties --> Conservation --> Momentum
11. Radiative Forcings
12. Radiative Forcings --> Greenhouse Gases --> CO2
13. Radiative Forcings --> Greenhouse Gases --> CH4
14. Radiative Forcings --> Greenhouse Gases --> N2O
15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
17. Radiative Forcings --> Greenhouse Gases --> CFC
18. Radiative Forcings --> Aerosols --> SO4
19. Radiative Forcings --> Aerosols --> Black Carbon
20. Radiative Forcings --> Aerosols --> Organic Carbon
21. Radiative Forcings --> Aerosols --> Nitrate
22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
24. Radiative Forcings --> Aerosols --> Dust
25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
27. Radiative Forcings --> Aerosols --> Sea Salt
28. Radiative Forcings --> Other --> Land Use
29. Radiative Forcings --> Other --> Solar
1. Key Properties
Key properties of the model
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Top level overview of coupled model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of coupled model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.flux_correction.details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2. Key Properties --> Flux Correction
Flux correction properties of the model
2.1. Details
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how flux corrections are applied in the model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.year_released')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 3. Key Properties --> Genealogy
Genealogy and history of the model
3.1. Year Released
Is Required: TRUE Type: STRING Cardinality: 1.1
Year the model was released
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.CMIP3_parent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 3.2. CMIP3 Parent
Is Required: FALSE Type: STRING Cardinality: 0.1
CMIP3 parent if any
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.CMIP5_parent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 3.3. CMIP5 Parent
Is Required: FALSE Type: STRING Cardinality: 0.1
CMIP5 parent if any
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.previous_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 3.4. Previous Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Previously known as
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4. Key Properties --> Software Properties
Software properties of model
4.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.components_structure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4.4. Components Structure
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how model realms are structured into independent software components (coupled via a coupler) and internal software components.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.coupler')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OASIS"
# "OASIS3-MCT"
# "ESMF"
# "NUOPC"
# "Bespoke"
# "Unknown"
# "None"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 4.5. Coupler
Is Required: FALSE Type: ENUM Cardinality: 0.1
Overarching coupling framework for model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5. Key Properties --> Coupling
**
5.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of coupling in the model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_double_flux')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 5.2. Atmosphere Double Flux
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the atmosphere passing a double flux to the ocean and sea ice (as opposed to a single one)?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_fluxes_calculation_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Atmosphere grid"
# "Ocean grid"
# "Specific coupler grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 5.3. Atmosphere Fluxes Calculation Grid
Is Required: FALSE Type: ENUM Cardinality: 0.1
Where are the air-sea fluxes calculated
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_relative_winds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 5.4. Atmosphere Relative Winds
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are relative or absolute winds used to compute the flux? I.e. do ocean surface currents enter the wind stress calculation?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6. Key Properties --> Tuning Applied
Tuning methodology for model
6.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics/diagnostics retained. Document the relative weight given to climate performance metrics/diagnostics versus process oriented metrics/diagnostics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.2. Global Mean Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List set of metrics/diagnostics of the global mean state used in tuning model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.3. Regional Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List of regional metrics/diagnostics of mean state (e.g THC, AABW, regional means etc) used in tuning model/component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.4. Trend Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List observed trend metrics/diagnostics used in tuning model/component (such as 20th century)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.energy_balance')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.5. Energy Balance
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how energy balance was obtained in the full system: in the various components independently or at the components coupling stage?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.fresh_water_balance')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.6. Fresh Water Balance
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how fresh_water balance was obtained in the full system: in the various components independently or at the components coupling stage?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.global')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7. Key Properties --> Conservation --> Heat
Global heat conservation properties of the model
7.1. Global
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how heat is conserved globally
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7.2. Atmos Ocean Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the atmosphere/ocean coupling interface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_land_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7.3. Atmos Land Interface
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how heat is conserved at the atmosphere/land coupling interface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_sea-ice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7.4. Atmos Sea-ice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the atmosphere/sea-ice coupling interface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7.5. Ocean Seaice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the ocean/sea-ice coupling interface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.land_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7.6. Land Ocean Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the land/ocean coupling interface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.global')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8. Key Properties --> Conservation --> Fresh Water
Global fresh water conservation properties of the model
8.1. Global
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how fresh_water is conserved globally
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.2. Atmos Ocean Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how fresh_water is conserved at the atmosphere/ocean coupling interface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_land_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.3. Atmos Land Interface
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how fresh water is conserved at the atmosphere/land coupling interface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_sea-ice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.4. Atmos Sea-ice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how fresh water is conserved at the atmosphere/sea-ice coupling interface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.5. Ocean Seaice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how fresh water is conserved at the ocean/sea-ice coupling interface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.runoff')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.6. Runoff
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how runoff is distributed and conserved
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.iceberg_calving')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.7. Iceberg Calving
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how iceberg calving is modeled and conserved
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.endoreic_basins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.8. Endoreic Basins
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how endoreic basins (no ocean access) are treated
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.snow_accumulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.9. Snow Accumulation
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how snow accumulation over land and over sea-ice is treated
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.salt.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9. Key Properties --> Conservation --> Salt
Global salt conservation properties of the model
9.1. Ocean Seaice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how salt is conserved at the ocean/sea-ice coupling interface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.momentum.details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 10. Key Properties --> Conservation --> Momentum
Global momentum conservation properties of the model
10.1. Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how momentum is conserved in the model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11. Radiative Forcings
Radiative forcings of the model for historical and scenario (aka Table 12.1 IPCC AR5)
11.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of radiative forcings (GHG and aerosols) implementation in model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 12. Radiative Forcings --> Greenhouse Gases --> CO2
Carbon dioxide forcing
12.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 12.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CH4.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13. Radiative Forcings --> Greenhouse Gases --> CH4
Methane forcing
13.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CH4.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 13.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.N2O.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 14. Radiative Forcings --> Greenhouse Gases --> N2O
Nitrous oxide forcing
14.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.N2O.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 14.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.tropospheric_O3.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
Tropospheric ozone forcing
15.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.tropospheric_O3.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.stratospheric_O3.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
Stratospheric ozone forcing
16.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.stratospheric_O3.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 16.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17. Radiative Forcings --> Greenhouse Gases --> CFC
Ozone-depleting and non-ozone-depleting fluorinated gases forcing
17.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.equivalence_concentration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "Option 1"
# "Option 2"
# "Option 3"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.2. Equivalence Concentration
Is Required: TRUE Type: ENUM Cardinality: 1.1
Details of any equivalence concentrations used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.3. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.SO4.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 18. Radiative Forcings --> Aerosols --> SO4
SO4 aerosol forcing
18.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.SO4.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 18.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.black_carbon.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 19. Radiative Forcings --> Aerosols --> Black Carbon
Black carbon aerosol forcing
19.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.black_carbon.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 19.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.organic_carbon.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 20. Radiative Forcings --> Aerosols --> Organic Carbon
Organic carbon aerosol forcing
20.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.organic_carbon.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 20.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.nitrate.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 21. Radiative Forcings --> Aerosols --> Nitrate
Nitrate forcing
21.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.nitrate.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 21.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
Cloud albedo effect forcing (RFaci)
22.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.aerosol_effect_on_ice_clouds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 22.2. Aerosol Effect On Ice Clouds
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Radiative effects of aerosols on ice clouds are represented?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 22.3. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
Cloud lifetime effect forcing (ERFaci)
23.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.aerosol_effect_on_ice_clouds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 23.2. Aerosol Effect On Ice Clouds
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Radiative effects of aerosols on ice clouds are represented?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.RFaci_from_sulfate_only')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 23.3. RFaci From Sulfate Only
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Radiative forcing from aerosol cloud interactions from sulfate aerosol only?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 23.4. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.dust.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 24. Radiative Forcings --> Aerosols --> Dust
Dust forcing
24.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.dust.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 24.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
Tropospheric volcanic forcing
25.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.historical_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 25.2. Historical Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in historical simulations
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.future_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 25.3. Future Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in future simulations
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 25.4. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
Stratospheric volcanic forcing
26.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.historical_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 26.2. Historical Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in historical simulations
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.future_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 26.3. Future Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in future simulations
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 26.4. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.sea_salt.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 27. Radiative Forcings --> Aerosols --> Sea Salt
Sea salt forcing
27.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.sea_salt.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 27.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 28. Radiative Forcings --> Other --> Land Use
Land use forcing
28.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.crop_change_only')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 28.2. Crop Change Only
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Land use change represented via crop change only?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 28.3. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.solar.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "irradiance"
# "proton"
# "electron"
# "cosmic ray"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 29. Radiative Forcings --> Other --> Solar
Solar forcing
29.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How solar forcing is provided
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.solar.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 29.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
|
tpin3694/tpin3694.github.io
|
python/how_to_use_default_dicts.ipynb
|
mit
|
import collections
"""
Explanation: Title: How To Use Default Dicts
Slug: how_to_use_default_dicts
Summary: How To Use Default Dicts in Python.
Date: 2016-01-23 12:00
Category: Python
Tags: Basics
Authors: Chris Albon
Interesting in learning more? Check out Fluent Python
Preliminaries
End of explanation
"""
# Create a defaultdict with the default value of 0 (int's default value is 0)
arrests = collections.defaultdict(int)
"""
Explanation: Create A DefaultDict
Default Dicts work just like regular dictionaries, except that when a key is called that doesn't have a value, a default value (note: value, not key) is supplied.
End of explanation
"""
# Add an entry of a person with 10 arrests
arrests['Sarah Miller'] = 10
# View dictionary
arrests
"""
Explanation: Add A New Key With A Value
End of explanation
"""
# Add an entry of a person with no value for arrests,
# thus the default value is used
arrests['Bill James']
# View dictionary
arrests
"""
Explanation: Add A New Key Without A Value
End of explanation
"""
|
ChadFulton/statsmodels
|
examples/notebooks/regression_plots.ipynb
|
bsd-3-clause
|
%matplotlib inline
from __future__ import print_function
from statsmodels.compat import lzip
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import statsmodels.api as sm
from statsmodels.formula.api import ols
"""
Explanation: Regression Plots
End of explanation
"""
prestige = sm.datasets.get_rdataset("Duncan", "carData", cache=True).data
prestige.head()
prestige_model = ols("prestige ~ income + education", data=prestige).fit()
print(prestige_model.summary())
"""
Explanation: Duncan's Prestige Dataset
Load the Data
We can use a utility function to load any R dataset available from the great <a href="https://vincentarelbundock.github.io/Rdatasets/">Rdatasets package</a>.
End of explanation
"""
fig, ax = plt.subplots(figsize=(12,8))
fig = sm.graphics.influence_plot(prestige_model, ax=ax, criterion="cooks")
"""
Explanation: Influence plots
Influence plots show the (externally) studentized residuals vs. the leverage of each observation as measured by the hat matrix.
Externally studentized residuals are residuals that are scaled by their standard deviation where
$$var(\hat{\epsilon}_i)=\hat{\sigma}^2_i(1-h_{ii})$$
with
$$\hat{\sigma}^2_i=\frac{1}{n - p - 1}\sum_{j \neq i}^{n}\hat{\epsilon}_j^2$$
$n$ is the number of observations and $p$ is the number of regressors. $h_{ii}$ is the $i$-th diagonal element of the hat matrix
$$H=X(X^{\;\prime}X)^{-1}X^{\;\prime}$$
The influence of each point can be visualized by the criterion keyword argument. Options are Cook's distance and DFFITS, two measures of influence.
End of explanation
"""
fig, ax = plt.subplots(figsize=(12,8))
fig = sm.graphics.plot_partregress("prestige", "income", ["income", "education"], data=prestige, ax=ax)
fix, ax = plt.subplots(figsize=(12,14))
fig = sm.graphics.plot_partregress("prestige", "income", ["education"], data=prestige, ax=ax)
"""
Explanation: As you can see there are a few worrisome observations. Both contractor and reporter have low leverage but a large residual. <br />
RR.engineer has small residual and large leverage. Conductor and minister have both high leverage and large residuals, and, <br />
therefore, large influence.
Partial Regression Plots
Since we are doing multivariate regressions, we cannot just look at individual bivariate plots to discern relationships. <br />
Instead, we want to look at the relationship of the dependent variable and independent variables conditional on the other <br />
independent variables. We can do this through using partial regression plots, otherwise known as added variable plots. <br />
In a partial regression plot, to discern the relationship between the response variable and the $k$-th variable, we compute <br />
the residuals by regressing the response variable versus the independent variables excluding $X_k$. We can denote this by <br />
$X_{\sim k}$. We then compute the residuals by regressing $X_k$ on $X_{\sim k}$. The partial regression plot is the plot <br />
of the former versus the latter residuals. <br />
The notable points of this plot are that the fitted line has slope $\beta_k$ and intercept zero. The residuals of this plot <br />
are the same as those of the least squares fit of the original model with full $X$. You can discern the effects of the <br />
individual data values on the estimation of a coefficient easily. If obs_labels is True, then these points are annotated <br />
with their observation label. You can also see the violation of underlying assumptions such as homoscedasticity and <br />
linearity.
End of explanation
"""
subset = ~prestige.index.isin(["conductor", "RR.engineer", "minister"])
prestige_model2 = ols("prestige ~ income + education", data=prestige, subset=subset).fit()
print(prestige_model2.summary())
"""
Explanation: As you can see the partial regression plot confirms the influence of conductor, minister, and RR.engineer on the partial relationship between income and prestige. The cases greatly decrease the effect of income on prestige. Dropping these cases confirms this.
End of explanation
"""
fig = plt.figure(figsize=(12,8))
fig = sm.graphics.plot_partregress_grid(prestige_model, fig=fig)
"""
Explanation: For a quick check of all the regressors, you can use plot_partregress_grid. These plots will not label the <br />
points, but you can use them to identify problems and then use plot_partregress to get more information.
End of explanation
"""
fig, ax = plt.subplots(figsize=(12, 8))
fig = sm.graphics.plot_ccpr(prestige_model, "education", ax=ax)
"""
Explanation: Component-Component plus Residual (CCPR) Plots
The CCPR plot provides a way to judge the effect of one regressor on the <br />
response variable by taking into account the effects of the other <br />
independent variables. The partial residuals plot is defined as <br />
$\text{Residuals} + B_iX_i \text{ }\text{ }$ versus $X_i$. The component adds $B_iX_i$ versus <br />
$X_i$ to show where the fitted line would lie. Care should be taken if $X_i$ <br />
is highly correlated with any of the other independent variables. If this <br />
is the case, the variance evident in the plot will be an underestimate of <br />
the true variance.
End of explanation
"""
fig = plt.figure(figsize=(12, 8))
fig = sm.graphics.plot_ccpr_grid(prestige_model, fig=fig)
"""
Explanation: As you can see the relationship between the variation in prestige explained by education conditional on income seems to be linear, though you can see there are some observations that are exerting considerable influence on the relationship. We can quickly look at more than one variable by using plot_ccpr_grid.
End of explanation
"""
fig = plt.figure(figsize=(12,8))
fig = sm.graphics.plot_regress_exog(prestige_model, "education", fig=fig)
"""
Explanation: Regression Plots
The plot_regress_exog function is a convenience function that gives a 2x2 plot containing the dependent variable and fitted values with confidence intervals vs. the independent variable chosen, the residuals of the model vs. the chosen independent variable, a partial regression plot, and a CCPR plot. This function can be used for quickly checking modeling assumptions with respect to a single regressor.
End of explanation
"""
fig, ax = plt.subplots(figsize=(12, 8))
fig = sm.graphics.plot_fit(prestige_model, "education", ax=ax)
"""
Explanation: Fit Plot
The plot_fit function plots the fitted values versus a chosen independent variable. It includes prediction confidence intervals and optionally plots the true dependent variable.
End of explanation
"""
#dta = pd.read_csv("http://www.stat.ufl.edu/~aa/social/csv_files/statewide-crime-2.csv")
#dta = dta.set_index("State").dropna()
#dta.rename(columns={"VR" : "crime",
# "MR" : "murder",
# "M" : "pctmetro",
# "W" : "pctwhite",
# "H" : "pcths",
# "P" : "poverty",
# "S" : "single"
# }, inplace=True)
#
#crime_model = ols("murder ~ pctmetro + poverty + pcths + single", data=dta).fit()
dta = sm.datasets.statecrime.load_pandas().data
crime_model = ols("murder ~ urban + poverty + hs_grad + single", data=dta).fit()
print(crime_model.summary())
"""
Explanation: Statewide Crime 2009 Dataset
Compare the following to http://www.ats.ucla.edu/stat/stata/webbooks/reg/chapter4/statareg_self_assessment_answers4.htm
Though the data here is not the same as in that example. You could run that example by uncommenting the necessary cells below.
End of explanation
"""
fig = plt.figure(figsize=(12,8))
fig = sm.graphics.plot_partregress_grid(crime_model, fig=fig)
fig, ax = plt.subplots(figsize=(12,8))
fig = sm.graphics.plot_partregress("murder", "hs_grad", ["urban", "poverty", "single"], ax=ax, data=dta)
"""
Explanation: Partial Regression Plots
End of explanation
"""
fig, ax = plt.subplots(figsize=(8,6))
fig = sm.graphics.plot_leverage_resid2(crime_model, ax=ax)
"""
Explanation: Leverage-Resid<sup>2</sup> Plot
Closely related to the influence_plot is the leverage-resid<sup>2</sup> plot.
End of explanation
"""
fig, ax = plt.subplots(figsize=(8,6))
fig = sm.graphics.influence_plot(crime_model, ax=ax)
"""
Explanation: Influence Plot
End of explanation
"""
from statsmodels.formula.api import rlm
rob_crime_model = rlm("murder ~ urban + poverty + hs_grad + single", data=dta,
M=sm.robust.norms.TukeyBiweight(3)).fit(conv="weights")
print(rob_crime_model.summary())
#rob_crime_model = rlm("murder ~ pctmetro + poverty + pcths + single", data=dta, M=sm.robust.norms.TukeyBiweight()).fit(conv="weights")
#print(rob_crime_model.summary())
"""
Explanation: Using robust regression to correct for outliers.
Part of the problem here in recreating the Stata results is that M-estimators are not robust to leverage points. MM-estimators should do better with this example.
End of explanation
"""
weights = rob_crime_model.weights
idx = weights > 0
X = rob_crime_model.model.exog[idx.values]
ww = weights[idx] / weights[idx].mean()
hat_matrix_diag = ww*(X*np.linalg.pinv(X).T).sum(1)
resid = rob_crime_model.resid
resid2 = resid**2
resid2 /= resid2.sum()
nobs = int(idx.sum())
hm = hat_matrix_diag.mean()
rm = resid2.mean()
from statsmodels.graphics import utils
fig, ax = plt.subplots(figsize=(12,8))
ax.plot(resid2[idx], hat_matrix_diag, 'o')
ax = utils.annotate_axes(range(nobs), labels=rob_crime_model.model.data.row_labels[idx],
points=lzip(resid2[idx], hat_matrix_diag), offset_points=[(-5,5)]*nobs,
size="large", ax=ax)
ax.set_xlabel("resid2")
ax.set_ylabel("leverage")
ylim = ax.get_ylim()
ax.vlines(rm, *ylim)
xlim = ax.get_xlim()
ax.hlines(hm, *xlim)
ax.margins(0,0)
"""
Explanation: There isn't yet an influence diagnostics method as part of RLM, but we can recreate them. (This depends on the status of issue #888)
End of explanation
"""
|
GoogleCloudDataproc/spark-bigquery-connector
|
examples/notebooks/Distribute_Generic_Functions.ipynb
|
apache-2.0
|
%reload_ext google.cloud.bigquery
%%bigquery pd_results --use_bqstorage_api
SELECT original_url, title
FROM `bigquery-public-data.open_images.images`
WHERE license = 'https://creativecommons.org/licenses/by/2.0/'
LIMIT 10
#review what our image database contains.
import pandas as pd
pd.set_option('display.max_colwidth', None)
pd_results.head()
"""
Explanation: <h1>Distribute functions across a BigQuery dataset using Spark</h1>
<h3>Problem: As a PM, I give lots of public presentations and I want to make sure I use images that have an open license</h3>
<img src="https://storage.googleapis.com/kaggle-datasets-images/21870/28108/ef05bb21ad2ece2c6af034ceced30239/dataset-cover.jpeg?t=2018-04-16-23-00-50" height=400 width=400>
<h4> BigQuery Public Datasets - Open Images: 9 million URLs of open images (with labels across 6,000 categories)</h4>
<h3>For smaller datasets, we can use BigQuery magic and Python</h3>
End of explanation
"""
# function that makes it super easy to extract some high-confidence labels from my image.
from google.cloud import vision
def AnnotateHighConfidenceLabelsFromImage(image_uri):
client = vision.ImageAnnotatorClient()
request = {
'image': {
'source': {'image_uri': image_uri},
},
}
response = client.annotate_image(request)
high_confidence_labels = []
for la in response.label_annotations:
if float(la.score * 100) > 90.0:
high_confidence_labels.append(la.description)
if len(high_confidence_labels) < 1:
high_confidence_labels.append("No labels detected")
return str(high_confidence_labels)
#for 10 images, no problem to simply loop through them to get the labels.
for image in pd_results['original_url']:
labels = AnnotateHighConfidenceLabelsFromImage(image)
print(labels)
"""
Explanation: Looks like a great set of images but how do I find what I need? What's a DSC-4918?
End of explanation
"""
#but what happens when I need to run that label extractor against the full dataset of images.
no_limit_query = "SELECT original_url, title FROM `bigquery-public-data.open_images.images` WHERE license = 'https://creativecommons.org/licenses/by/2.0/' LIMIT 100"
# use Spark to load full dataset into Spark Dataframe. Setup Spark Session with BQ storage connector
from pyspark.sql import SparkSession
spark = SparkSession.builder.config("spark.jars.packages", "com.google.cloud.spark:spark-bigquery-with-dependencies_2." + str(12) + ":0.18.0") \
.enableHiveSupport() \
.getOrCreate()
#Use this function to push the processing of the query back to BQ but still use BQ Storage Connector to
#pull back data in parallel and directly into a Spark DF that can handle the size.
from google.cloud import bigquery
from pyspark import StorageLevel
def bq2df(QUERY):
bq = bigquery.Client()
query_job = bq.query(QUERY)
query_job.result()
df = spark.read.format('bigquery') \
.option('dataset', query_job.destination.dataset_id) \
.load(query_job.destination.table_id) \
.persist(StorageLevel.MEMORY_AND_DISK)
return df
df = bq2df(no_limit_query)
print(df.count())
df.printSchema()
#I'm now going to Spark-ify my python function with no code changes.
from pyspark.sql.functions import udf
@udf("string")
def AnnotateHighConfidenceLabelsFromImage_UDF(image_uri):
from google.cloud import vision
client = vision.ImageAnnotatorClient()
request = {
'image': {
'source': {'image_uri': image_uri},
},
}
response = client.annotate_image(request)
high_confidence_labels = []
for la in response.label_annotations:
if float(la.score * 100) > 90.0:
high_confidence_labels.append(la.description)
if len(high_confidence_labels) < 1:
high_confidence_labels.append("No labels detected")
return str(high_confidence_labels)
df_results = df.select("original_url", "title",\
AnnotateHighConfidenceLabelsFromImage_UDF("original_url").alias("labels"))\
.cache()
#at this point, might make sense to save this table out to my hive metastore to avoid re-processing all the images
#df_results.write.saveAsTable("HighConfidenceLabelsAndImages")
df_results.show(10, truncate=False)
from pyspark.sql.functions import col
df_results.where(col("labels").contains("Bird")).show(truncate=False)
"""
Explanation: <h3>Expanding to the full corpus of images will require scaling with Spark</h3>
End of explanation
"""
|
radhikapc/foundation-homework
|
homework05/Homework05_Spotify_radhika_graded.ipynb
|
mit
|
import requests
Lil_response = requests.get('https://api.spotify.com/v1/search?query=Lil&type=artist&limit=50&country=US')
Lil_data = Lil_response.json()
#Lil_data
Lil_data.keys()
Lil_data['artists'].keys()
Lil_artists = Lil_data['artists']['items']
"""
Explanation: Grade: 6 / 8 -- check for TA-COMMENTS
End of explanation
"""
#With "Lil Wayne" and "Lil Kim" there are a lot of "Lil" musicians. Do a search and print a list of 50
#that are playable in the USA (or the country of your choice), along with their popularity score.
count =0
for artist in Lil_artists:
count += 1
print(count,".", artist['name'],"has the popularity of", artist['popularity'])
"""
Explanation: 1. Searching and Printing a List of 50 'Lil' Musicians
With "Lil Wayne" and "Lil Kim" there are a lot of "Lil" musicians. Do a search and print a list of 50 that are playable in the USA (or the country of your choice), along with their popularity score.
End of explanation
"""
# What genres are most represented in the search results? Edit your previous printout to also display a list of their genres
#in the format "GENRE_1, GENRE_2, GENRE_3". If there are no genres, print "No genres listed".
#Tip: "how to join a list Python" might be a helpful search
# if len(artist['genres']) == 0 )
# print ("no genres")
# else:
# genres = ", ".join(artist['genres'])
genre_list = []
genre_loop = Lil_data['artists']['items']
for item in genre_loop:
#print(item['genres'])
item_gen = item['genres']
for i in item_gen:
genre_list.append(i)
#print(sorted(genre_list))
#COUNTING the most
genre_counter = {}
for word in genre_list:
if word in genre_counter:
genre_counter[word] += 1
else:
genre_counter[word] = 1
popular_genre = sorted(genre_counter, key = genre_counter.get, reverse = True)
top_genre = popular_genre[:1]
print("The genre most represented is", top_genre)
#COUNTING the most with count to confirm
from collections import Counter
count = Counter(genre_list)
most_count = count.most_common(1)
print("The genre most represented and the count are", most_count)
print("-----------------------------------------------------")
for artist in Lil_artists:
num_genres = 'no genres listed'
if len(artist['genres']) > 0:
num_genres= str.join(',', (artist['genres']))
print(artist['name'],"has the popularity of", artist['popularity'], ", and has", num_genres, "under genres")
"""
Explanation: 2 Genres Most Represented in the Search Results
What genres are most represented in the search results? Edit your previous printout to also display a list of their genres in the format "GENRE_1, GENRE_2, GENRE_3". If there are no genres, print "No genres listed".
End of explanation
"""
Lil_response = requests.get('https://api.spotify.com/v1/search?query=Lil&type=artist&limit=50&country=US')
Lil_data = Lil_response.json()
#Lil_data
"""
Explanation: More Spotify - LIL' GRAPHICS
Use Excel, Illustrator or something like https://infogr.am/ to make a graphic about the Lil's, or the Lil's vs. the Biggies.
Just a simple bar graph of their various popularities sounds good to me.
Link to the Line Graph of Lil's Popularity chart
Lil Popularity Graph
End of explanation
"""
#Use a for loop to determine who BESIDES Lil Wayne has the highest popularity rating.
#Is it the same artist who has the largest number of followers?
name_highest = ""
name_follow =""
second_high_pop = 0
highest_pop = 0
high_follow = 0
for artist in Lil_artists:
if (highest_pop < artist['popularity']) & (artist['name'] != "Lil Wayne"):
#second_high_pop = highest_pop
#name_second = artist['name']
highest_pop = artist['popularity']
name_highest = artist['name']
if (high_follow < artist['followers']['total']):
high_follow = artist ['followers']['total']
name_follow = artist['name']
#print(artist['followers']['total'])
print(name_highest, "has the second highest popularity, which is", highest_pop)
print(name_follow, "has the highest number of followers:", high_follow)
#print("the second highest popularity is", second_high_pop)
Lil_response = requests.get('https://api.spotify.com/v1/search?query=Lil&type=artist&limit=50&country=US')
Lil_data = Lil_response.json()
#Lil_data
"""
Explanation: The Second Most Popular Artist
Use a for loop to determine who BESIDES Lil Wayne has the highest popularity rating. Is it the same artist who has the largest number of followers?
End of explanation
"""
Lil_artists = Lil_data['artists']['items']
#Print a list of Lil's that are more popular than Lil' Kim.
count = 0
for artist in Lil_artists:
if artist['popularity'] > 62:
count+=1
print(count, artist['name'],"has the popularity of", artist['popularity'])
#else:
#print(artist['name'], "is less popular with a score of", artist['popularity'])
"""
Explanation: 4. List of Lil's More Popular Than Lil' Kim
End of explanation
"""
response = requests.get("https://api.spotify.com/v1/search?query=Lil&type=artist&limit=2&country=US")
data = response.json()
for artist in Lil_artists:
#print(artist['name'],artist['id'])
if artist['name'] == "Lil Wayne":
wayne = artist['id']
print(artist['name'], "id is",wayne)
if artist['name'] == "Lil Yachty":
yachty = artist['id']
print(artist['name'], "id is", yachty)
#Pick two of your favorite Lils to fight it out, and use their IDs to print out their top tracks.
#Tip: You're going to be making two separate requests, be sure you DO NOT save them into the same variable.
response = requests.get("https://api.spotify.com/v1/artists/" +wayne+ "/top-tracks?country=US")
data = response.json()
tracks = data['tracks']
print("Lil Wayne's top tracks are: ")
for track in tracks:
print("-", track['name'])
print("-----------------------------------------------")
response = requests.get("https://api.spotify.com/v1/artists/" +yachty+ "/top-tracks?country=US")
data = response.json()
tracks = data['tracks']
print("Lil Yachty 's top tracks are: ")
for track in tracks:
print("-", track['name'])
"""
Explanation: 5. Two Favorite Lils and Their Top Tracks
End of explanation
"""
response = requests.get("https://api.spotify.com/v1/artists/" +yachty+ "/top-tracks?country=US")
data = response.json()
tracks = data['tracks']
#print(tracks)
#for track in tracks:
#print(track.keys())
#Get an average popularity for their explicit songs vs. their non-explicit songs.
#How many minutes of explicit songs do they have? Non-explicit?
# How explicit is Lils?
response = requests.get("https://api.spotify.com/v1/artists/" +yachty+ "/top-tracks?country=US")
data = response.json()
tracks = data['tracks']
# counter for tracks for explicit and clean
track_count = 0
clean_count = 0
#counter to find avg popularity
popular_exp = 0
popular_clean = 0
#counter for avg time in minutes are below:
timer = 0
data_timer = 0
timer_clean = 0
for track in tracks:
print("The track,", track['name'],", with the id",track['id'], "is", track['explicit'],"for explicit content, and has the popularity of", track['popularity'])
track_id = track['id']
time_ms = track['duration_ms']
# TA-COMMENT: (-1) If what is true? "if True" will always evaluate to True....
if True:
track_count = track_count + 1
popular_exp = popular_exp + track['popularity'] # TA-COMMENT: What is this supposed to capture?
# It HAPPENS to be the case that all the tracks are explicit, but if that were not true, would this be correct?
response = requests.get("https://api.spotify.com/v1/tracks/" + track_id)
data_track = response.json()
print("and has the duration of", data_track['duration_ms'], "milli seconds.")
timer = timer + time_ms
timer_minutes = ((timer / (1000*60)) % 60)
if not track['explicit']:
clean_count = clean_count + 1
popular_clean = popular_clean + track['popularity']
response = requests.get("https://api.spotify.com/v1/tracks/" + track_id)
data_tracks = response.json()
timer_clean = timer_clean + time_ms
timer_minutes_clean = ((data_timer / (1000*60)) % 60)
print(", and has the duration of", timer_minutes_clean, "minutes")
print("------------------------------------")
avg_pop = popular_exp / track_count
print("I have found", track_count, "tracks, and has the average popularity of", avg_pop, "and has the average duration of", timer_minutes,"minutes and", clean_count, "are clean")
#print("Overall, I discovered", track_count, "tracks")
#print("And", clean_count, "were non-explicit")
#print("Which means", , " percent were clean for Lil Wayne")
# TA-COMMENT: example of what happens if you do just "if True" as in the code above.
if True:
print("hello")
# TA-COMMENT: Same commends apply here.
#Get an average popularity for their explicit songs vs. their non-explicit songs.
#How many minutes of explicit songs do they have? Non-explicit?
# How explicit is Lils?
response = requests.get("https://api.spotify.com/v1/artists/" +wayne+ "/top-tracks?country=US")
data = response.json()
# counter for tracks for explicit and clean
track_count = 0
clean_count = 0
#counter to find avg popularity
popular_exp = 0
popular_clean = 0
#counter for avg time in minutes are below:
timer = 0
#data_timer = 0
timer_clean = 0
for track in tracks:
print("The track,", track['name'],", with the id",track['id'], "is", track['explicit'],"for explicit content, and has the popularity of", track['popularity'])
track_id = track['id']
time_ms = data_track['duration_ms']
if True:
track_count = track_count + 1
popular_exp = popular_exp + track['popularity']
response = requests.get("https://api.spotify.com/v1/tracks/" + track_id)
data_track = response.json()
print("and has the duration of", data_track['duration_ms'], "milli seconds.")
timer = timer + time_ms
timer_minutes = ((timer / (1000*60)) % 60)
if not track['explicit']:
clean_count = clean_count + 1
popular_clean = popular_clean + track['popularity']
response = requests.get("https://api.spotify.com/v1/tracks/" + track_id)
data_tracks = response.json()
timer_clean = timer_clean + time_ms
timer_minutes_clean = ((data_timer / (1000*60)) % 60)
print(", and has the duration of", timer_minutes_clean, "minutes")
print("------------------------------------")
avg_pop = popular_exp / track_count
print("I have found", track_count, "tracks, and has the average popularity of", avg_pop, "and has the average duration of", timer_minutes,"minutes and", clean_count, "are clean")
#print("Overall, I discovered", track_count, "tracks")
#print("And", clean_count, "were non-explicit")
#print("Which means", , " percent were clean for Lil Wayne")
"""
Explanation: 6. Average Popularity of My Fav Musicians (Above) for Their explicit songs vs. their non-explicit songs
Will the world explode if a musicians swears? Get an average popularity for their explicit songs vs. their non-explicit songs. How many minutes of explicit songs do they have? Non-explicit?
End of explanation
"""
#How many total "Biggie" artists are there? How many total "Lil"s?
#If you made 1 request every 5 seconds, how long would it take to download information on all the Lils vs the Biggies?
biggie_response = requests.get('https://api.spotify.com/v1/search?query=biggie&type=artist&country=US')
biggie_data = biggie_response.json()
biggie_artists = biggie_data['artists']['total']
print("Total number of Biggie artists are", biggie_artists)
lil_response = requests.get('https://api.spotify.com/v1/search?query=Lil&type=artist&country=US')
lil_data = lil_response.json()
lil_artists = lil_data['artists']['total']
print("Total number of Lil artists are", lil_artists)
"""
Explanation: 7a. Number of Biggies and Lils
Since we're talking about Lils, what about Biggies? How many total "Biggie" artists are there? How many total "Lil"s? If you made 1 request every 5 seconds, how long would it take to download information on all the Lils vs the Biggies?
End of explanation
"""
#If you made 1 request every 5 seconds, how long would it take to download information on all the Lils vs the Biggies?
limit_download = 50
biggie_artists = biggie_data['artists']['total']
Lil_artist = Lil_data['artists']['total']
#1n 5 sec = 50
#in 1 sec = 50 / 5 req = 10 no, for 1 no, 1/10 sec
# for 4501 = 4501/10 sec
# for 49 49/ 10 sec
big_count = biggie_artists/10
lil_count = Lil_artist / 10
print("It would take", big_count, "seconds for Biggies, where as it would take", lil_count,"seconds for Lils" )
# TA-COMMENT: (-1) If one request takes 5 seconds, then 50 requests would take (50 * 5) seconds
# (one request for each 'Biggie')
# So, 4510 Lil artists would take 4510 * 5 seconds
"""
Explanation: 7b. Time to Download All Information on Lil and Biggies
End of explanation
"""
#Out of the top 50 "Lil"s and the top 50 "Biggie"s, who is more popular on average?
biggie_response = requests.get('https://api.spotify.com/v1/search?query=biggie&type=artist&limit=50&country=US')
biggie_data = biggie_response.json()
biggie_artists = biggie_data['artists']['items']
big_count_pop = 0
for artist in biggie_artists:
#count_pop = artist['popularity']
big_count_pop = big_count_pop + artist['popularity']
print("Biggie has a total popularity of ", big_count_pop)
big_pop = big_count_pop / 49
print("Biggie is on an average", big_pop,"popular")
#Lil
Lil_response = requests.get('https://api.spotify.com/v1/search?query=Lil&type=artist&limit=50&country=US')
Lil_data = Lil_response.json()
Lil_artists = Lil_data['artists']['items']
lil_count_pop = 0
for artist in Lil_artists:
count_pop_lil = artist['popularity']
lil_count_pop = lil_count_pop + count_pop_lil
lil_pop = lil_count_pop / 50
print("Lil is on an average", lil_pop,"popular")
"""
Explanation: 8. Out of the Top 50 Lils and Biggies, Who Is More Popular on Average?
End of explanation
"""
|
ZwickyTransientFacility/simsurvey-examples
|
skymap_demo.ipynb
|
bsd-3-clause
|
import os
home_dir = os.getcwd()
# Please enter the path to where you have placed the Schlegel, Finkbeiner & Davis (1998) dust map files
# You can also set the environment variable SFD_DIR to this path (in that case the variable below should be None)
sfd98_dir = os.path.join(home_dir, 'data/sfd98')
import simsurvey
import numpy as np
import matplotlib.pyplot as plt
import sncosmo
from astropy.cosmology import Planck15
import simsurvey_tools as sst
from scipy.interpolate import RectBivariateSpline as Spline2d
from astropy.utils.data import download_file
from ligo.skymap.io import fits
from astropy.coordinates import Distance
from ligo.skymap.distance import parameters_to_marginal_moments
from astropy import units as u
import ligo.skymap.plot
simsurvey.__version__
# Load the ZTF fields, CCD corners and filters
fields = sst.load_ztf_fields()
sst.load_ztf_filters()
# Load the ZTF CCD corners
ccds = sst.load_ztf_ccds()
# Load the ZTF quadrants corners
ccds = sst.load_ztf_ccds(filename='data/ZTF_corners_rcid.txt', num_segs=64)
"""
Explanation: Simsurvey demo with skymap
End of explanation
"""
obs = {'time': [], 'field': [], 'band': [], 'maglim': [], 'skynoise': [], 'comment': [], 'zp': []}
mjd_start = 58239.5
for k in range(0, 61, 3):
obs['time'].extend([mjd_start + k + l/24. for l in range(3)])
obs['field'].extend([458 for l in range(3)])
obs['band'].extend(['ztfg', 'ztfr', 'ztfi'])
obs['maglim'].extend([22 for l in range(3)])
obs['zp'].extend([30 for l in range(3)])
obs['comment'].extend(['' for l in range(3)])
obs['skynoise'] = 10**(-0.4 * (np.array(obs['maglim']) - 30)) / 5
plan = simsurvey.SurveyPlan(time=obs['time'],
band=obs['band'],
skynoise=obs['skynoise'],
obs_field=obs['field'],
obs_ccd=None,
zp=obs['zp'],
comment=obs['comment'],
fields=fields,
ccds=ccds
)
mjd_range = (plan.pointings['time'].min() - 30, plan.pointings['time'].max() + 30)
plan.pointings
"""
Explanation: Create a survey plan
End of explanation
"""
! git clone https://github.com/mbulla/kilonova_models.git
def Bullamodel(dynwind=False, dataDir='kilonova_models/02_Dhawan2019/', mej=0.04, phi=30, temp=5000):
l = dataDir+'nph1.0e+06_mej'+'{:.2f}'.format(mej)+'_phi'+'{:.0f}'.format(phi)+'_T'+'{:.1e}'.format(temp)+'.txt'
f = open(l)
lines = f.readlines()
nobs = int(lines[0])
nwave = float(lines[1])
line3 = (lines[2]).split(' ')
ntime = int(line3[0])
t_i = float(line3[1])
t_f = float(line3[2])
cos_theta = np.linspace(0, 1, nobs) # 11 viewing angles
phase = np.linspace(t_i, t_f, ntime) # epochs
file_ = np.genfromtxt(l, skip_header=3)
wave = file_[0:int(nwave),0]
flux = []
for i in range(int(nobs)):
flux.append(file_[i*int(nwave):i*int(nwave)+int(nwave),1:])
flux = np.array(flux).T
return phase, wave, cos_theta, flux
# AngularTimeSeriesSource class defined to create an angle-dependent time series source.
class AngularTimeSeriesSource(sncosmo.Source):
"""A single-component spectral time series model.
The spectral flux density of this model is given by
.. math::
F(t, \lambda) = A \\times M(t, \lambda)
where _M_ is the flux defined on a grid in phase and wavelength
and _A_ (amplitude) is the single free parameter of the model. The
amplitude _A_ is a simple unitless scaling factor applied to
whatever flux values are used to initialize the
``TimeSeriesSource``. Therefore, the _A_ parameter has no
intrinsic meaning. It can only be interpreted in conjunction with
the model values. Thus, it is meaningless to compare the _A_
parameter between two different ``TimeSeriesSource`` instances with
different model data.
Parameters
----------
phase : `~numpy.ndarray`
Phases in days.
wave : `~numpy.ndarray`
Wavelengths in Angstroms.
cos_theta: `~numpy.ndarray`
Cosine of
flux : `~numpy.ndarray`
Model spectral flux density in erg / s / cm^2 / Angstrom.
Must have shape ``(num_phases, num_wave, num_cos_theta)``.
zero_before : bool, optional
If True, flux at phases before minimum phase will be zeroed. The
default is False, in which case the flux at such phases will be equal
to the flux at the minimum phase (``flux[0, :]`` in the input array).
name : str, optional
Name of the model. Default is `None`.
version : str, optional
Version of the model. Default is `None`.
"""
_param_names = ['amplitude', 'theta']
param_names_latex = ['A', r'\theta']
def __init__(self, phase, wave, cos_theta, flux, zero_before=True, zero_after=True, name=None,
version=None):
self.name = name
self.version = version
self._phase = phase
self._wave = wave
self._cos_theta = cos_theta
self._flux_array = flux
self._parameters = np.array([1., 0.])
self._current_theta = 0.
self._zero_before = zero_before
self._zero_after = zero_after
self._set_theta()
def _set_theta(self):
logflux_ = np.zeros(self._flux_array.shape[:2])
for k in range(len(self._phase)):
adding = 1e-10 # Here we are adding 1e-10 to avoid problems with null values
f_tmp = Spline2d(self._wave, self._cos_theta, np.log(self._flux_array[k]+adding),
kx=1, ky=1)
logflux_[k] = f_tmp(self._wave, np.cos(self._parameters[1]*np.pi/180)).T
self._model_flux = Spline2d(self._phase, self._wave, logflux_, kx=1, ky=1)
self._current_theta = self._parameters[1]
def _flux(self, phase, wave):
if self._current_theta != self._parameters[1]:
self._set_theta()
f = self._parameters[0] * (np.exp(self._model_flux(phase, wave)))
if self._zero_before:
mask = np.atleast_1d(phase) < self.minphase()
f[mask, :] = 0.
if self._zero_after:
mask = np.atleast_1d(phase) > self.maxphase()
f[mask, :] = 0.
return f
phase, wave, cos_theta, flux = Bullamodel()
source = AngularTimeSeriesSource(phase, wave, cos_theta, flux)
dust = sncosmo.CCM89Dust()
model = sncosmo.Model(source=source,effects=[dust, dust], effect_names=['host', 'MW'], effect_frames=['rest', 'obs'])
# Distribution of viewing angles
thetadist = 'uniform in cosine' # 'uniform in cosine', 'uniform in degrees', 'fixed theta'
def random_parameters(redshifts, model,r_v=2., ebv_rate=0.11,**kwargs):
# Amplitude
amp = []
for z in redshifts:
amp.append(10**(-0.4*Planck15.distmod(z).value))
if thetadist=='uniform in cosine':
theta = np.arccos(np.random.random(len(redshifts))) / np.pi * 180
elif thetadist=='uniform in degrees':
theta = np.random.uniform(0, 90,size=len(redshifts))
elif thetadist=='fixed theta':
theta = np.array([20]*len(redshifts)) # Viewing angle fixed to 20 degrees
return {
'amplitude': np.array(amp),
'theta': theta,
'hostr_v': r_v * np.ones(len(redshifts)),
'hostebv': np.random.exponential(ebv_rate, len(redshifts))
}
"""
Explanation: Transient model
In this example the transient is created using models from https://github.com/mbulla/kilonova_models
End of explanation
"""
transientprop = dict(lcmodel=model, lcsimul_func=random_parameters)
"""
Explanation: Transient Generator
End of explanation
"""
ntransient = 5000 # Set to a number to fix the number of generated transients; set ntransient = None to follow the rate.
rate = 1000 * 1e-6 # Mpc-3 yr-1
"""
Explanation: Number of injections; you can fix the number of generated transients or let it follow the rate.
End of explanation
"""
url = ('https://gracedb.ligo.org/apiweb/superevents/S190425z/files/LALInference.fits.gz')
# This is the publication LALInference localization
filename = download_file(url, cache=True)
# If you want to include a skymap
skymap, metadata = fits.read_sky_map(filename, nest=False, distances=True)
map_struct = {}
map_struct["prob"] = skymap[0]
map_struct["distmu"] = skymap[1]
map_struct["distsigma"] = skymap[2]
map_struct["distnorm"] = skymap[3]
distmean, diststd = parameters_to_marginal_moments(map_struct["prob"],
map_struct["distmu"],
map_struct["distsigma"])
distance = Distance(distmean * u.Mpc)
distance_lower = Distance((distmean - 3*diststd) * u.Mpc)
distance_upper = Distance((distmean + 3*diststd) * u.Mpc)
z_min = distance_lower.z
z_max = distance_upper.z
tr = simsurvey.get_transient_generator([z_min, z_max],
ntransient=ntransient,
ratefunc=lambda z: rate,
dec_range=(-90,90),
ra_range=(0,360),
mjd_range=(mjd_range[0],
mjd_range[1]),
transientprop=transientprop,
sfd98_dir=sfd98_dir,
skymap=map_struct
)
survey = simsurvey.SimulSurvey(generator=tr, plan=plan)
lcs = survey.get_lightcurves(
progress_bar=True, notebook=True # If you get an error because of the progress_bar, delete this line.
)
len(lcs.lcs)
"""
Explanation: Simsurvey by default distributes transients uniformly in RA and DEC. If you wish to use a skymap, you can pass skymap to simsurvey.get_transient_generator:
End of explanation
"""
lcs.save('lcs_skymap.pkl')
"""
Explanation: Save
End of explanation
"""
_ = sncosmo.plot_lc(lcs[0])
plt.figure()
ax = plt.axes()
ax.grid()
ax.scatter(lcs.meta_notobserved['ra'], lcs.meta_notobserved['dec'], marker='*', label='meta_notobserved', alpha=0.7)
ax.scatter(lcs.meta_rejected['ra'], lcs.meta_rejected['dec'], marker='*', label='meta_rejected', alpha=0.7)
ax.scatter(lcs.meta['ra'], lcs.meta['dec'], marker='*', label='meta_detected', alpha=0.7)
#ax.legend(loc='center left', bbox_to_anchor=(0.9, .5))
ax.legend(loc=0)
ax.set_ylabel('DEC (deg)')
ax.set_xlabel('RA (deg)')
plt.tight_layout()
plt.show()
plt.figure()
ax = plt.axes(
[0.05, 0.05, 0.9, 0.9],
projection='geo degrees mollweide'
)
ax.grid()
sorted_prob = np.flip(sorted(map_struct['prob']))
def ci(level, sorted_prob=sorted_prob):
csum = 0
c = 0
index = 0
while csum < level:
csum += sorted_prob[index]
c = sorted_prob[index]
index += 1
return csum, c, index
c90sum, c90, index = ci(.9)
c50sum, c50, index = ci(.5)
levels = [c90, c50]
cs = ax.contour_hpx(map_struct['prob'], levels=levels, colors='black')
fmt = {}
strs = ['90%', '50%']
for l, s in zip(cs.levels, strs):
fmt[l] = s
ax.clabel(cs,fmt=fmt)
ax.scatter(lcs.meta_notobserved['ra'], lcs.meta_notobserved['dec'], transform=ax.get_transform('world'), marker='*', label='meta_notobserved', alpha=0.7)
ax.scatter(lcs.meta_rejected['ra'], lcs.meta_rejected['dec'], transform=ax.get_transform('world'), marker='*', label='meta_rejected', alpha=0.7)
ax.scatter(lcs.meta['ra'], lcs.meta['dec'], transform=ax.get_transform('world'), marker='*', label='meta_detected', alpha=0.7)
ax.legend(loc=0)
ax.set_ylabel('DEC (deg)')
ax.set_xlabel('RA (deg)')
plt.tight_layout()
plt.show()
"""
Explanation: Output
lcs.lcs contains the detected lightcurves
lcs.meta contains parameters for the detected lightcurves
lcs.meta_full contains parameters for all the injections within the observed area.
lcs.meta_rejected contains parameters for all the injections within the observed area that were not detected.
lcs.meta_notobserved contains parameters for all the injections outside the observed area.
End of explanation
"""
|
phoebe-project/phoebe2-docs
|
2.3/tutorials/beaming_boosting.ipynb
|
gpl-3.0
|
#!pip install -I "phoebe>=2.3,<2.4"
"""
Explanation: Beaming and Boosting
Due to concerns about accuracy, support for Beaming & Boosting has been disabled as of the 2.2 release of PHOEBE (although we hope to bring it back in a future release).
It may come as a surprise that support for Doppler boosting has been dropped in PHOEBE 2.2. This document details the underlying causes for that decision and explains the conditions that need to be met for boosting to be re-incorporated into PHOEBE.
Let's start by reviewing the theory behind Doppler boosting. The motion of the stars towards or away from the observer changes the amount of received flux due to three effects:
the spectrum is Doppler-shifted, so the flux, being the passband-weighted integral of the spectrum, changes;
the photons' arrival rate changes due to time dilation; and
radiation is beamed in the direction of motion due to light aberration.
It turns out that the combined boosting signal can be written as:
$$ I_\lambda = I_{\lambda,0} \left( 1 - B(\lambda) \frac{v_r}c \right), $$
where $I_{\lambda,0}$ is the intrinsic (rest-frame) passband intensity, $I_\lambda$ is the boosted passband intensity, $v_r$ is radial velocity, $c$ is the speed of light and $B(\lambda)$ is the boosting index:
$$ B(\lambda) = 5 + \frac{\mathrm{d}\,\mathrm{ln}\, I_\lambda}{\mathrm{d}\,\mathrm{ln}\, \lambda}. $$
The term $\mathrm{d}(\mathrm{ln}\, I_\lambda) / \mathrm{d}(\mathrm{ln}\, \lambda)$ is called spectral index. As $I_\lambda$ depends on $\lambda$, we average it across the passband:
$$ B_\mathrm{pb} = \frac{\int_\lambda \mathcal{P}(\lambda) \mathcal S(\lambda) B(\lambda) \mathrm d\lambda}{\int_\lambda \mathcal{P}(\lambda) \mathcal S(\lambda) \mathrm d\lambda}. $$
In what follows we will code up these steps and demonstrate the inherent difficulty of realizing a robust, reliable treatment of boosting.
Let's first make sure we have the latest version of PHOEBE 2.3 installed (uncomment this line if running in an online notebook session such as colab).
End of explanation
"""
import phoebe
import numpy as np
import matplotlib.pyplot as plt
from astropy.io import fits
"""
Explanation: Import all python modules that we'll need:
End of explanation
"""
wl = np.arange(900., 39999.501, 0.5)/1e10
with fits.open('T06000G40P00.fits') as hdu:
Imu = 1e7*hdu[0].data
"""
Explanation: Pull a set of Sun-like emergent intensities as a function of $\mu = \cos \theta$ from the Castelli and Kurucz database of model atmospheres (the necessary file can be downloaded from here):
End of explanation
"""
Inorm = Imu[-1,:]
"""
Explanation: Grab only the normal component for testing purposes:
End of explanation
"""
pb = phoebe.get_passband('Johnson:V')
"""
Explanation: Now let's load a Johnson V passband and the transmission function $P(\lambda)$ contained within:
End of explanation
"""
keep = (wl >= pb.ptf_table['wl'][0]) & (wl <= pb.ptf_table['wl'][-1])
Inorm = Inorm[keep]
wl = wl[keep]
"""
Explanation: Trim the wavelength interval to the range covered by the passband:
End of explanation
"""
plt.plot(wl, Inorm*pb.ptf(wl), 'b-')
plt.show()
"""
Explanation: Calculate $S(\lambda) P(\lambda)$ and plot it, to make sure everything so far makes sense:
End of explanation
"""
lnwl = np.log(wl)
lnI = np.log(Inorm)
plt.xlabel(r'$\mathrm{ln}\,\lambda$')
plt.ylabel(r'$\mathrm{ln}\,I_\lambda$')
plt.plot(lnwl, lnI, 'b-')
plt.show()
"""
Explanation: Now let's compute the term $\mathrm{d}(\mathrm{ln}\, I_\lambda) / \mathrm{d}(\mathrm{ln}\, \lambda)$. First we will compute $\mathrm{ln}\,\lambda$ and $\mathrm{ln}\,I_\lambda$ and plot them:
End of explanation
"""
envelope = np.polynomial.legendre.legfit(lnwl, lnI, 5)
continuum = np.polynomial.legendre.legval(lnwl, envelope)
diff = lnI-continuum
sigma = np.std(diff)
clipped = (diff > -sigma)
while True:
Npts = clipped.sum()
envelope = np.polynomial.legendre.legfit(lnwl[clipped], lnI[clipped], 5)
continuum = np.polynomial.legendre.legval(lnwl, envelope)
diff = lnI-continuum
clipped = clipped & (diff > -sigma)
if clipped.sum() == Npts:
break
plt.xlabel(r'$\mathrm{ln}\,\lambda$')
plt.ylabel(r'$\mathrm{ln}\,I_\lambda$')
plt.plot(lnwl, lnI, 'b-')
plt.plot(lnwl, continuum, 'r-')
plt.show()
"""
Explanation: Per equation above, $B(\lambda)$ is then the slope of this curve (plus 5). Herein lies the problem: what part of this graph do we fit a line to? In versions 2 and 2.1, PHOEBE used a 5th order Legendre polynomial to fit the spectrum and then sigma-clipping to get to the continuum. Finally, it computed an average derivative of that Legendrian and proclaimed that $B(\lambda)$. The order of the Legendre polynomial and the values of sigma for sigma-clipping have been set ad-hoc and kept fixed for every single spectrum.
End of explanation
"""
dlnwl = lnwl[1:]-lnwl[:-1]
dlnI = lnI[1:]-lnI[:-1]
B = dlnI/dlnwl
plt.plot(0.5*(wl[1:]+wl[:-1]), B, 'b-')
plt.show()
"""
Explanation: It is clear that there are pretty strong systematics here that we sweep under the rug. Thus, we need to revise the way we compute the spectral index and make it robust before we claim that we support boosting.
For fun, this is what would happen if we tried to estimate $B(\lambda)$ at each $\lambda$:
End of explanation
"""
|
kerimlcr/ab2017-dpyo
|
ornek/osmnx/osmnx-0.3/examples/08-example-line-graph.ipynb
|
gpl-3.0
|
import osmnx as ox, networkx as nx, matplotlib.cm as cm, matplotlib.colors as colors
%matplotlib inline
ox.config(log_console=True, use_cache=True)
"""
Explanation: Street network analysis when a street is a node
In some traditions of street network research, a street becomes a node, and two streets are connected by an edge when they touch each other. In OSMnx, by contrast, the intersections of streets become nodes and the streets themselves become edges. Therefore, in order to build a network according to the former tradition, you have to transform the network that OSMnx generates. Fortunately, NetworkX has a line_graph() function that does exactly that operation. This example demonstrates the process of transforming a city's street network into a line graph using OSMnx and calculating and coloring the closeness centrality for this line graph.
End of explanation
"""
# get the network and project it
G = ox.graph_from_place('Piedmont, California, USA', network_type='drive')
G_projected = ox.project_graph(G)
# plot it: nodes in this network are street junctions
fig, ax = ox.plot_graph(G_projected, node_size=8, node_color='k', node_zorder=2,
edge_color='gray', edge_linewidth=0.5, edge_alpha=0.8)
"""
Explanation: Create a street network using OSMnx
End of explanation
"""
# NetworkX's line_graph() creates a line graph
L=nx.line_graph(G_projected)
# calculate closeness of each street from the line graph
closeness = nx.closeness_centrality(L)
"""
Explanation: Let's convert the graph into a line graph so that the streets become nodes in the new graph.
End of explanation
"""
# list of edge values for the original graph
ev = [closeness[edge + (0,)] for edge in G_projected.edges()]
# color scale converted to list of colors for graph edges
norm = colors.Normalize(vmin=min(ev), vmax=max(ev))
cmap = cm.ScalarMappable(norm=norm, cmap=cm.viridis)
ec= [cmap.to_rgba(cl) for cl in ev]
# color the edges in the original graph with closeness centralities in the line graph
fig, ax = ox.plot_graph(G_projected, node_size=5, node_color='k', node_zorder=2,
edge_color=ec, edge_linewidth=1.5, edge_alpha=1)
"""
Explanation: We are going to color streets in the original graph with their closeness centralities.
End of explanation
"""
|
mne-tools/mne-tools.github.io
|
0.13/_downloads/plot_channel_epochs_image.ipynb
|
bsd-3-clause
|
# Authors: Alexandre Gramfort <alexandre.gramfort@telecom-paristech.fr>
#
# License: BSD (3-clause)
import numpy as np
import matplotlib.pyplot as plt
import mne
from mne import io
from mne.datasets import sample
print(__doc__)
data_path = sample.data_path()
"""
Explanation: Visualize channel over epochs as an image
This will produce what is sometimes called an event related
potential / field (ERP/ERF) image.
Two images are produced: one with a good channel and one with a channel
that does not see any evoked field.
It is also demonstrated how to reorder the epochs using a 1d spectral
embedding as described in:
Graph-based variability estimation in single-trial event-related neural
responses A. Gramfort, R. Keriven, M. Clerc, 2010,
Biomedical Engineering, IEEE Trans. on, vol. 57 (5), 1051-1061
https://hal.inria.fr/inria-00497023
End of explanation
"""
raw_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw.fif'
event_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw-eve.fif'
event_id, tmin, tmax = 1, -0.2, 0.5
# Setup for reading the raw data
raw = io.read_raw_fif(raw_fname)
events = mne.read_events(event_fname)
# Set up pick list: EEG + MEG - bad channels (modify to your needs)
raw.info['bads'] = ['MEG 2443', 'EEG 053']
picks = mne.pick_types(raw.info, meg='grad', eeg=False, stim=True, eog=True,
exclude='bads')
# Read epochs
epochs = mne.Epochs(raw, events, event_id, tmin, tmax, proj=True,
picks=picks, baseline=(None, 0), preload=True,
reject=dict(grad=4000e-13, eog=150e-6))
"""
Explanation: Set parameters
End of explanation
"""
# and order with spectral reordering
# If you don't have scikit-learn installed set order_func to None
from sklearn.cluster.spectral import spectral_embedding # noqa
from sklearn.metrics.pairwise import rbf_kernel # noqa
def order_func(times, data):
this_data = data[:, (times > 0.0) & (times < 0.350)]
this_data /= np.sqrt(np.sum(this_data ** 2, axis=1))[:, np.newaxis]
return np.argsort(spectral_embedding(rbf_kernel(this_data, gamma=1.),
n_components=1, random_state=0).ravel())
good_pick = 97 # channel with a clear evoked response
bad_pick = 98 # channel with no evoked response
plt.close('all')
mne.viz.plot_epochs_image(epochs, [good_pick, bad_pick], sigma=0.5, vmin=-100,
vmax=250, colorbar=True, order=order_func, show=True)
"""
Explanation: Show event-related field images
End of explanation
"""
|
mbakker7/ttim
|
notebooks/ttim_slugtest.ipynb
|
mit
|
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
from scipy.optimize import fmin
import pandas as pd
from ttim import *
# problem definitions
rw = 0.125 # well radius
rc = 0.064 # well casing radius
L = 1.52 # screen length
zbot = -47.87 # aquifer thickness
welltop = -16.77 # top of screen
delh = 0.671 # slug displacement in the well
#
wellbot = welltop - L # bottom of screen
Q = np.pi * rc**2 * delh # volume of slug
# loading data
data = np.loadtxt('data/slugtest.txt') # time and drawdown
time, dd = data[:,0], data[:,1]
td = time/60/60/24 #time in days
print('minimum and maximum time:', td.min(), td.max())
dd
"""
Explanation: Slug test analysis in an unconfined aquifer
The data is taken from the AQTESOLVE website.
Butler (1998) presents results from a slug test in a partially penetrating well that is screened in unconsolidated alluvial deposits consisting of sand and gravel with interbedded clay. The aquifer has a thickness $H=47.87$ m. The depth to the top of the well screen is 16.7 m, and the screen of the well is 1.52 m long. The radius of the well is 0.125 m, and the radius of the casing is 0.064 m. The slug displacement is 0.671 m.
End of explanation
"""
ml = Model3D(kaq=100, z=[0, -0.5, welltop, wellbot, zbot],
Saq=1e-4, kzoverkh=1, tmin=1e-6, tmax=0.01)
w = Well(ml, xw=0, yw=0, rw=rw, tsandQ=[(0.0, -Q)],
layers=2, rc=rc, wbstype='slug')
ml.solve()
print('k:', ml.aq.kaq)
print('T: ', ml.aq.T)
print('c: ', ml.aq.c)
cal = Calibrate(ml)
cal.set_parameter(name='kaq0_3', initial=10)
cal.set_parameter(name='Saq0_3', initial=1e-3)
cal.series(name='obs1', x=0, y=0, layer=2, t=td, h=dd)
cal.fit()
print('k:', ml.aq.kaq)
print('T: ', ml.aq.T)
print('c: ', ml.aq.c)
hm = ml.head(0, 0, td, layers=2)
plt.figure(figsize=(12, 6))
plt.semilogx(time, dd / delh, 'ko', label='Observed')
plt.semilogx(time, hm[0] / delh, 'b', label='TTim')
plt.ylim([0, 1])
plt.xlabel('time [s]')
plt.ylabel('h / delh')
plt.legend(loc='best')
plt.title('TTim Slug Test Analysis');
r = pd.DataFrame(columns=['Kr [m/day]','Ss [1/m]'],
index=['TTim', 'AQTESOLV'])
r.loc['TTim'] = cal.parameters['optimal'].values
r.loc['AQTESOLV'] = [4.034, 0.000384]
r
"""
Explanation: Flow is simulated with a quasi three-dimensional model consisting of a number of model layers. The top and bottom of the aquifer are impermeable.
The horizontal hydraulic conductivity $k$ and elastic storage $S_s$ are unknown. Phreatic storage and vertical anisotropy are not simulated. The well is modeled with the Well element. The type is specified as slug, and the initially displaced volume is specified as $Q$. (In the fmin verification at the end of the notebook, the variable p contains the two unknown parameters.)
End of explanation
"""
def sse(p, returnheads=False):
ml = Model3D(kaq=p[0], z=[0, -0.5, welltop, wellbot, zbot],
Saq=p[1], kzoverkh=1, tmin=1e-6, tmax=0.01)
w = Well(ml, xw=0, yw=0, rw=rw, tsandQ=[(0.0, -Q)],
layers=2, rc=rc, wbstype='slug')
ml.solve(silent = '.')
hm = ml.head(0, 0, td, 2)
if returnheads: return hm
se = np.sum((hm[0] - dd)**2)
return se
popt = fmin(sse, [3, 1e-4])
print('optimal parameters:', popt)
print('sse:', sse(popt))
"""
Explanation: Verify with fmin
End of explanation
"""
|
OceanPARCELS/parcels
|
parcels/examples/tutorial_Agulhasparticles.ipynb
|
mit
|
from parcels import FieldSet, ParticleSet, JITParticle, AdvectionRK4, ErrorCode
from datetime import timedelta
import numpy as np
"""
Explanation: Tutorial showing how to create Parcels in Agulhas animated gif
This brief tutorial shows how to recreate the animated gif showing particles in the Agulhas region south of Africa.
We start with importing the relevant modules
End of explanation
"""
filenames = "GlobCurrent_example_data/20*.nc"
variables = {'U': 'eastward_eulerian_current_velocity',
'V': 'northward_eulerian_current_velocity'}
dimensions = {'lat': 'lat',
'lon': 'lon',
'time': 'time'}
fieldset = FieldSet.from_netcdf(filenames, variables, dimensions)
"""
Explanation: Now load the Globcurrent fields from the GlobCurrent_example_data directory (note that unlike in the main Parcels tutorial we don't use a dictionary for the filenames here; as they are the same for all variables, we don't need to)
End of explanation
"""
lons, lats = np.meshgrid(range(15, 35), range(-40, -30))
pset = ParticleSet(fieldset=fieldset, pclass=JITParticle, lon=lons, lat=lats)
"""
Explanation: Now create vectors of Longitude and Latitude starting locations on a regular mesh, and use these to initialise a ParticleSet object.
End of explanation
"""
def DeleteParticle(particle, fieldset, time):
particle.delete()
"""
Explanation: Now we want to advect the particles. However, the Globcurrent data that we loaded in is only for a limited, regional domain and particles might be able to leave this domain. We therefore need to tell Parcels that particles that leave the domain need to be deleted. We do that using a Recovery Kernel, which will be invoked when a particle encounters an ErrorOutOfBounds error:
End of explanation
"""
for cnt in range(3):
# First plot the particles
pset.show(savefile='particles'+str(cnt).zfill(2), field='vector', land=True, vmax=2.0)
# Then advect the particles for 6 hours
pset.execute(AdvectionRK4,
runtime=timedelta(hours=6), # runtime controls the interval of the plots
dt=timedelta(minutes=5),
recovery={ErrorCode.ErrorOutOfBounds: DeleteParticle}) # the recovery kernel
"""
Explanation: Now we can advect the particles. Note that we do this inside a for-loop, so we can save a plot every six hours (which is the value of runtime). See the plotting tutorial for more information on the pset.show() method.
End of explanation
"""
|
robertoalotufo/ia898
|
src/pconv.ipynb
|
mit
|
def pconv(f,h):
import numpy as np
h_ind=np.nonzero(h)
f_ind=np.nonzero(f)
if len(h_ind[0])>len(f_ind[0]):
h, f = f, h
h_ind,f_ind= f_ind,h_ind
gs = np.maximum(np.array(f.shape),np.array(h.shape))
if (f.dtype == 'complex') or (h.dtype == 'complex'):
g = np.zeros(gs,dtype='complex')
else:
g = np.zeros(gs)
f1 = g.copy()
f1[f_ind]=f[f_ind]
if f.ndim == 1:
(W,) = gs
col = np.arange(W)
for cc in h_ind[0]:
g[:] += f1[(col-cc)%W] * h[cc]
elif f.ndim == 2:
H,W = gs
row,col = np.indices(gs)
for rr,cc in np.transpose(h_ind):
g[:] += f1[(row-rr)%H, (col-cc)%W] * h[rr,cc]
else:
Z,H,W = gs
d,row,col = np.indices(gs)
for dd,rr,cc in np.transpose(h_ind):
g[:] += f1[(d-dd)%Z, (row-rr)%H, (col-cc)%W] * h[dd,rr,cc]
return g
"""
Explanation: pconv - Periodic convolution, kernel origin at array origin
Synopse
1D, 2D or 3D Periodic convolution. (kernel origin at array origin)
g = pconv(f, h)
g: Image. Output image.
f: Image. Input image.
h: Image. PSF (point spread function), or kernel. The origin is at the array origin.
Description
Perform a 1D, 2D or 3D discrete periodic convolution. The kernel origin is at the origin of image h.
Both image and kernel are periodic with same period. Usually the kernel h is smaller than the image f,
so h is padded with zero until the size of f. Supports complex images.
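Concretely, with both arrays zero-padded to the common period $H \times W$, the 2D branch of the code above computes
$$ g(r,c) = \sum_{(i,j)\,:\,h(i,j) \neq 0} h(i,j)\, f\big((r-i) \bmod H,\ (c-j) \bmod W\big), $$
and the 1D and 3D branches are analogous.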
End of explanation
"""
testing = (__name__ == '__main__')
if testing:
! jupyter nbconvert --to python pconv.ipynb
import numpy as np
%matplotlib inline
import matplotlib.image as mpimg
import matplotlib.pyplot as plt
import sys,os
ia898path = os.path.abspath('../../')
if ia898path not in sys.path:
sys.path.append(ia898path)
import ia898.src as ia
"""
Explanation: Examples
End of explanation
"""
if testing:
f = np.array([0,0,0,1,0,0,0,0,1])
print("f:",f)
h = np.array([1,2,3])
print("h:",h)
g1 = ia.pconv(f,h)
g2 = ia.pconv(h,f)
print("g1:",g1)
print("g2:",g2)
"""
Explanation: Numerical Example 1D
End of explanation
"""
if testing:
f = np.array([[1,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0],
[0,0,0,1,0,0,0,0,0],
[0,0,0,0,0,0,0,0,1],
[0,0,0,0,0,0,0,0,0]])
print("Image (f):")
print(f)
h = np.array([[1,2,3],
[4,5,6]])
print("\n Image Kernel (h):")
print(h)
g1 = ia.pconv(f,h)
print("Image Output (g1=f*h):")
print(g1)
g2 = ia.pconv(h,f)
print("Image Output (g2=h*f):")
print(g2)
"""
Explanation: Numerical Example 2D
End of explanation
"""
if testing:
f = np.zeros((3,3,3))
#f[0,1,1] = 1
f[1,1,1] = 1
#f[2,1,1] = 1
print("\n Image Original (F): ")
print(f)
h = np.array([[[ 1, 2, 3 ],
[ 3, 4, 5 ],
[ 5, 6, 7 ]],
[[ 8, 9, 10],
[11, 12, 13],
[14, 15, 16]],
[[17, 18, 19],
[20, 21, 22],
[23, 24, 25]]])
print("\n Image Kernel (H): ")
print(h)
result = ia.pconv(f,h)
print("\n Image Output - (G): ")
print(result)
"""
Explanation: Numerical Example 3D
End of explanation
"""
if testing:
f = mpimg.imread('../data/cameraman.tif')
ia.adshow(f, title = 'a) - Original Image')
h = np.array([[-1,-1,-1],
[ 0, 0, 0],
[ 1, 1, 1]])
g = ia.pconv(f,h)
print("\nPrewitt's Mask")
print(h)
gn = ia.normalize(g, [0,255])
ia.adshow(gn, title = 'b) Prewitt´s Mask filtering')
ia.adshow(ia.normalize(abs(g)), title = 'c) absolute of Prewitt´s Mask filtering')
"""
Explanation: Example with Image 2D
End of explanation
"""
|
google-aai/sc17
|
cats/nn_demo_part2.ipynb
|
apache-2.0
|
import numpy as np
# Set up the data and network:
n_outputs = 5 # We're attempting to learn XOR in this example, so our inputs and outputs will be the same.
n_hidden_units = 10 # We'll use a single hidden layer with this number of hidden units in it.
n_obs = 500 # How many observations of the XOR input to output vector will we use for learning?
# How quickly do we want to update our weights?
learning_rate = 0.1
# How many times will we try to use each observation to improve the weights?
epochs = 10 # Think of this as iterations if you like.
# Set random seed so that the exercise works out the same way for everyone:
np.random.seed(42)
"""
Explanation: Basics of Neural Networks - Keras Demo
Author(s): ronbodkin@google.com, kozyr@google.com, bfoo@google.com
We show how to train a very simple neural network from scratch (but let's upgrade from numpy to keras). Keras is a higher-level API that makes TensorFlow easier to work with.
Setup
Identical to numpy version
End of explanation
"""
import tensorflow as tf
# Which version of TensorFlow are we using?
print(tf.__version__)
# Add keras to runtime
!pip install keras
# Import keras and basic types of NN layers we will use
# Keras is a higher-level API for neural networks that works with TensorFlow
import keras
from keras.models import Sequential
from keras.layers import Dense, Activation
import keras.utils as np_utils
"""
Explanation: Only in keras version
End of explanation
"""
# Create the inputs:
training_vectors = np.random.binomial(1, 0.5, (n_obs, n_outputs))
# Each row is a binary vector to learn from.
print('One instance with ' + str(n_outputs) + ' features: ' + str(training_vectors[0]))
# Create the correct XOR outputs (t is for target):
xor_training_vectors = training_vectors ^ 1 # This is just XOR, everything is deterministic.
print('Correct label (simply XOR): ' + str(xor_training_vectors[0]))
"""
Explanation: Create some data to learn from
Identical to numpy version
End of explanation
"""
# 2 layer model with ReLU for hidden layer, sigmoid for output layer
# Uncomment below to try a 3 layer model with two hidden layers
model = Sequential()
model.add(Dense(units=n_hidden_units, input_dim=n_outputs))
model.add(Activation('relu'))
#model.add(Dense(units=n_hidden_units))
#model.add(Activation('sigmoid'))
model.add(Dense(units=n_outputs))
model.add(Activation('sigmoid'))
# Time to choose an optimizer. Let's use SGD:
sgd = keras.optimizers.SGD(lr=learning_rate, decay=1e-6, momentum=0.9, nesterov=True)
# Set up model using cross-entropy loss with SGD optimizer:
model.compile(optimizer=sgd,
loss='binary_crossentropy',
metrics=['accuracy'])
"""
Explanation: Build the network directly
There's no need to write the loss and activation functions from scratch, compute their derivatives, or implement forward and backprop ourselves. We'll just select them and keras will take care of it. Thanks, keras!
Only in keras version
End of explanation
"""
# Fit model:
model.fit(training_vectors, xor_training_vectors, epochs=epochs)
"""
Explanation: Train the neural network!
Only in keras version
End of explanation
"""
# Print performance to screen:
def get_performance(n_valid):
"""Computes performance and prints it to screen.
Args:
n_valid: number of validation instances we'd like to simulate.
Returns:
None
"""
flawless_tracker = []
validation_vectors = np.random.binomial(1, 0.5, (n_valid, n_outputs))
xor_validation_vectors = validation_vectors ^ 1
loss_and_metrics = model.evaluate(validation_vectors,
xor_validation_vectors, batch_size=n_valid)
print(loss_and_metrics)
for i in range(n_valid):
predicted = model.predict(np.reshape(validation_vectors[i], (1,-1)), 1)
labels = (predicted > 0.5).astype(int)[0,]
if i < 3:
print('********')
print('Challenge ' + str(i + 1) + ': ' + str(validation_vectors[i]))
print('Predicted ' + str(i + 1) + ': ' + str(labels))
print('Correct ' + str(i + 1) + ': ' + str(xor_validation_vectors[i]))
instance_score = (np.array_equal(labels, xor_validation_vectors[i]))
flawless_tracker.append(instance_score)
print('\nProportion of flawless instances on ' + str(n_valid) +
' new examples: ' + str(round(100*np.mean(flawless_tracker),0)) + '%')
get_performance(5000)
"""
Explanation: Validate
Almost identical to numpy version
The only difference relates to the use of model.predict() and model.evaluate(). See loss_and_metrics and predicted.
End of explanation
"""
|
miaecle/deepchem
|
examples/tutorials/19_Large_Scale_Chemical_Screens.ipynb
|
mit
|
from deepchem.molnet.load_function import hiv_datasets
from deepchem.models import GraphConvModel
from deepchem.data import NumpyDataset
from sklearn.metrics import average_precision_score
import numpy as np
tasks, all_datasets, transformers = hiv_datasets.load_hiv(featurizer="GraphConv")
train, valid, test = [NumpyDataset.from_DiskDataset(x) for x in all_datasets]
model = GraphConvModel(1, mode="classification")
model.fit(train)
y_true = np.squeeze(valid.y)
y_pred = model.predict(valid)[:,0,1]
print("Average Precision Score:%s" % average_precision_score(y_true, y_pred))
sorted_results = sorted(zip(y_pred, y_true), reverse=True)
hit_rate_100 = sum(x[1] for x in sorted_results[:100]) / 100
print("Hit Rate Top 100: %s" % hit_rate_100)
"""
Explanation: Tutorial Part 19: Screening Zinc For HIV Inhibition
In this tutorial we will walk through how to efficiently screen a large compound library (ZINC) with DeepChem. Screening a large compound library using machine learning is a CPU-bound, pleasingly parallel problem. The actual code examples I will use assume the resources available are a single very big machine (like an AWS c5.18xlarge), but should be readily swappable for other systems (like a supercomputing cluster). At a high level what we will do is...
Create a Machine Learning Model Over Labeled Data
Transform ZINC into "Work-Units"
Create an inference script which runs predictions over a "Work-Unit"
Load "Work-Unit" into a "Work Queue"
Consume work units from "Work Queue"
Gather Results
This tutorial is unlike the previous tutorials in that it's designed to be run on AWS rather than on Google Colab. That's because we'll need access to a large machine with many cores to do this computation efficiently. We'll try to provide details about how to do this throughout the tutorial.
1. Train Model On Labelled Data
We are just going to knock out a simple model here. In a real-world problem you will probably try several models and do a little hyperparameter searching.
End of explanation
"""
tasks, all_datasets, transformers = hiv_datasets.load_hiv(featurizer="GraphConv", split=None)
model = GraphConvModel(1, mode="classification", model_dir="/tmp/zinc/screen_model")
model.fit(all_datasets[0])
"""
Explanation: Retrain Model Over Full Dataset For The Screen
End of explanation
"""
import os
work_units = os.listdir('/tmp/zinc/screen')
with open('/tmp/zinc/work_queue.sh', 'w') as fout:
    fout.write("#!/bin/bash\n")
    for work_unit in work_units:
        # The work units were listed from /tmp/zinc/screen, so join against that directory
        full_path = os.path.join('/tmp/zinc/screen', work_unit)
        # Write one command per line so the queue can be consumed line by line
        fout.write("python inference.py %s\n" % full_path)
"""
Explanation: 2. Create Work-Units
Download All of ZINC15.
Go to http://zinc15.docking.org/tranches/home and download all non-empty tranches in .smi format.
I found it easiest to download the wget script and then run the wget script.
For the rest of this tutorial I will assume zinc was downloaded to /tmp/zinc.
The way zinc downloads the data isn't great for inference. We want "Work-Units" that a single CPU can execute in a reasonable amount of time (10 minutes to an hour). To accomplish this we are going to split the zinc data into files of 500 thousand lines each.
bash
mkdir /tmp/zinc/screen
find /tmp/zinc -name '*.smi' -exec cat {} \; | grep -iv "smiles" \
| split -l 500000 - /tmp/zinc/screen/segment
This bash command
1. Finds all smi files
2. prints to stdout the contents of the file
3. removes header lines
4. splits into multiple files in /tmp/zinc/screen that are 500k molecules long
3. Create Inference Script
Now that we have work units we need to construct a program which ingests a work unit and logs the result. It is important that the logging mechanism is thread safe!
For this example we will get the work unit via a file-path, and log the result to a file.
An easy extension to distribute over multiple computers would be to fetch the work unit via a URL and log the results to a distributed queue.
Here is what mine looks like
inference.py
```python
import sys
import deepchem as dc
import numpy as np
from rdkit import Chem
import pickle
import os
def create_dataset(fname, batch_size=50000):
featurizer = dc.feat.ConvMolFeaturizer()
fin = open(fname)
mols, orig_lines = [], []
for line in fin:
line = line.strip().split()
try:
mol = Chem.MolFromSmiles(line[0])
if mol is None:
continue
mols.append(mol)
orig_lines.append(line)
except:
pass
if len(mols) > 0 and len(mols) % batch_size == 0:
features = featurizer.featurize(mols)
y = np.ones(shape=(len(mols), 1))
ds = dc.data.NumpyDataset(features, y)
yield ds, orig_lines
mols, orig_lines = [], []
if len(mols) > 0:
features = featurizer.featurize(mols)
y = np.ones(shape=(len(mols), 1))
ds = dc.data.NumpyDataset(features, y)
yield ds, orig_lines
def evaluate(fname):
fout_name = "%s_out.smi" % fname
model = dc.models.TensorGraph.load_from_dir('screen_model')
for ds, lines in create_dataset(fname):
y_pred = np.squeeze(model.predict(ds), axis=1)
with open(fout_name, 'a') as fout:
for index, line in enumerate(lines):
line.append(y_pred[index][1])
line = [str(x) for x in line]
line = "\t".join(line)
fout.write("%s\n" % line)
if __name__ == "__main__":
evaluate(sys.argv[1])
```
4. Load "Work-Unit" into a "Work Queue"
We are going to use a flat file as our distribution mechanism. It will be a bash script calling our inference script for every work unit. If you are at an academic institution this would be queueing your jobs in pbs/qsub/slurm. An option for cloud computing would be rabbitmq or kafka.
End of explanation
"""
from rdkit import Chem
from rdkit.Chem.Draw import IPythonConsole
from IPython.display import SVG
from rdkit.Chem.Draw import rdMolDraw2D
best_mols = [Chem.MolFromSmiles(x.strip().split()[0]) for x in open('/tmp/zinc/screen/top_100k.smi').readlines()[:100]]
best_scores = [x.strip().split()[2] for x in open('/tmp/zinc/screen/top_100k.smi').readlines()[:100]]
print(best_scores[0])
best_mols[0]
print(best_scores[1])
best_mols[1]
print(best_scores[2])
best_mols[2]
print(best_scores[3])
best_mols[3]
"""
Explanation: 5. Consume work units from "distribution mechanism"
We will consume work units from our work queue using a very simple Process Pool. It takes lines from our "Work Queue" and runs them, running as many processes in parallel as we have CPUs. If you are using a supercomputing cluster system like pbs/qsub/slurm it will take care of this for you. The key is to use one CPU per work unit to get the highest throughput. We accomplish that here using the linux utility "taskset".
Using a c5.18xlarge on AWS this will finish overnight.
process_pool.py
```python
import multiprocessing
import sys
from multiprocessing.pool import Pool
import delegator
def run_command(args):
q, command = args
cpu_id = q.get()
try:
command = "taskset -c %s %s" % (cpu_id, command)
print("running %s" % command)
c = delegator.run(command)
print(c.err)
print(c.out)
except Exception as e:
print(e)
q.put(cpu_id)
def main(n_processors, command_file):
commands = [x.strip() for x in open(command_file).readlines()]
commands = list(filter(lambda x: not x.startswith("#"), commands))
q = multiprocessing.Manager().Queue()
for i in range(n_processors):
q.put(i)
argslist = [(q, x) for x in commands]
pool = Pool(processes=n_processors)
pool.map(run_command, argslist)
if __name__ == "__main__":
processors = multiprocessing.cpu_count()
main(processors, sys.argv[1])
```
```bash
python process_pool.py /tmp/zinc/work_queue.sh
```
6. Gather Results
Since we logged our results to *_out.smi we now need to gather all of them up and sort them by our predictions. The resulting file will be > 40GB. To analyze the data further you can use dask, or put the data in an rdkit postgres cartridge.
Here I show how to join and sort the data to get the "best" results.
```bash
find /tmp/zinc -name '*_out.smi' -exec cat {} \; > /tmp/zinc/screen/results.smi
sort -rg -k 3,3 /tmp/zinc/screen/results.smi > /tmp/zinc/screen/sorted_results.smi
# Put the top 100k scoring molecules in their own file
head -n 100000 /tmp/zinc/screen/sorted_results.smi > /tmp/zinc/screen/top_100k.smi
```
/tmp/zinc/screen/top_100k.smi is now a small enough file to investigate using standard tools like pandas.
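For instance (a minimal sketch, not part of the original workflow; the three-column layout is an assumption based on the tab-separated lines written by inference.py above):
```python
import pandas as pd

# Assumed columns: SMILES string, ZINC ID, predicted score (tab-separated, as written by inference.py)
top = pd.read_csv('/tmp/zinc/screen/top_100k.smi', sep='\t', header=None,
                  names=['smiles', 'zinc_id', 'score'])
print(top.head())
print(top['score'].describe())
```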
End of explanation
"""
|
jseabold/statsmodels
|
examples/notebooks/tsa_arma_0.ipynb
|
bsd-3-clause
|
%matplotlib inline
import numpy as np
from scipy import stats
import pandas as pd
import matplotlib.pyplot as plt
import statsmodels.api as sm
from statsmodels.tsa.arima.model import ARIMA
from statsmodels.graphics.api import qqplot
"""
Explanation: Autoregressive Moving Average (ARMA): Sunspots data
End of explanation
"""
print(sm.datasets.sunspots.NOTE)
dta = sm.datasets.sunspots.load_pandas().data
dta.index = pd.Index(sm.tsa.datetools.dates_from_range('1700', '2008'))
del dta["YEAR"]
dta.plot(figsize=(12,8));
fig = plt.figure(figsize=(12,8))
ax1 = fig.add_subplot(211)
fig = sm.graphics.tsa.plot_acf(dta.values.squeeze(), lags=40, ax=ax1)
ax2 = fig.add_subplot(212)
fig = sm.graphics.tsa.plot_pacf(dta, lags=40, ax=ax2)
arma_mod20 = ARIMA(dta, order=(2, 0, 0)).fit()
print(arma_mod20.params)
arma_mod30 = ARIMA(dta, order=(3, 0, 0)).fit()
print(arma_mod20.aic, arma_mod20.bic, arma_mod20.hqic)
print(arma_mod30.params)
print(arma_mod30.aic, arma_mod30.bic, arma_mod30.hqic)
"""
Explanation: Sunspots Data
End of explanation
"""
sm.stats.durbin_watson(arma_mod30.resid.values)
fig = plt.figure(figsize=(12,8))
ax = fig.add_subplot(111)
ax = arma_mod30.resid.plot(ax=ax);
resid = arma_mod30.resid
stats.normaltest(resid)
fig = plt.figure(figsize=(12,8))
ax = fig.add_subplot(111)
fig = qqplot(resid, line='q', ax=ax, fit=True)
fig = plt.figure(figsize=(12,8))
ax1 = fig.add_subplot(211)
fig = sm.graphics.tsa.plot_acf(resid.values.squeeze(), lags=40, ax=ax1)
ax2 = fig.add_subplot(212)
fig = sm.graphics.tsa.plot_pacf(resid, lags=40, ax=ax2)
r,q,p = sm.tsa.acf(resid.values.squeeze(), fft=True, qstat=True)
data = np.c_[range(1,41), r[1:], q, p]
table = pd.DataFrame(data, columns=['lag', "AC", "Q", "Prob(>Q)"])
print(table.set_index('lag'))
"""
Explanation: Does our model obey the theory?
End of explanation
"""
predict_sunspots = arma_mod30.predict('1990', '2012', dynamic=True)
print(predict_sunspots)
def mean_forecast_err(y, yhat):
return y.sub(yhat).mean()
mean_forecast_err(dta.SUNACTIVITY, predict_sunspots)
"""
Explanation: This indicates a lack of fit.
In-sample dynamic prediction. How good does our model do?
End of explanation
"""
from statsmodels.tsa.arima_process import ArmaProcess
np.random.seed(1234)
# include zero-th lag
arparams = np.array([1, .75, -.65, -.55, .9])
maparams = np.array([1, .65])
"""
Explanation: Exercise: Can you obtain a better fit for the Sunspots model? (Hint: sm.tsa.AR has a method select_order)
Simulated ARMA(4,1): Model Identification is Difficult
End of explanation
"""
arma_t = ArmaProcess(arparams, maparams)
arma_t.isinvertible
arma_t.isstationary
"""
Explanation: Let's make sure this model is estimable.
End of explanation
"""
fig = plt.figure(figsize=(12,8))
ax = fig.add_subplot(111)
ax.plot(arma_t.generate_sample(nsample=50));
arparams = np.array([1, .35, -.15, .55, .1])
maparams = np.array([1, .65])
arma_t = ArmaProcess(arparams, maparams)
arma_t.isstationary
arma_rvs = arma_t.generate_sample(nsample=500, burnin=250, scale=2.5)
fig = plt.figure(figsize=(12,8))
ax1 = fig.add_subplot(211)
fig = sm.graphics.tsa.plot_acf(arma_rvs, lags=40, ax=ax1)
ax2 = fig.add_subplot(212)
fig = sm.graphics.tsa.plot_pacf(arma_rvs, lags=40, ax=ax2)
"""
Explanation: What does this mean?
End of explanation
"""
arma11 = ARIMA(arma_rvs, order=(1, 0, 1)).fit()
resid = arma11.resid
r,q,p = sm.tsa.acf(resid, fft=True, qstat=True)
data = np.c_[range(1,41), r[1:], q, p]
table = pd.DataFrame(data, columns=['lag', "AC", "Q", "Prob(>Q)"])
print(table.set_index('lag'))
arma41 = ARIMA(arma_rvs, order=(4, 0, 1)).fit()
resid = arma41.resid
r,q,p = sm.tsa.acf(resid, fft=True, qstat=True)
data = np.c_[range(1,41), r[1:], q, p]
table = pd.DataFrame(data, columns=['lag', "AC", "Q", "Prob(>Q)"])
print(table.set_index('lag'))
"""
Explanation: For mixed ARMA processes the Autocorrelation function is a mixture of exponentials and damped sine waves after (q-p) lags.
The partial autocorrelation function is a mixture of exponentials and dampened sine waves after (p-q) lags.
End of explanation
"""
macrodta = sm.datasets.macrodata.load_pandas().data
macrodta.index = pd.Index(sm.tsa.datetools.dates_from_range('1959Q1', '2009Q3'))
cpi = macrodta["cpi"]
"""
Explanation: Exercise: How good an in-sample prediction can you do for another series, say, CPI?
End of explanation
"""
fig = plt.figure(figsize=(12,8))
ax = fig.add_subplot(111)
ax = cpi.plot(ax=ax);
ax.legend();
"""
Explanation: Hint:
End of explanation
"""
print(sm.tsa.adfuller(cpi)[1])
"""
Explanation: P-value of the unit-root test, resoundingly rejects the null of a unit-root.
End of explanation
"""
|
ES-DOC/esdoc-jupyterhub
|
notebooks/inpe/cmip6/models/besm-2-7/atmos.ipynb
|
gpl-3.0
|
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'inpe', 'besm-2-7', 'atmos')
"""
Explanation: ES-DOC CMIP6 Model Properties - Atmos
MIP Era: CMIP6
Institute: INPE
Source ID: BESM-2-7
Topic: Atmos
Sub-Topics: Dynamical Core, Radiation, Turbulence Convection, Microphysics Precipitation, Cloud Scheme, Observation Simulation, Gravity Waves, Solar, Volcanos.
Properties: 156 (127 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:06
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
"""
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Authors
Set document authors
End of explanation
"""
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Contributors
Specify document contributors
End of explanation
"""
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
"""
Explanation: Document Publication
Specify document publication status
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: Document Table of Contents
1. Key Properties --> Overview
2. Key Properties --> Resolution
3. Key Properties --> Timestepping
4. Key Properties --> Orography
5. Grid --> Discretisation
6. Grid --> Discretisation --> Horizontal
7. Grid --> Discretisation --> Vertical
8. Dynamical Core
9. Dynamical Core --> Top Boundary
10. Dynamical Core --> Lateral Boundary
11. Dynamical Core --> Diffusion Horizontal
12. Dynamical Core --> Advection Tracers
13. Dynamical Core --> Advection Momentum
14. Radiation
15. Radiation --> Shortwave Radiation
16. Radiation --> Shortwave GHG
17. Radiation --> Shortwave Cloud Ice
18. Radiation --> Shortwave Cloud Liquid
19. Radiation --> Shortwave Cloud Inhomogeneity
20. Radiation --> Shortwave Aerosols
21. Radiation --> Shortwave Gases
22. Radiation --> Longwave Radiation
23. Radiation --> Longwave GHG
24. Radiation --> Longwave Cloud Ice
25. Radiation --> Longwave Cloud Liquid
26. Radiation --> Longwave Cloud Inhomogeneity
27. Radiation --> Longwave Aerosols
28. Radiation --> Longwave Gases
29. Turbulence Convection
30. Turbulence Convection --> Boundary Layer Turbulence
31. Turbulence Convection --> Deep Convection
32. Turbulence Convection --> Shallow Convection
33. Microphysics Precipitation
34. Microphysics Precipitation --> Large Scale Precipitation
35. Microphysics Precipitation --> Large Scale Cloud Microphysics
36. Cloud Scheme
37. Cloud Scheme --> Optical Cloud Properties
38. Cloud Scheme --> Sub Grid Scale Water Distribution
39. Cloud Scheme --> Sub Grid Scale Ice Distribution
40. Observation Simulation
41. Observation Simulation --> Isscp Attributes
42. Observation Simulation --> Cosp Attributes
43. Observation Simulation --> Radar Inputs
44. Observation Simulation --> Lidar Inputs
45. Gravity Waves
46. Gravity Waves --> Orographic Gravity Waves
47. Gravity Waves --> Non Orographic Gravity Waves
48. Solar
49. Solar --> Solar Pathways
50. Solar --> Solar Constant
51. Solar --> Orbital Parameters
52. Solar --> Insolation Ozone
53. Volcanos
54. Volcanos --> Volcanoes Treatment
1. Key Properties --> Overview
Top level key properties
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of atmosphere model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of atmosphere model code (CAM 4.0, ARPEGE 3.2,...)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_family')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "AGCM"
# "ARCM"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.3. Model Family
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of atmospheric model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.basic_approximations')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "primitive equations"
# "non-hydrostatic"
# "anelastic"
# "Boussinesq"
# "hydrostatic"
# "quasi-hydrostatic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.4. Basic Approximations
Is Required: TRUE Type: ENUM Cardinality: 1.N
Basic approximations made in the atmosphere.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.horizontal_resolution_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2. Key Properties --> Resolution
Characteristics of the model resolution
2.1. Horizontal Resolution Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of the model grid, e.g. T42, N48.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2.2. Canonical Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Expression quoted for gross comparisons of resolution, e.g. 2.5 x 3.75 degrees lat-lon.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.range_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2.3. Range Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Range of horizontal resolution with spatial details, eg. 1 deg (Equator) - 0.5 deg
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 2.4. Number Of Vertical Levels
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of vertical levels resolved on the computational grid.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.high_top')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 2.5. High Top
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the atmosphere have a high-top? High-Top atmospheres have a fully resolved stratosphere with a model top above the stratopause.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_dynamics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 3. Key Properties --> Timestepping
Characteristics of the atmosphere model time stepping
3.1. Timestep Dynamics
Is Required: TRUE Type: STRING Cardinality: 1.1
Timestep for the dynamics, e.g. 30 min.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_shortwave_radiative_transfer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 3.2. Timestep Shortwave Radiative Transfer
Is Required: FALSE Type: STRING Cardinality: 0.1
Timestep for the shortwave radiative transfer, e.g. 1.5 hours.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_longwave_radiative_transfer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 3.3. Timestep Longwave Radiative Transfer
Is Required: FALSE Type: STRING Cardinality: 0.1
Timestep for the longwave radiative transfer, e.g. 3 hours.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.orography.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "present day"
# "modified"
# TODO - please enter value(s)
"""
Explanation: 4. Key Properties --> Orography
Characteristics of the model orography
4.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time adaptation of the orography.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.orography.changes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "related to ice sheets"
# "related to tectonics"
# "modified mean"
# "modified variance if taken into account in model (cf gravity waves)"
# TODO - please enter value(s)
"""
Explanation: 4.2. Changes
Is Required: TRUE Type: ENUM Cardinality: 1.N
If the orography type is modified describe the time adaptation changes.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5. Grid --> Discretisation
Atmosphere grid discretisation
5.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of grid discretisation in the atmosphere
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "spectral"
# "fixed grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 6. Grid --> Discretisation --> Horizontal
Atmosphere discretisation in the horizontal
6.1. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal discretisation type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "finite elements"
# "finite volumes"
# "finite difference"
# "centered finite difference"
# TODO - please enter value(s)
"""
Explanation: 6.2. Scheme Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal discretisation method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "second"
# "third"
# "fourth"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 6.3. Scheme Order
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal discretisation function order
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.horizontal_pole')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "filter"
# "pole rotation"
# "artificial island"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 6.4. Horizontal Pole
Is Required: FALSE Type: ENUM Cardinality: 0.1
Horizontal discretisation pole singularity treatment
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.grid_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Gaussian"
# "Latitude-Longitude"
# "Cubed-Sphere"
# "Icosahedral"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 6.5. Grid Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal grid type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.vertical.coordinate_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "isobaric"
# "sigma"
# "hybrid sigma-pressure"
# "hybrid pressure"
# "vertically lagrangian"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 7. Grid --> Discretisation --> Vertical
Atmosphere discretisation in the vertical
7.1. Coordinate Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Type of vertical coordinate system
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8. Dynamical Core
Characteristics of the dynamical core
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of atmosphere dynamical core
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.2. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the dynamical core of the model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.timestepping_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Adams-Bashforth"
# "explicit"
# "implicit"
# "semi-implicit"
# "leap frog"
# "multi-step"
# "Runge Kutta fifth order"
# "Runge Kutta second order"
# "Runge Kutta third order"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 8.3. Timestepping Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Timestepping framework type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "surface pressure"
# "wind components"
# "divergence/curl"
# "temperature"
# "potential temperature"
# "total water"
# "water vapour"
# "water liquid"
# "water ice"
# "total water moments"
# "clouds"
# "radiation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 8.4. Prognostic Variables
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of the model prognostic variables
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_boundary_condition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sponge layer"
# "radiation boundary condition"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 9. Dynamical Core --> Top Boundary
Type of boundary layer at the top of the model
9.1. Top Boundary Condition
Is Required: TRUE Type: ENUM Cardinality: 1.1
Top boundary condition
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_heat')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.2. Top Heat
Is Required: TRUE Type: STRING Cardinality: 1.1
Top boundary heat treatment
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_wind')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.3. Top Wind
Is Required: TRUE Type: STRING Cardinality: 1.1
Top boundary wind treatment
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.lateral_boundary.condition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sponge layer"
# "radiation boundary condition"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 10. Dynamical Core --> Lateral Boundary
Type of lateral boundary condition (if the model is a regional model)
10.1. Condition
Is Required: FALSE Type: ENUM Cardinality: 0.1
Type of lateral boundary condition
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.diffusion_horizontal.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11. Dynamical Core --> Diffusion Horizontal
Horizontal diffusion scheme
11.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Horizontal diffusion scheme name
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.diffusion_horizontal.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "iterated Laplacian"
# "bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 11.2. Scheme Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal diffusion scheme method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Heun"
# "Roe and VanLeer"
# "Roe and Superbee"
# "Prather"
# "UTOPIA"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 12. Dynamical Core --> Advection Tracers
Tracer advection scheme
12.1. Scheme Name
Is Required: FALSE Type: ENUM Cardinality: 0.1
Tracer advection scheme name
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.scheme_characteristics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Eulerian"
# "modified Euler"
# "Lagrangian"
# "semi-Lagrangian"
# "cubic semi-Lagrangian"
# "quintic semi-Lagrangian"
# "mass-conserving"
# "finite volume"
# "flux-corrected"
# "linear"
# "quadratic"
# "quartic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 12.2. Scheme Characteristics
Is Required: TRUE Type: ENUM Cardinality: 1.N
Tracer advection scheme characteristics
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.conserved_quantities')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "dry mass"
# "tracer mass"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 12.3. Conserved Quantities
Is Required: TRUE Type: ENUM Cardinality: 1.N
Tracer advection scheme conserved quantities
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.conservation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "conservation fixer"
# "Priestley algorithm"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 12.4. Conservation Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Tracer advection scheme conservation method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "VanLeer"
# "Janjic"
# "SUPG (Streamline Upwind Petrov-Galerkin)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13. Dynamical Core --> Advection Momentum
Momentum advection scheme
13.1. Scheme Name
Is Required: FALSE Type: ENUM Cardinality: 0.1
Momentum advection schemes name
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_characteristics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "2nd order"
# "4th order"
# "cell-centred"
# "staggered grid"
# "semi-staggered grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.2. Scheme Characteristics
Is Required: TRUE Type: ENUM Cardinality: 1.N
Momentum advection scheme characteristics
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_staggering_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Arakawa B-grid"
# "Arakawa C-grid"
# "Arakawa D-grid"
# "Arakawa E-grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.3. Scheme Staggering Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Momentum advection scheme staggering type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.conserved_quantities')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Angular momentum"
# "Horizontal momentum"
# "Enstrophy"
# "Mass"
# "Total energy"
# "Vorticity"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.4. Conserved Quantities
Is Required: TRUE Type: ENUM Cardinality: 1.N
Momentum advection scheme conserved quantities
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.conservation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "conservation fixer"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.5. Conservation Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Momentum advection scheme conservation method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.aerosols')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sulphate"
# "nitrate"
# "sea salt"
# "dust"
# "ice"
# "organic"
# "BC (black carbon / soot)"
# "SOA (secondary organic aerosols)"
# "POM (particulate organic matter)"
# "polar stratospheric ice"
# "NAT (nitric acid trihydrate)"
# "NAD (nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particle)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 14. Radiation
Characteristics of the atmosphere radiation process
14.1. Aerosols
Is Required: TRUE Type: ENUM Cardinality: 1.N
Aerosols whose radiative effect is taken into account in the atmosphere model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15. Radiation --> Shortwave Radiation
Properties of the shortwave radiation scheme
15.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of shortwave radiation in the atmosphere
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15.2. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the shortwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.spectral_integration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "wide-band model"
# "correlated-k"
# "exponential sum fitting"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.3. Spectral Integration
Is Required: TRUE Type: ENUM Cardinality: 1.1
Shortwave radiation scheme spectral integration
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.transport_calculation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "two-stream"
# "layer interaction"
# "bulk"
# "adaptive"
# "multi-stream"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.4. Transport Calculation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Shortwave radiation transport calculation methods
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.spectral_intervals')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 15.5. Spectral Intervals
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Shortwave radiation scheme number of spectral intervals
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.greenhouse_gas_complexity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CO2"
# "CH4"
# "N2O"
# "CFC-11 eq"
# "CFC-12 eq"
# "HFC-134a eq"
# "Explicit ODSs"
# "Explicit other fluorinated gases"
# "O3"
# "H2O"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 16. Radiation --> Shortwave GHG
Representation of greenhouse gases in the shortwave radiation scheme
16.1. Greenhouse Gas Complexity
Is Required: TRUE Type: ENUM Cardinality: 1.N
Complexity of greenhouse gases whose shortwave radiative effects are taken into account in the atmosphere model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.ODS')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CFC-12"
# "CFC-11"
# "CFC-113"
# "CFC-114"
# "CFC-115"
# "HCFC-22"
# "HCFC-141b"
# "HCFC-142b"
# "Halon-1211"
# "Halon-1301"
# "Halon-2402"
# "methyl chloroform"
# "carbon tetrachloride"
# "methyl chloride"
# "methylene chloride"
# "chloroform"
# "methyl bromide"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 16.2. ODS
Is Required: FALSE Type: ENUM Cardinality: 0.N
Ozone depleting substances whose shortwave radiative effects are explicitly taken into account in the atmosphere model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.other_flourinated_gases')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "HFC-134a"
# "HFC-23"
# "HFC-32"
# "HFC-125"
# "HFC-143a"
# "HFC-152a"
# "HFC-227ea"
# "HFC-236fa"
# "HFC-245fa"
# "HFC-365mfc"
# "HFC-43-10mee"
# "CF4"
# "C2F6"
# "C3F8"
# "C4F10"
# "C5F12"
# "C6F14"
# "C7F16"
# "C8F18"
# "c-C4F8"
# "NF3"
# "SF6"
# "SO2F2"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 16.3. Other Flourinated Gases
Is Required: FALSE Type: ENUM Cardinality: 0.N
Other flourinated gases whose shortwave radiative effects are explicitly taken into account in the atmosphere model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17. Radiation --> Shortwave Cloud Ice
Shortwave radiative properties of ice crystals in clouds
17.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General shortwave radiative interactions with cloud ice crystals
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bi-modal size distribution"
# "ensemble of ice crystals"
# "mean projected area"
# "ice water path"
# "crystal asymmetry"
# "crystal aspect ratio"
# "effective crystal radius"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of cloud ice crystals in the shortwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to cloud ice crystals in the shortwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 18. Radiation --> Shortwave Cloud Liquid
Shortwave radiative properties of liquid droplets in clouds
18.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General shortwave radiative interactions with cloud liquid droplets
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud droplet number concentration"
# "effective cloud droplet radii"
# "droplet size distribution"
# "liquid water path"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 18.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of cloud liquid droplets in the shortwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "geometric optics"
# "Mie theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 18.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to cloud liquid droplets in the shortwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_inhomogeneity.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Monte Carlo Independent Column Approximation"
# "Triplecloud"
# "analytic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 19. Radiation --> Shortwave Cloud Inhomogeneity
Cloud inhomogeneity in the shortwave radiation scheme
19.1. Cloud Inhomogeneity
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method for taking into account horizontal cloud inhomogeneity
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 20. Radiation --> Shortwave Aerosols
Shortwave radiative properties of aerosols
20.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General shortwave radiative interactions with aerosols
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "number concentration"
# "effective radii"
# "size distribution"
# "asymmetry"
# "aspect ratio"
# "mixing state"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 20.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of aerosols in the shortwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 20.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to aerosols in the shortwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_gases.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 21. Radiation --> Shortwave Gases
Shortwave radiative properties of gases
21.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General shortwave radiative interactions with gases
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 22. Radiation --> Longwave Radiation
Properties of the longwave radiation scheme
22.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of longwave radiation in the atmosphere
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 22.2. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the longwave radiation scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.spectral_integration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "wide-band model"
# "correlated-k"
# "exponential sum fitting"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 22.3. Spectral Integration
Is Required: TRUE Type: ENUM Cardinality: 1.1
Longwave radiation scheme spectral integration
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.transport_calculation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "two-stream"
# "layer interaction"
# "bulk"
# "adaptive"
# "multi-stream"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 22.4. Transport Calculation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Longwave radiation transport calculation methods
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.spectral_intervals')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 22.5. Spectral Intervals
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Longwave radiation scheme number of spectral intervals
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.greenhouse_gas_complexity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CO2"
# "CH4"
# "N2O"
# "CFC-11 eq"
# "CFC-12 eq"
# "HFC-134a eq"
# "Explicit ODSs"
# "Explicit other fluorinated gases"
# "O3"
# "H2O"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 23. Radiation --> Longwave GHG
Representation of greenhouse gases in the longwave radiation scheme
23.1. Greenhouse Gas Complexity
Is Required: TRUE Type: ENUM Cardinality: 1.N
Complexity of greenhouse gases whose longwave radiative effects are taken into account in the atmosphere model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.ODS')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CFC-12"
# "CFC-11"
# "CFC-113"
# "CFC-114"
# "CFC-115"
# "HCFC-22"
# "HCFC-141b"
# "HCFC-142b"
# "Halon-1211"
# "Halon-1301"
# "Halon-2402"
# "methyl chloroform"
# "carbon tetrachloride"
# "methyl chloride"
# "methylene chloride"
# "chloroform"
# "methyl bromide"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 23.2. ODS
Is Required: FALSE Type: ENUM Cardinality: 0.N
Ozone depleting substances whose longwave radiative effects are explicitly taken into account in the atmosphere model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.other_flourinated_gases')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "HFC-134a"
# "HFC-23"
# "HFC-32"
# "HFC-125"
# "HFC-143a"
# "HFC-152a"
# "HFC-227ea"
# "HFC-236fa"
# "HFC-245fa"
# "HFC-365mfc"
# "HFC-43-10mee"
# "CF4"
# "C2F6"
# "C3F8"
# "C4F10"
# "C5F12"
# "C6F14"
# "C7F16"
# "C8F18"
# "c-C4F8"
# "NF3"
# "SF6"
# "SO2F2"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 23.3. Other Fluorinated Gases
Is Required: FALSE Type: ENUM Cardinality: 0.N
Other fluorinated gases whose longwave radiative effects are explicitly taken into account in the atmosphere model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 24. Radiation --> Longwave Cloud Ice
Longwave radiative properties of ice crystals in clouds
24.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General longwave radiative interactions with cloud ice crystals
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.physical_reprenstation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bi-modal size distribution"
# "ensemble of ice crystals"
# "mean projected area"
# "ice water path"
# "crystal asymmetry"
# "crystal aspect ratio"
# "effective crystal radius"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 24.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of cloud ice crystals in the longwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 24.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to cloud ice crystals in the longwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 25. Radiation --> Longwave Cloud Liquid
Longwave radiative properties of liquid droplets in clouds
25.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General longwave radiative interactions with cloud liquid droplets
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud droplet number concentration"
# "effective cloud droplet radii"
# "droplet size distribution"
# "liquid water path"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 25.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of cloud liquid droplets in the longwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "geometric optics"
# "Mie theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 25.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to cloud liquid droplets in the longwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_inhomogeneity.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Monte Carlo Independent Column Approximation"
# "Triplecloud"
# "analytic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 26. Radiation --> Longwave Cloud Inhomogeneity
Cloud inhomogeneity in the longwave radiation scheme
26.1. Cloud Inhomogeneity
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method for taking into account horizontal cloud inhomogeneity
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 27. Radiation --> Longwave Aerosols
Longwave radiative properties of aerosols
27.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General longwave radiative interactions with aerosols
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "number concentration"
# "effective radii"
# "size distribution"
# "asymmetry"
# "aspect ratio"
# "mixing state"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 27.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of aerosols in the longwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 27.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to aerosols in the longwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_gases.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 28. Radiation --> Longwave Gases
Longwave radiative properties of gases
28.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General longwave radiative interactions with gases
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 29. Turbulence Convection
Atmosphere Convective Turbulence and Clouds
29.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of atmosphere convection and turbulence
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Mellor-Yamada"
# "Holtslag-Boville"
# "EDMF"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 30. Turbulence Convection --> Boundary Layer Turbulence
Properties of the boundary layer turbulence scheme
30.1. Scheme Name
Is Required: FALSE Type: ENUM Cardinality: 0.1
Boundary layer turbulence scheme name
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TKE prognostic"
# "TKE diagnostic"
# "TKE coupled with water"
# "vertical profile of Kz"
# "non-local diffusion"
# "Monin-Obukhov similarity"
# "Coastal Buddy Scheme"
# "Coupled with convection"
# "Coupled with gravity waves"
# "Depth capped at cloud base"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 30.2. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Boundary layer turbulence scheme type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 30.3. Closure Order
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Boundary layer turbulence scheme closure order
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.counter_gradient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
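# Illustrative sketch only (hypothetical choice): BOOLEAN properties take the
# bare Python literals listed above, e.g.
# DOC.set_value(True)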
"""
Explanation: 30.4. Counter Gradient
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Uses boundary layer turbulence scheme counter gradient
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 31. Turbulence Convection --> Deep Convection
Properties of the deep convection scheme
31.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Deep convection scheme name
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mass-flux"
# "adjustment"
# "plume ensemble"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 31.2. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Deep convection scheme type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CAPE"
# "bulk"
# "ensemble"
# "CAPE/WFN based"
# "TKE/CIN based"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 31.3. Scheme Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Deep convection scheme method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vertical momentum transport"
# "convective momentum transport"
# "entrainment"
# "detrainment"
# "penetrative convection"
# "updrafts"
# "downdrafts"
# "radiative effect of anvils"
# "re-evaporation of convective precipitation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 31.4. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical processes taken into account in the parameterisation of deep convection
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.microphysics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "tuning parameter based"
# "single moment"
# "two moment"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 31.5. Microphysics
Is Required: FALSE Type: ENUM Cardinality: 0.N
Microphysics scheme for deep convection. Microphysical processes directly control the amount of detrainment of cloud hydrometeors and water vapor from updrafts
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 32. Turbulence Convection --> Shallow Convection
Properties of the shallow convection scheme
32.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Shallow convection scheme name
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mass-flux"
# "cumulus-capped boundary layer"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 32.2. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Shallow convection scheme type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "same as deep (unified)"
# "included in boundary layer turbulence"
# "separate diagnosis"
# TODO - please enter value(s)
"""
Explanation: 32.3. Scheme Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Shallow convection scheme method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "convective momentum transport"
# "entrainment"
# "detrainment"
# "penetrative convection"
# "re-evaporation of convective precipitation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 32.4. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical processes taken into account in the parameterisation of shallow convection
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.microphysics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "tuning parameter based"
# "single moment"
# "two moment"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 32.5. Microphysics
Is Required: FALSE Type: ENUM Cardinality: 0.N
Microphysics scheme for shallow convection
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 33. Microphysics Precipitation
Large Scale Cloud Microphysics and Precipitation
33.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of large scale cloud microphysics and precipitation
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_precipitation.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 34. Microphysics Precipitation --> Large Scale Precipitation
Properties of the large scale precipitation scheme
34.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name of the large scale precipitation parameterisation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_precipitation.hydrometeors')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "liquid rain"
# "snow"
# "hail"
# "graupel"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 34.2. Hydrometeors
Is Required: TRUE Type: ENUM Cardinality: 1.N
Precipitating hydrometeors taken into account in the large scale precipitation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_cloud_microphysics.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 35. Microphysics Precipitation --> Large Scale Cloud Microphysics
Properties of the large scale cloud microphysics scheme
35.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name of the microphysics parameterisation scheme used for large scale clouds.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_cloud_microphysics.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mixed phase"
# "cloud droplets"
# "cloud ice"
# "ice nucleation"
# "water vapour deposition"
# "effect of raindrops"
# "effect of snow"
# "effect of graupel"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 35.2. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Large scale cloud microphysics processes
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 36. Cloud Scheme
Characteristics of the cloud scheme
36.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of the atmosphere cloud scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 36.2. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the cloud scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.atmos_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "atmosphere_radiation"
# "atmosphere_microphysics_precipitation"
# "atmosphere_turbulence_convection"
# "atmosphere_gravity_waves"
# "atmosphere_solar"
# "atmosphere_volcano"
# "atmosphere_cloud_simulator"
# TODO - please enter value(s)
"""
Explanation: 36.3. Atmos Coupling
Is Required: FALSE Type: ENUM Cardinality: 0.N
Atmosphere components that are linked to the cloud scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.uses_separate_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 36.4. Uses Separate Treatment
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Different cloud schemes for the different types of clouds (convective, stratiform and boundary layer)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "entrainment"
# "detrainment"
# "bulk cloud"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 36.5. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Processes included in the cloud scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.prognostic_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 36.6. Prognostic Scheme
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the cloud scheme a prognostic scheme?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.diagnostic_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 36.7. Diagnostic Scheme
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the cloud scheme a diagnostic scheme?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud amount"
# "liquid"
# "ice"
# "rain"
# "snow"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 36.8. Prognostic Variables
Is Required: FALSE Type: ENUM Cardinality: 0.N
List the prognostic variables used by the cloud scheme, if applicable.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.optical_cloud_properties.cloud_overlap_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "random"
# "maximum"
# "maximum-random"
# "exponential"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 37. Cloud Scheme --> Optical Cloud Properties
Optical cloud properties
37.1. Cloud Overlap Method
Is Required: FALSE Type: ENUM Cardinality: 0.1
Method for taking into account overlapping of cloud layers
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.optical_cloud_properties.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 37.2. Cloud Inhomogeneity
Is Required: FALSE Type: STRING Cardinality: 0.1
Method for taking into account cloud inhomogeneity
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# TODO - please enter value(s)
"""
Explanation: 38. Cloud Scheme --> Sub Grid Scale Water Distribution
Sub-grid scale water distribution
38.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Sub-grid scale water distribution type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.function_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 38.2. Function Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Sub-grid scale water distribution function name
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.function_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 38.3. Function Order
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Sub-grid scale water distribution function order
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.convection_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "coupled with deep"
# "coupled with shallow"
# "not coupled with convection"
# TODO - please enter value(s)
"""
Explanation: 38.4. Convection Coupling
Is Required: TRUE Type: ENUM Cardinality: 1.N
Sub-grid scale water distribution coupling with convection
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# TODO - please enter value(s)
"""
Explanation: 39. Cloud Scheme --> Sub Grid Scale Ice Distribution
Sub-grid scale ice distribution
39.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Sub-grid scale ice distribution type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.function_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 39.2. Function Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Sub-grid scale ice distribution function name
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.function_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 39.3. Function Order
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Sub-grid scale ice distribution function order
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.convection_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "coupled with deep"
# "coupled with shallow"
# "not coupled with convection"
# TODO - please enter value(s)
"""
Explanation: 39.4. Convection Coupling
Is Required: TRUE Type: ENUM Cardinality: 1.N
Sub-grid scale ice distribution coupling with convection
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 40. Observation Simulation
Characteristics of observation simulation
40.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of observation simulator characteristics
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.isscp_attributes.top_height_estimation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "no adjustment"
# "IR brightness"
# "visible optical depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 41. Observation Simulation --> Isscp Attributes
ISSCP Characteristics
41.1. Top Height Estimation Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Cloud simulator ISSCP top height estimation method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.isscp_attributes.top_height_direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "lowest altitude level"
# "highest altitude level"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 41.2. Top Height Direction
Is Required: TRUE Type: ENUM Cardinality: 1.1
Cloud simulator ISSCP top height direction
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.run_configuration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Inline"
# "Offline"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 42. Observation Simulation --> Cosp Attributes
CFMIP Observational Simulator Package attributes
42.1. Run Configuration
Is Required: TRUE Type: ENUM Cardinality: 1.1
Cloud simulator COSP run configuration
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_grid_points')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 42.2. Number Of Grid Points
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Cloud simulator COSP number of grid points
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_sub_columns')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 42.3. Number Of Sub Columns
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Cloud simulator COSP number of sub-columns used to simulate sub-grid variability
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 42.4. Number Of Levels
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Cloud simulator COSP number of levels
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.frequency')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
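# Illustrative sketch only (hypothetical instrument): a 94 GHz cloud radar,
# expressed in Hz as required above, would be entered as
# DOC.set_value(94.0e9)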
"""
Explanation: 43. Observation Simulation --> Radar Inputs
Characteristics of the cloud radar simulator
43.1. Frequency
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Cloud simulator radar frequency (Hz)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "surface"
# "space borne"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 43.2. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Cloud simulator radar type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.gas_absorption')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 43.3. Gas Absorption
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Cloud simulator radar uses gas absorption
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.effective_radius')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 43.4. Effective Radius
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Cloud simulator radar uses effective radius
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.lidar_inputs.ice_types')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "ice spheres"
# "ice non-spherical"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 44. Observation Simulation --> Lidar Inputs
Characteristics of the cloud lidar simulator
44.1. Ice Types
Is Required: TRUE Type: ENUM Cardinality: 1.1
Cloud simulator lidar ice type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.lidar_inputs.overlap')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "max"
# "random"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 44.2. Overlap
Is Required: TRUE Type: ENUM Cardinality: 1.N
Cloud simulator lidar overlap
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 45. Gravity Waves
Characteristics of the parameterised gravity waves in the atmosphere, whether from orography or other sources.
45.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of gravity wave parameterisation in the atmosphere
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.sponge_layer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Rayleigh friction"
# "Diffusive sponge layer"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 45.2. Sponge Layer
Is Required: TRUE Type: ENUM Cardinality: 1.1
Sponge layer in the upper levels in order to avoid gravity wave reflection at the top.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "continuous spectrum"
# "discrete spectrum"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 45.3. Background
Is Required: TRUE Type: ENUM Cardinality: 1.1
Background wave distribution
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.subgrid_scale_orography')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "effect on drag"
# "effect on lifting"
# "enhanced topography"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 45.4. Subgrid Scale Orography
Is Required: TRUE Type: ENUM Cardinality: 1.N
Subgrid scale orography effects taken into account.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 46. Gravity Waves --> Orographic Gravity Waves
Gravity waves generated due to the presence of orography
46.1. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the orographic gravity wave scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.source_mechanisms')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear mountain waves"
# "hydraulic jump"
# "envelope orography"
# "low level flow blocking"
# "statistical sub-grid scale variance"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 46.2. Source Mechanisms
Is Required: TRUE Type: ENUM Cardinality: 1.N
Orographic gravity wave source mechanisms
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.calculation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "non-linear calculation"
# "more than two cardinal directions"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 46.3. Calculation Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Orographic gravity wave calculation method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.propagation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear theory"
# "non-linear theory"
# "includes boundary layer ducting"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 46.4. Propagation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Orographic gravity wave propagation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.dissipation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "total wave"
# "single wave"
# "spectral"
# "linear"
# "wave saturation vs Richardson number"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 46.5. Dissipation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Orographic gravity wave dissipation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 47. Gravity Waves --> Non Orographic Gravity Waves
Gravity waves generated by non-orographic processes.
47.1. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the non-orographic gravity wave scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.source_mechanisms')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "convection"
# "precipitation"
# "background spectrum"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 47.2. Source Mechanisms
Is Required: TRUE Type: ENUM Cardinality: 1.N
Non-orographic gravity wave source mechanisms
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.calculation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "spatially dependent"
# "temporally dependent"
# TODO - please enter value(s)
"""
Explanation: 47.3. Calculation Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Non-orographic gravity wave calculation method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.propagation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear theory"
# "non-linear theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 47.4. Propagation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Non-orographic gravity wave propagation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.dissipation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "total wave"
# "single wave"
# "spectral"
# "linear"
# "wave saturation vs Richardson number"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 47.5. Dissipation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Non-orographic gravity wave dissipation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 48. Solar
Top of atmosphere solar insolation characteristics
48.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of solar insolation of the atmosphere
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_pathways.pathways')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "SW radiation"
# "precipitating energetic particles"
# "cosmic rays"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 49. Solar --> Solar Pathways
Pathways for solar forcing of the atmosphere
49.1. Pathways
Is Required: TRUE Type: ENUM Cardinality: 1.N
Pathways for the solar forcing of the atmosphere model domain
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "transient"
# TODO - please enter value(s)
"""
Explanation: 50. Solar --> Solar Constant
Solar constant and top of atmosphere insolation characteristics
50.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time adaptation of the solar constant.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.fixed_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
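# Illustrative sketch only (hypothetical value): a fixed solar constant of
# 1361 W m-2 would be entered as
# DOC.set_value(1361.0)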
"""
Explanation: 50.2. Fixed Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If the solar constant is fixed, enter the value of the solar constant (W m-2).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.transient_characteristics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 50.3. Transient Characteristics
Is Required: TRUE Type: STRING Cardinality: 1.1
Solar constant transient characteristics (W m-2)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "transient"
# TODO - please enter value(s)
"""
Explanation: 51. Solar --> Orbital Parameters
Orbital parameters and top of atmosphere insolation characteristics
51.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time adaptation of orbital parameters
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.fixed_reference_date')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
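# Illustrative sketch only (hypothetical year): the reference date is entered
# as a plain four-digit integer, e.g.
# DOC.set_value(1850)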
"""
Explanation: 51.2. Fixed Reference Date
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Reference date for fixed orbital parameters (yyyy)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.transient_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 51.3. Transient Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Description of transient orbital parameters
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.computation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Berger 1978"
# "Laskar 2004"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 51.4. Computation Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method used for computing orbital parameters.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.insolation_ozone.solar_ozone_impact')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 52. Solar --> Insolation Ozone
Impact of solar insolation on stratospheric ozone
52.1. Solar Ozone Impact
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does top of atmosphere insolation impact on stratospheric ozone?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.volcanos.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 53. Volcanos
Characteristics of the implementation of volcanoes
53.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of the implementation of volcanic effects in the atmosphere
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.volcanos.volcanoes_treatment.volcanoes_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "high frequency solar constant anomaly"
# "stratospheric aerosols optical thickness"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 54. Volcanos --> Volcanoes Treatment
Treatment of volcanoes in the atmosphere
54.1. Volcanoes Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How volcanic effects are modeled in the atmosphere.
End of explanation
"""
|
meli-lewis/rp_hajcak-foti
|
RP_Hajcak_Foti.ipynb
|
mit
|
# required imports for this analysis (assumed here: pandas, qgrid, scipy, seaborn, IPython)
import pandas as pd
import qgrid
from scipy import stats
from scipy.stats import pearsonr
import seaborn as sns
from IPython.display import Image

data = pd.read_csv("rp.csv")
qgrid.show_grid(data, remote_js=True)
# subset trials depending on whether participant made an error,
# made an error in the previous trial ('predict'), or
# was correct in current and previous trial ('unpred')
error_trials = data[data['startle_type'] == 'error']
pred_trials = data[data['startle_type'] == 'predict']
unpred_trials = data[data['startle_type'] == 'unpred']
# restructure dataframe for python analysis
pred_error = pd.merge(error_trials, pred_trials, how="outer", on=["participant","gender","ERN","EPS","errors"], suffixes=('_error', '_pred'))
# further restructuring
final = pd.merge(pred_error, unpred_trials, how="outer", on=["participant","gender","ERN","EPS","errors"], suffixes=('_', '_unpred'))
final.rename(columns={'startle':'startle_unpred','startle_peak':'startle_peak_unpred'},inplace=True)
# drop superfluous columns naming startle type
final.drop(['startle_type_error','startle_type_pred','startle_type'], axis=1, inplace=True)
# NA in participant 21's unpred startle trial excludes it from some analyses
error_trial = error_trials[error_trials.participant != 21]
pred_trials = pred_trials[pred_trials.participant != 21]
unpred_trials = unpred_trials[unpred_trials.participant != 21]
final = final[final.participant != 21]
# mean error-potentiated startle (EPS) amplitude
round(final['EPS'].mean(),2)
#standard error of the mean
round(stats.sem(final['EPS']),2)
# mean difference between error and correct trials
(final['startle_error'] - final['startle_unpred']).mean()
round(stats.sem(final['startle_error'] - final['startle_unpred']),2)
#main finding using one trial type for appropriate DF
corr_data = data[['ERN','EPS']]
corr_data.corr(method='pearson', min_periods=1)
"""
Explanation: The purpose of this notebook is to document data analysis for a Reproducibility Study conducted in collaboration with the Center for Open Science and in fulfillment of my undergraduate thesis at Reed College.
The original article was "Errors are aversive: defensive motivation and the error-related negativity."
End of explanation
"""
correct = pd.read_csv('data/rep_Cor_all.txt',delim_whitespace=True)
incorrect = pd.read_csv('data/rep_Inc_All.txt',delim_whitespace=True)
# strip the '_repCor' / '_repInc' suffix characters from the filenames, leaving the numeric participant ID
correct['File'] = correct['File'].apply(lambda x: x.translate(str.maketrans('', '', '_repCor')))
incorrect['File'] = incorrect['File'].apply(lambda x: x.translate(str.maketrans('', '', '_repInc')))
incorrect = incorrect[['File','2-rep_Inc']]
correct = correct[['File','2-rep_Cor']]
erp = pd.merge(correct, incorrect, on='File')
erp.rename(columns={'File':'participant','2-rep_Cor':'correct','2-rep_Inc':'error'},inplace=True)
erp['participant'] = erp['participant'].apply(lambda x: int(x))
erp['ERN'] = erp['error']-erp['correct']
erp.sort_values('participant', inplace=True)
# difference between ERPs on correct vs error trials
stats.ttest_rel(erp['correct'], erp['error'])
#mean ERN amplitude
round((erp['error'] - erp['correct']).mean(),2)
# ERN amplitude SEM
round(stats.sem(erp['error'] - erp['correct']),2)
"""
Explanation: calculation of ERN from correct vs error ERPs
End of explanation
"""
Image(url="http://www.frontiersin.org/files/Articles/82577/fnhum-08-00064-HTML/image_m/fnhum-08-00064-g001.jpg")
"""
Explanation: The main finding of the article replicated is A in the following figure.
End of explanation
"""
sns.jointplot(error_trials['ERN'],error_trials['EPS'],kind="reg",stat_func=pearsonr, color = "slategray")
"""
Explanation: I failed to replicate this finding:
End of explanation
"""
high_amplitude = final[final['ERN'] < -6.91]
sns.jointplot(high_amplitude['ERN'],high_amplitude['EPS'],kind="reg",stat_func=pearsonr, color = "slategray")
"""
Explanation: However, the original author also found a more robust subsequent finding by dividing participants along median ERN and assessing correlation among those whose amplitudes were relatively high. We performed the same analysis
End of explanation
"""
high_amp_san_outlier = high_amplitude[high_amplitude['participant'] != 1]
sns.jointplot(high_amp_san_outlier['ERN'],high_amp_san_outlier['EPS'],kind="reg",stat_func=pearsonr, color = "slategray")
"""
Explanation: and also found greater significance (greater correlation, p < .05). However, to investigate whether the outlier, participant 1, had any bearing on this result, we removed that participant's data and reran the analysis:
End of explanation
"""
collaboration = pd.read_csv('data/collaboration.csv')
collaboration.rename(columns={'ERN (window difference wave)':'ERN','Error-potentiated startle':'EPS'},inplace=True)
sns.jointplot(collaboration['ERN'],collaboration['EPS'],kind="reg",stat_func=pearsonr, color = "slategray")
"""
Explanation: and found the correlation diminished again.
The original author was communicative and eager to compare findings, and so offered his original ERN and EPS data to pool with our own:
End of explanation
"""
collab_sans_outlier = collaboration[collaboration['ERN'] > -20]
plot = sns.jointplot(collab_sans_outlier['ERN'],collab_sans_outlier['EPS'],kind="reg",stat_func=pearsonr, color = "slategray")
"""
Explanation: The correlation is present here, p < .05, but by coincidence we each had one participant with unusually high ERN amplitudes. We analyzed the correlation with these removed:
End of explanation
"""
|
ES-DOC/esdoc-jupyterhub
|
notebooks/test-institute-3/cmip6/models/sandbox-3/aerosol.ipynb
|
gpl-3.0
|
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'test-institute-3', 'sandbox-3', 'aerosol')
"""
Explanation: ES-DOC CMIP6 Model Properties - Aerosol
MIP Era: CMIP6
Institute: TEST-INSTITUTE-3
Source ID: SANDBOX-3
Topic: Aerosol
Sub-Topics: Transport, Emissions, Concentrations, Optical Radiative Properties, Model.
Properties: 69 (37 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:46
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
"""
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
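# Illustrative sketch only (hypothetical author):
# DOC.set_author("Jane Doe", "jane.doe@example.org")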
"""
Explanation: Document Authors
Set document authors
End of explanation
"""
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Contributors
Specify document contributors
End of explanation
"""
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
"""
Explanation: Document Publication
Specify document publication status
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Key Properties --> Timestep Framework
4. Key Properties --> Meteorological Forcings
5. Key Properties --> Resolution
6. Key Properties --> Tuning Applied
7. Transport
8. Emissions
9. Concentrations
10. Optical Radiative Properties
11. Optical Radiative Properties --> Absorption
12. Optical Radiative Properties --> Mixtures
13. Optical Radiative Properties --> Impact Of H2o
14. Optical Radiative Properties --> Radiative Scheme
15. Optical Radiative Properties --> Cloud Interactions
16. Model
1. Key Properties
Key properties of the aerosol model
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of aerosol model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of aerosol model code
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.scheme_scope')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "troposhere"
# "stratosphere"
# "mesosphere"
# "mesosphere"
# "whole atmosphere"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.3. Scheme Scope
Is Required: TRUE Type: ENUM Cardinality: 1.N
Atmospheric domains covered by the aerosol model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.basic_approximations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.4. Basic Approximations
Is Required: TRUE Type: STRING Cardinality: 1.1
Basic approximations made in the aerosol model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.prognostic_variables_form')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "3D mass/volume ratio for aerosols"
# "3D number concenttration for aerosols"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.5. Prognostic Variables Form
Is Required: TRUE Type: ENUM Cardinality: 1.N
Prognostic variables in the aerosol model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.number_of_tracers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 1.6. Number Of Tracers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of tracers in the aerosol model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.family_approach')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 1.7. Family Approach
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are aerosol calculations generalized into families of species?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2. Key Properties --> Software Properties
Software properties of aerosol code
2.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses atmospheric chemistry time stepping"
# "Specific timestepping (operator splitting)"
# "Specific timestepping (integrated)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 3. Key Properties --> Timestep Framework
Timestepping framework of the aerosol model
3.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Mathematical method deployed to solve the time evolution of the prognostic variables
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.split_operator_advection_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 3.2. Split Operator Advection Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for aerosol advection (in seconds)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.split_operator_physical_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 3.3. Split Operator Physical Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for aerosol physics (in seconds).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.integrated_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 3.4. Integrated Timestep
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Timestep for the aerosol model (in seconds)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.integrated_scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Implicit"
# "Semi-implicit"
# "Semi-analytic"
# "Impact solver"
# "Back Euler"
# "Newton Raphson"
# "Rosenbrock"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 3.5. Integrated Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify the type of timestep scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.variables_3D')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4. Key Properties --> Meteorological Forcings
4.1. Variables 3D
Is Required: FALSE Type: STRING Cardinality: 0.1
Three dimensional forcing variables, e.g. U, V, W, T, Q, P, convective mass flux
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.variables_2D')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4.2. Variables 2D
Is Required: FALSE Type: STRING Cardinality: 0.1
Two dimensional forcing variables, e.g. land-sea mask definition
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.frequency')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 4.3. Frequency
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Frequency with which meteorological forcings are applied (in seconds).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5. Key Properties --> Resolution
Resolution in the aerosol model grid
5.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
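# Illustrative sketch only (hypothetical resolution), following the examples
# given in the description below:
# DOC.set_value("50 km")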
"""
Explanation: 5.2. Canonical Horizontal Resolution
Is Required: FALSE Type: STRING Cardinality: 0.1
Expression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 5.3. Number Of Horizontal Gridpoints
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Total number of horizontal (XY) points (or degrees of freedom) on computational grid.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 5.4. Number Of Vertical Levels
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Number of vertical levels resolved on computational grid.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
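# Example only (hypothetical entry) -- a fixed (non-adaptive) grid would be documented as
# DOC.set_value(False)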
"""
Explanation: 5.5. Is Adaptive Grid
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Default is False. Set true if grid resolution changes during execution.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6. Key Properties --> Tuning Applied
Tuning methodology for aerosol model
6.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process-oriented metrics, and the possible conflicts with parameterization-level tuning. In particular, describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.2. Global Mean Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List set of metrics of the global mean state used in tuning model/component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.3. Regional Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List of regional metrics of mean state used in tuning model/component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.4. Trend Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List observed trend metrics used in tuning model/component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7. Transport
Aerosol transport
7.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of transport in the atmospheric aerosol model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Specific transport scheme (eulerian)"
# "Specific transport scheme (semi-lagrangian)"
# "Specific transport scheme (eulerian and semi-lagrangian)"
# "Specific transport scheme (lagrangian)"
# TODO - please enter value(s)
"""
Explanation: 7.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method for aerosol transport modeling
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.mass_conservation_scheme')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Mass adjustment"
# "Concentrations positivity"
# "Gradients monotonicity"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 7.3. Mass Conservation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.N
Method used to ensure mass conservation.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.convention')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Convective fluxes connected to tracers"
# "Vertical velocities connected to tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 7.4. Convention
Is Required: TRUE Type: ENUM Cardinality: 1.N
Transport by convection
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8. Emissions
Atmospheric aerosol emissions
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of emissions in the atmospheric aerosol model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Prescribed (climatology)"
# "Prescribed CMIP6"
# "Prescribed above surface"
# "Interactive"
# "Interactive above surface"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 8.2. Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Method used to define aerosol species (several methods allowed because the different species may not use the same method).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.sources')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Vegetation"
# "Volcanos"
# "Bare ground"
# "Sea surface"
# "Lightning"
# "Fires"
# "Aircraft"
# "Anthropogenic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 8.3. Sources
Is Required: FALSE Type: ENUM Cardinality: 0.N
Sources of the aerosol species that are taken into account in the emissions scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_climatology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Interannual"
# "Annual"
# "Monthly"
# "Daily"
# TODO - please enter value(s)
"""
Explanation: 8.4. Prescribed Climatology
Is Required: FALSE Type: ENUM Cardinality: 0.1
Specify the climatology type for aerosol emissions
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_climatology_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.5. Prescribed Climatology Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and prescribed via a climatology
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_spatially_uniform_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.6. Prescribed Spatially Uniform Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and prescribed as spatially uniform
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.interactive_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.7. Interactive Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and specified via an interactive method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.other_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.8. Other Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and specified via an "other method"
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.other_method_characteristics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.9. Other Method Characteristics
Is Required: FALSE Type: STRING Cardinality: 0.1
Characteristics of the "other method" used for aerosol emissions
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9. Concentrations
Atmospheric aerosol concentrations
9.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of concentrations in the atmospheric aerosol model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_lower_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.2. Prescribed Lower Boundary
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed at the lower boundary.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_upper_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.3. Prescribed Upper Boundary
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed at the upper boundary.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_fields_mmr')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.4. Prescribed Fields Mmr
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed as mass mixing ratios.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_fields_aod_plus_ccn')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.5. Prescribed Fields Aod Plus Ccn
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed as AOD plus CCNs.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 10. Optical Radiative Properties
Aerosol optical and radiative properties
10.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of optical and radiative properties
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.black_carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 11. Optical Radiative Properties --> Absorption
Absorption properties in the aerosol scheme
11.1. Black Carbon
Is Required: FALSE Type: FLOAT Cardinality: 0.1
Absorption mass coefficient of black carbon at 550nm (if non-absorbing enter 0)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.dust')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 11.2. Dust
Is Required: FALSE Type: FLOAT Cardinality: 0.1
Absorption mass coefficient of dust at 550nm (if non-absorbing enter 0)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.organics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 11.3. Organics
Is Required: FALSE Type: FLOAT Cardinality: 0.1
Absorption mass coefficient of organics at 550nm (if non-absorbing enter 0)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.external')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 12. Optical Radiative Properties --> Mixtures
**
12.1. External
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there external mixing with respect to chemical composition?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.internal')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 12.2. Internal
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there internal mixing with respect to chemical composition?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.mixing_rule')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 12.3. Mixing Rule
Is Required: FALSE Type: STRING Cardinality: 0.1
If there is internal mixing with respect to chemical composition then indicate the mixing rule
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.size')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 13. Optical Radiative Properties --> Impact Of H2o
**
13.1. Size
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does H2O impact size?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.internal_mixture')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 13.2. Internal Mixture
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does H2O impact internal mixture?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 14. Optical Radiative Properties --> Radiative Scheme
Radiative scheme for aerosol
14.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of radiative scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.shortwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 14.2. Shortwave Bands
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of shortwave bands
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.longwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 14.3. Longwave Bands
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of longwave bands
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15. Optical Radiative Properties --> Cloud Interactions
Aerosol-cloud interactions
15.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of aerosol-cloud interactions
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.twomey')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 15.2. Twomey
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the Twomey effect included?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.twomey_minimum_ccn')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 15.3. Twomey Minimum Ccn
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If the Twomey effect is included, then what is the minimum CCN number?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.drizzle')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 15.4. Drizzle
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the scheme affect drizzle?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.cloud_lifetime')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 15.5. Cloud Lifetime
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the scheme affect cloud lifetime?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.longwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 15.6. Longwave Bands
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of longwave bands
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 16. Model
Aerosol model
16.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the atmospheric aerosol model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Dry deposition"
# "Sedimentation"
# "Wet deposition (impaction scavenging)"
# "Wet deposition (nucleation scavenging)"
# "Coagulation"
# "Oxidation (gas phase)"
# "Oxidation (in cloud)"
# "Condensation"
# "Ageing"
# "Advection (horizontal)"
# "Advection (vertical)"
# "Heterogeneous chemistry"
# "Nucleation"
# TODO - please enter value(s)
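# Example only (hypothetical entry; this is a 1.N property, so each applicable
# process from the list above is documented), e.g.
# DOC.set_value("Dry deposition")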
"""
Explanation: 16.2. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Processes included in the Aerosol model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Radiation"
# "Land surface"
# "Heterogeneous chemistry"
# "Clouds"
# "Ocean"
# "Cryosphere"
# "Gas phase chemistry"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 16.3. Coupling
Is Required: FALSE Type: ENUM Cardinality: 0.N
Other model components coupled to the Aerosol model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.gas_phase_precursors')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "DMS"
# "SO2"
# "Ammonia"
# "Iodine"
# "Terpene"
# "Isoprene"
# "VOC"
# "NOx"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 16.4. Gas Phase Precursors
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of gas phase aerosol precursors.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Bulk"
# "Modal"
# "Bin"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 16.5. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Type(s) of aerosol scheme used by the aerosols model (potentially multiple: some species may be covered by one type of aerosol scheme and other species covered by another type).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.bulk_scheme_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sulphate"
# "Nitrate"
# "Sea salt"
# "Dust"
# "Ice"
# "Organic"
# "Black carbon / soot"
# "SOA (secondary organic aerosols)"
# "POM (particulate organic matter)"
# "Polar stratospheric ice"
# "NAT (Nitric acid trihydrate)"
# "NAD (Nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particule)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 16.6. Bulk Scheme Species
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of species covered by the bulk scheme.
End of explanation
"""
|
natashabatalha/PandExo
|
notebooks/JWST_Analyzing_Pandexo.ipynb
|
gpl-3.0
|
#load in output from run
out = pk.load(open('singlerun.p','rb'))
#for a single run
x,y, e = jpi.jwst_1d_spec(out, R=100, num_tran=1, model=False, x_range=[1,12])
"""
Explanation: Plot 1D Data with Errorbars
Multiple plotting options exist within jwst_1d_spec
1. Plot a single run
End of explanation
"""
#load in output from multiple runs
multi = pk.load(open('three_nirspec_modes.p','rb'))
#get into list format
list_multi = [multi[0]['NIRSpec G140M'], multi[1]['NIRSpec G235M'], multi[2]['NIRSpec G395M']]
x,y,e = jpi.jwst_1d_spec(list_multi, R=100, model=False, x_range=[1,5])
"""
Explanation: 2. Plot several runs from parameters space run
End of explanation
"""
x,y = jpi.jwst_noise(out)
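# The related helpers mentioned in the text (jwst_1d_bkg, jwst_1d_snr,
# jwst_1d_flux) are assumed here to follow the same calling pattern as
# jwst_noise -- sketch only, signatures not verified:
# bkg_x, bkg_y = jpi.jwst_1d_bkg(out)
# snr_x, snr_y = jpi.jwst_1d_snr(out)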
"""
Explanation: Plot Noise
Several functions exist to plot various outputs.
See jwst_1d_bkg, jwst_1d_snr, and jwst_1d_flux.
End of explanation
"""
data = jpi.jwst_2d_sat(out)
"""
Explanation: Plot 2D Saturation Profile
See also jwst_2d_sat to plot saturation profile
End of explanation
"""
|
jswoboda/GeoDataPython
|
Examples/MadrigalExample1.ipynb
|
mit
|
%matplotlib inline
import matplotlib
import os
import scipy as sp
import matplotlib.pyplot as plt
from GeoData.GeoData import GeoData
from GeoData.utilityfuncs import readMad_hdf5
from GeoData.plotting import rangevsparam, rangevstime
"""
Explanation: Using GeoData With Madrigal
This notebook gives an example of how to use GeoData plotting functions with data from Madrigal. The example takes ISR data, plots the plasma parameters over range and time for a single beam, and then plots the values from that beam at a single time.
End of explanation
"""
madfile = os.path.join( 'pfa140105.004.hdf5')
data1 = GeoData(readMad_hdf5,[madfile,['nel','te','ti','vo','dnel','dte','dti','dvo']])
data1.data['ne']=sp.power(10.,data1.data['nel'])
data1.data['dne']=sp.power(10.,data1.data['dnel'])
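# 'nel'/'dnel' are log10 electron density and its error; the two lines above
# convert them to linear scale and store them as 'ne' and 'dne'.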
"""
Explanation: Read in Data
The first step is to read the data into the GeoData format. The user does this by creating a GeoData instance, which takes as input the read function, in this case readMad_hdf5, and a list of inputs. The subsequent lines take the log-scaled electron density and convert it to a linear scale.
End of explanation
"""
fig1,axmat =plt.subplots(2,2,figsize = (12,9),sharey=True,sharex=True)
axvec = axmat.flatten()
desired_beam = data1.dataloc[0,1:]
rangevstime(data1,desired_beam,vbounds=[1e9,8e11],gkey='ne',fig=fig1,ax=axvec[0])
rangevstime(data1,desired_beam,vbounds=[500.,2500.],gkey='te',fig=fig1,ax=axvec[1])
rangevstime(data1,desired_beam,vbounds=[500.,2500.],gkey='ti',fig=fig1,ax=axvec[2])
rangevstime(data1,desired_beam,vbounds=[-50.,50.],gkey='vo',fig=fig1,ax=axvec[3])
plt.tight_layout()
plt.subplots_adjust(top=0.9)
"""
Explanation: Range Time Plots
This set of plots will show the fitted plasma parameter values as a function of range and time. The specific beam chosen is pointed toward zenith so the slant range can be thought of as altitude.
End of explanation
"""
fig2,axmat2 =plt.subplots(2,2,figsize = (12,9),sharey=True,sharex=False)
axvec2 = axmat2.flatten()
paramlist = ['ne','te','ti','vo']
time1=335
handvec = []
pbounds = [[1e10,8e11],[200.,3500.],[200.,3500.],[-100.,100.]]
for inum, iax in enumerate(axvec2):
    ploth = rangevsparam(data1,desired_beam,time1,gkey=paramlist[inum],gkeyerr='d'+paramlist[inum],fig=fig2,ax=iax,it=False)
iax.grid(True)
iax.set_xlim(pbounds[inum])
handvec.append(ploth)
plt.tight_layout()
"""
Explanation: Range vs Parameter Plots
Now plotting data from a single time.
End of explanation
"""
|
statsmodels/statsmodels.github.io
|
v0.13.2/examples/notebooks/generated/statespace_cycles.ipynb
|
bsd-3-clause
|
%matplotlib inline
import numpy as np
import pandas as pd
import statsmodels.api as sm
import matplotlib.pyplot as plt
from pandas_datareader.data import DataReader
endog = DataReader('UNRATE', 'fred', start='1954-01-01')
endog.index.freq = endog.index.inferred_freq
"""
Explanation: Trends and cycles in unemployment
Here we consider three methods for separating a trend and cycle in economic data. Supposing we have a time series $y_t$, the basic idea is to decompose it into these two components:
$$
y_t = \mu_t + \eta_t
$$
where $\mu_t$ represents the trend or level and $\eta_t$ represents the cyclical component. In this case, we consider a stochastic trend, so that $\mu_t$ is a random variable and not a deterministic function of time. Two of the methods fall under the heading of "unobserved components" models, and the third is the popular Hodrick-Prescott (HP) filter. Consistent with e.g. Harvey and Jaeger (1993), we find that these models all produce similar decompositions.
This notebook demonstrates applying these models to separate trend from cycle in the U.S. unemployment rate.
End of explanation
"""
hp_cycle, hp_trend = sm.tsa.filters.hpfilter(endog, lamb=129600)
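# For reference, the commonly used smoothing values (Ravn and Uhlig's frequency
# rule) are lamb=1600 for quarterly data, lamb=129600 for monthly data and
# lamb=6.25 for annual data; only the monthly value is used in this notebook.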
"""
Explanation: Hodrick-Prescott (HP) filter
The first method is the Hodrick-Prescott filter, which can be applied to a data series in a very straightforward way. Here we specify the parameter $\lambda=129600$ because the unemployment rate is observed monthly.
End of explanation
"""
mod_ucarima = sm.tsa.UnobservedComponents(endog, 'rwalk', autoregressive=4)
# Here the powell method is used, since it achieves a
# higher loglikelihood than the default L-BFGS method
res_ucarima = mod_ucarima.fit(method='powell', disp=False)
print(res_ucarima.summary())
"""
Explanation: Unobserved components and ARIMA model (UC-ARIMA)
The next method is an unobserved components model, where the trend is modeled as a random walk and the cycle is modeled with an ARIMA model - in particular, here we use an AR(4) model. The process for the time series can be written as:
$$
\begin{align}
y_t & = \mu_t + \eta_t \\
\mu_{t+1} & = \mu_t + \epsilon_{t+1} \\
\phi(L) \eta_t & = \nu_t
\end{align}
$$
where $\phi(L)$ is the AR(4) lag polynomial and $\epsilon_t$ and $\nu_t$ are white noise.
End of explanation
"""
mod_uc = sm.tsa.UnobservedComponents(
endog, 'rwalk',
cycle=True, stochastic_cycle=True, damped_cycle=True,
)
# Here the powell method gets close to the optimum
res_uc = mod_uc.fit(method='powell', disp=False)
# but to get to the highest loglikelihood we do a
# second round using the L-BFGS method.
res_uc = mod_uc.fit(res_uc.params, disp=False)
print(res_uc.summary())
"""
Explanation: Unobserved components with stochastic cycle (UC)
The final method is also an unobserved components model, but where the cycle is modeled explicitly.
$$
\begin{align}
y_t & = \mu_t + \eta_t \\
\mu_{t+1} & = \mu_t + \epsilon_{t+1} \\
\eta_{t+1} & = \eta_t \cos \lambda_\eta + \eta_t^* \sin \lambda_\eta + \tilde \omega_t \qquad & \tilde \omega_t \sim N(0, \sigma_{\tilde \omega}^2) \\
\eta_{t+1}^* & = -\eta_t \sin \lambda_\eta + \eta_t^* \cos \lambda_\eta + \tilde \omega_t^* \qquad & \tilde \omega_t^* \sim N(0, \sigma_{\tilde \omega}^2)
\end{align}
$$
End of explanation
"""
fig, axes = plt.subplots(2, figsize=(13,5));
axes[0].set(title='Level/trend component')
axes[0].plot(endog.index, res_uc.level.smoothed, label='UC')
axes[0].plot(endog.index, res_ucarima.level.smoothed, label='UC-ARIMA(2,0)')
axes[0].plot(hp_trend, label='HP Filter')
axes[0].legend(loc='upper left')
axes[0].grid()
axes[1].set(title='Cycle component')
axes[1].plot(endog.index, res_uc.cycle.smoothed, label='UC')
axes[1].plot(endog.index, res_ucarima.autoregressive.smoothed, label='UC-ARIMA(2,0)')
axes[1].plot(hp_cycle, label='HP Filter')
axes[1].legend(loc='upper left')
axes[1].grid()
fig.tight_layout();
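# Rough numerical check of the claim in the accompanying text that the HP trend
# is the most variable of the three (sketch only; the smoothed arrays come from
# the results objects fit above):
for name, trend in [('HP filter', np.asarray(hp_trend)),
                    ('UC-ARIMA', np.asarray(res_ucarima.level.smoothed)),
                    ('UC', np.asarray(res_uc.level.smoothed))]:
    print('{:9s} std of trend first differences: {:.4f}'.format(
        name, np.diff(trend.ravel()).std()))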
"""
Explanation: Graphical comparison
The output of each of these models is an estimate of the trend component $\mu_t$ and an estimate of the cyclical component $\eta_t$. Qualitatively the estimates of trend and cycle are very similar, although the trend component from the HP filter is somewhat more variable than those from the unobserved components models. This means that relatively more of the movement in the unemployment rate is attributed to changes in the underlying trend rather than to temporary cyclical movements.
End of explanation
"""
|
ethen8181/machine-learning
|
model_selection/imbalanced/imbalanced_metrics.ipynb
|
mit
|
# code for loading the format for the notebook
import os
# path : store the current path to convert back to it later
path = os.getcwd()
os.chdir(os.path.join('..', '..', 'notebook_format'))
from formats import load_style
load_style(plot_style=False)
os.chdir(path)
# 1. magic for inline plot
# 2. magic to print version
# 3. magic so that the notebook will reload external python modules
# 4. magic to enable retina (high resolution) plots
# https://gist.github.com/minrk/3301035
%matplotlib inline
%load_ext watermark
%load_ext autoreload
%autoreload 2
%config InlineBackend.figure_format='retina'
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.preprocessing import StandardScaler
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import precision_recall_curve, roc_curve
from sklearn.metrics import precision_score, recall_score, f1_score
%watermark -a 'Ethen' -d -t -v -p numpy,pandas,sklearn,matplotlib
"""
Explanation: <h1>Table of Contents<span class="tocSkip"></span></h1>
<div class="toc"><ul class="toc-item"><li><span><a href="#Evaluating-Imbalanced-Datasets" data-toc-modified-id="Evaluating-Imbalanced-Datasets-1"><span class="toc-item-num">1 </span>Evaluating Imbalanced Datasets</a></span><ul class="toc-item"><li><span><a href="#Dataset" data-toc-modified-id="Dataset-1.1"><span class="toc-item-num">1.1 </span>Dataset</a></span></li><li><span><a href="#Class-Weighting" data-toc-modified-id="Class-Weighting-1.2"><span class="toc-item-num">1.2 </span>Class Weighting</a></span></li><li><span><a href="#F1-Score" data-toc-modified-id="F1-Score-1.3"><span class="toc-item-num">1.3 </span>F1 Score</a></span></li><li><span><a href="#Conclusion" data-toc-modified-id="Conclusion-1.4"><span class="toc-item-num">1.4 </span>Conclusion</a></span></li></ul></li><li><span><a href="#Reference" data-toc-modified-id="Reference-2"><span class="toc-item-num">2 </span>Reference</a></span></li></ul></div>
End of explanation
"""
filepath = os.path.join('data', 'creditcard.csv')
df = pd.read_csv(filepath)
print('dimension: ', df.shape)
df.head()
"""
Explanation: Evaluating Imbalanced Datasets
This documentation illustrates the trade-off between true positive rate and false positive rate using ROC and Precision/Recall (PR) curves. In the end, we will look at why, for a binary classification problem, we should not rely solely on the popular ROC curve but should also look at other evaluation metrics such as precision and recall, especially when working with a highly imbalanced dataset.
Dataset
The dataset we'll be using today can be downloaded from the Kaggle website.
End of explanation
"""
df['hour'] = np.ceil(df['Time'].values / 3600) % 24
fraud_over_hour = df.pivot_table(values='Amount', index='hour', columns='Class', aggfunc='count')
fraud_over_hour
plt.rcParams['font.size'] = 12
plt.rcParams['figure.figsize'] = 8, 6
plt.plot(fraud_over_hour[1])
plt.title('Fraudulent Transaction over Hour')
plt.ylabel('Fraudulent Count')
plt.xlabel('Hour')
plt.show()
# prepare the dataset for modeling;
# extract the features and labels, perform a quick train/test split
label = df['Class']
pca_cols = [col for col in df.columns if col.startswith('V')]
input_cols = ['hour', 'Amount'] + pca_cols
df = df[input_cols]
df_train, df_test, y_train, y_test = train_test_split(
df, label, stratify=label, test_size=0.35, random_state=1)
print('training data dimension:', df_train.shape)
df_train.head()
# we'll be using linear models later, hence
# we standardize our features to ensure they are
# all at the same scale
standardize = StandardScaler()
X_train = standardize.fit_transform(df_train)
X_test = standardize.transform(df_test)
label_distribution = np.bincount(label) / label.size
print('labels distribution:', label_distribution)
print('Fraud is {}% of our data'.format(label_distribution[1] * 100))
"""
Explanation: A brief description of the dataset based on the data overview section from the download source.
The dataset contains transactions made by credit cards in September 2013 by European cardholders. This dataset presents transactions that occurred in two days, where we have 492 frauds out of 284,807 transactions.
Unfortunately, due to confidentiality issues, we cannot provide the original features and more background information about the data. Features V1, V2, ... V28 are the principal components obtained with PCA, the only features which have not been transformed with PCA are 'Time' and 'Amount'. Feature 'Time' contains the seconds elapsed between each transaction and the first transaction in the dataset. The feature 'Amount' is the transaction amount. Feature 'Class' is the response variable and it takes value 1 in case of fraud and 0 otherwise.
The only feature engineering we'll do for now is to convert the feature "Time" (seconds elapsed since the very first observation in the data) to the hour of the day. While we're at it, let's take a look at a breakdown of legitimate versus fraudulent transactions via a pivot table, and plot the fraudulent transaction count over time for a quick exploratory data analysis.
End of explanation
"""
fig = plt.figure(figsize=(15, 8))
ax1 = fig.add_subplot(1, 2, 1)
ax1.set_xlim([-0.05, 1.05])
ax1.set_ylim([-0.05, 1.05])
ax1.set_xlabel('Recall')
ax1.set_ylabel('Precision')
ax1.set_title('PR Curve')
ax2 = fig.add_subplot(1, 2, 2)
ax2.set_xlim([-0.05, 1.05])
ax2.set_ylim([-0.05, 1.05])
ax2.set_xlabel('False Positive Rate')
ax2.set_ylabel('True Positive Rate')
ax2.set_title('ROC Curve')
f1_scores = []
recall_scores = []
precision_scores = []
pos_weights = [1, 10, 25, 50, 100, 10000]
for pos_weight in pos_weights:
lr_model = LogisticRegression(class_weight={0: 1, 1: pos_weight})
lr_model.fit(X_train, y_train)
# plot the precision-recall curve and AUC curve
pred_prob = lr_model.predict_proba(X_test)[:, 1]
precision, recall, _ = precision_recall_curve(y_test, pred_prob)
    fpr, tpr, _ = roc_curve(y_test, pred_prob)
    ax1.plot(recall, precision, label=pos_weight)
    ax2.plot(fpr, tpr, label=pos_weight)
# track the precision, recall and f1 score
pred = lr_model.predict(X_test)
f1_test = f1_score(y_test, pred)
recall_test = recall_score(y_test, pred)
precision_test = precision_score(y_test, pred)
f1_scores.append(f1_test)
recall_scores.append(recall_test)
precision_scores.append(precision_test)
ax1.legend(loc='lower left')
ax2.legend(loc='lower right')
plt.show()
"""
Explanation: Class Weighting
With scikit-learn, we can give higher weights to the minority class (the model is penalized more when it misclassifies the minority class) by modifying the class_weight argument during model initialization. Let's see what effect this has on our model. The following code chunk manually selects a range of weights to boost the minority class and tracks various metrics to see the model's performance across different class weighting values.
Note that the following section assumes knowledge of model performance metrics such as precision, recall and AUC. The following link contains resources on those concepts if needed. Notebook: AUC (Area under the ROC curve and precision/recall curve) from scratch
End of explanation
"""
def score_barplot(precision_scores, recall_scores, f1_scores, pos_weights, figsize=(8, 6)):
"""Visualize precision/recall/f1 score at different class weighting values."""
width = 0.3
ind = np.arange(len(precision_scores))
fig = plt.figure(figsize=figsize)
ax = fig.add_subplot(111)
b1 = ax.bar(ind, precision_scores, width, color='lightskyblue')
b2 = ax.bar(ind + width, recall_scores, width, color='lightcoral')
b3 = ax.bar(ind + (2 * width), f1_scores, width, color='gold')
ax.set_xticks(ind + width)
ax.set_xticklabels(pos_weights)
ax.set_ylabel('score')
ax.set_xlabel('positive weights')
ax.set_ylim(0, 1.3)
ax.legend(handles=[b1, b2, b3], labels=['precision', 'recall', 'f1'])
plt.tight_layout()
plt.show()
score_barplot(precision_scores, recall_scores, f1_scores, pos_weights)
"""
Explanation: A good classifier would have a PR (Precision/Recall) curve closer to the upper-right corner and a ROC curve closer to the upper-left corner. Based on the plot above, we can see that while both curves use the same underlying data, i.e. the real class labels and the predicted probabilities, the two charts can tell different stories, with some weights appearing to perform better in the precision/recall chart.
To be explicit, different settings of the class_weight argument all seem to perform pretty well on the ROC curve, but some perform poorly on the PR curve. This is because one of the ROC curve's axes shows the false positive rate (number of false positives / total number of negatives), and this ratio won't change much when the total number of negatives is extremely large, whereas the PR curve's precision axis (number of true positives / total number of predicted positives) is much less affected by this. A small numeric illustration of this point follows below.
Another way to visualize the model's performance metrics is to use a bar plot of the precision/recall/f1 scores at different class weighting values.
End of explanation
"""
# this code chunk shows the same idea applies when using tree-based models
f1_scores = []
recall_scores = []
precision_scores = []
pos_weights = [1, 10, 100, 10000]
for pos_weight in pos_weights:
rf_model = RandomForestClassifier(n_estimators=50, max_depth=6, n_jobs=-1,
class_weight={0: 1, 1: pos_weight})
rf_model.fit(df_train, y_train)
# track the precision, recall and f1 score
pred = rf_model.predict(df_test)
f1_test = f1_score(y_test, pred)
recall_test = recall_score(y_test, pred)
precision_test = precision_score(y_test, pred)
f1_scores.append(f1_test)
recall_scores.append(recall_test)
precision_scores.append(precision_test)
score_barplot(precision_scores, recall_scores, f1_scores, pos_weights)
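# The f1 score used above is the harmonic mean of precision and recall; a quick
# sanity check for the best-performing weight tracked in this chunk:
best = int(np.argmax(f1_scores))
print('best positive class weight:', pos_weights[best])
print('f1 recomputed from precision/recall: {:.3f}'.format(
    2 * precision_scores[best] * recall_scores[best]
    / (precision_scores[best] + recall_scores[best])))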
"""
Explanation: Judging from the plot above, we can see that when the weight value is set to 10, we seem to have struck a good balance between precision and recall (this setting has the highest f1 score; we'll have a deeper discussion on the f1 score in the next section), where our model detects roughly 80% of the fraudulent transactions while not annoying a bunch of customers with false positives. Another observation is that if we were to set the class weighting value to 10,000, we would increase our recall score at the expense of more misclassified legitimate cases (as reflected by the low precision score).
End of explanation
"""
|
willsa14/ras2las
|
curvefit/CNN_log-data_extraction.ipynb
|
mit
|
import pylab as plt
# %matplotlib inline
import numpy as np
"""
Explanation: Using CNN to extract data from plots
We'll start by making synthetic images of plots that look like "real" log plots
Train a 5-6 layer CNN using Keras
Return 20 inferred points from the RGB image that is fed in (the idea is to segment the log into chunks)
Repeat until it works
TODO:
- Use OCR output to scale points extracted to physical units
- Expand the number of points extracted (from 20 to 100s?)
- Test/Generate other types of log plots.
End of explanation
"""
def gen_im(n=1):
rn = np.random.uniform(low=0, high=1, size=(20,))
# Make a random plot...
fig = plt.figure()
ax = fig.add_subplot(111)
ax.plot(rn,"r")
ax.grid()
DPI = fig.get_dpi()
fig.set_size_inches(150/float(DPI),150/float(DPI))
# fig.set_size_inches(3,3)
fig.canvas.draw()
# Now we can save it to a numpy array.
    data = np.frombuffer(fig.canvas.tostring_rgb(), dtype=np.uint8)
# data = data.reshape(fig.canvas.get_width_height()[::-1] + (3,))
plt.close('all')
return data, rn, fig.canvas.get_width_height()[::-1]
_,_,size = gen_im(n=1)
"""
Explanation: Synthetic image generation function:
End of explanation
"""
def batch_gen(n=1):
for i in range(n):
if i==0:
arr, vals, size = gen_im()
else:
tmparr, tmpvals, size = gen_im()
arr = np.vstack((arr,tmparr))
vals = np.vstack((vals,tmpvals))
arr = arr.reshape(arr.shape[0], *size, 3)[:,:,:,:]
return arr, vals
"""
Explanation: Batch image generation function:
End of explanation
"""
x_train, y_train = batch_gen(n=1000)
x_test, y_test = batch_gen(n=100)
"""
Explanation: Let's start with 1000 training samples and 100 test samples
End of explanation
"""
plt.imshow(x_train[0,:].reshape(*size,3))
plt.show()
"""
Explanation: We reshaped the images to be square just because it's easier. Here's how they look:
End of explanation
"""
import keras
from keras.layers import Dense, Flatten, Dropout, Conv2D, MaxPooling2D
from keras.models import Sequential
from keras.layers.convolutional import ZeroPadding2D
from keras.layers import Activation
from keras.optimizers import SGD
import matplotlib.pylab as plt
batch_size = 1
num_classes = 20
epochs = 10
# input image dimensions
img_x, img_y = size
input_shape = (img_x, img_y, 3)
print('x_train shape:', x_train.shape)
filter_size = 3
pool_size = 2
model = Sequential([
Conv2D(32, (6, 6), input_shape=input_shape, data_format="channels_last", activation='relu'),
MaxPooling2D(pool_size=(pool_size, pool_size)),
Conv2D(64, (filter_size, filter_size), data_format="channels_last", activation='relu'),
MaxPooling2D(pool_size=(pool_size, pool_size)),
Conv2D(128, (filter_size, filter_size), data_format="channels_last", activation='relu'),
# # MaxPooling2D(pool_size=(pool_size, pool_size)),
Conv2D(128, (filter_size, filter_size), data_format="channels_last", activation='relu'),
# # MaxPooling2D(pool_size=(pool_size, pool_size)),
Flatten(),
# Dropout(0.4),
# Dense(1000, activation='relu'),
Dropout(0.4),
Dense(num_classes, activation='linear'),
])
model.compile('adadelta', 'mse')
"""
Explanation: CNN
Let's implement CNNs until they work!
End of explanation
"""
history = model.fit(x_train, y_train,
                    batch_size=batch_size,
                    epochs=epochs,
                    verbose=1,
                    validation_data=(x_test, y_test))
score = model.evaluate(x_test, y_test, verbose=0)
print('Test loss:', score)
"""
Explanation: Train
End of explanation
"""
model.predict(x_test[0].reshape(1,*size, 3))
plt.plot(model.predict(x_test)[0])
plt.plot(y_test[0])
plt.show()
"""
Explanation: Previous run took ~10mins
Results so far...
End of explanation
"""
from PIL import Image
im = Image.open("log_crop_smaller.png")
im
im = im.resize((150,150)).rotate(90)
im
im_arr = np.array(im.getdata())[:,:-1].reshape(1, im.size[1], im.size[0], 3)
plt.plot(model.predict(im_arr)[0])
plt.show()
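# Sketch for the TODO at the top about scaling extracted points to physical
# units: once OCR (or a human) supplies the axis limits of the real log, the
# 0-1 network output can be rescaled linearly. The limits below are made up.
# y_min, y_max = 0.0, 150.0                      # hypothetical axis limits
# physical = y_min + model.predict(im_arr)[0] * (y_max - y_min)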
"""
Explanation: Not bad at all!
For our last trick before we run out of time, a real image!
End of explanation
"""
fig, (ax0, ax1) = plt.subplots(nrows=2, sharex=True)
ax0.imshow(im, extent=(-2, 19, 0, 1), aspect=4)
ax0.set_title('Real Log')
ax0.set_xlim(-3,20)
ax1.plot(model.predict(im_arr)[0],'r')
ax1.set_title('Predicted Log')
ax1.set_ylim(0,1)
plt.show()
"""
Explanation: Trying to line things up:
End of explanation
"""
|
tensorflow/docs-l10n
|
site/zh-cn/tfx/tutorials/data_validation/tfdv_basic.ipynb
|
apache-2.0
|
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Explanation: Copyright © 2020 The TensorFlow Authors.
End of explanation
"""
try:
import colab
!pip install --upgrade pip
except:
pass
"""
Explanation: TensorFlow Data Validation
TensorFlow Extended 关键组件示例
注:您现在可以在 Jupyter 风格的笔记本中运行此示例而无需进行设置!只需点击“在 Google Colab 中运行”
<div class="devsite-table-wrapper"><table class="tfo-notebook-buttons" align="left">
<td><a target="_blank" href="https://tensorflow.google.cn/tfx/tutorials/data_validation/tfdv_basic"> <img src="https://tensorflow.google.cn/images/tf_logo_32px.png">在 TensorFlow.org 上查看</a></td>
<td><a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/zh-cn/tfx/tutorials/data_validation/tfdv_basic.ipynb"> <img src="https://tensorflow.google.cn/images/colab_logo_32px.png">在 Google Colab 中运行</a></td>
<td><a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/zh-cn/tfx/tutorials/data_validation/tfdv_basic.ipynb"> <img width="32px" src="https://tensorflow.google.cn/images/GitHub-Mark-32px.png">在 GitHub 上查看源代码</a></td>
</table></div>
此示例 Colab 笔记本将演示如何使用 TensorFlow Data Validation (TFDV) 调查和呈现数据集。包括查看描述性统计信息、推断架构、检查和修复异常,以及检查数据集中的漂移和偏差。了解数据集的特点非常重要,包括它在生产流水线中随时间变化的方式。查找数据中的异常,并比较训练、评估和应用数据集以确保它们保持一致也很重要。
我们将使用的数据来自芝加哥市发布的 Taxi Trips 数据集。
注:本网站提供的应用所使用的数据来自原始源(www.cityofchicago.org,芝加哥市官方网站),但在使用时进行了修改。芝加哥市不对本网站提供的任何数据的内容、准确性、时效性或完整性承担任何责任。本网站提供的数据可能会随时更改。您了解并同意,使用本网站提供的数据须自担风险。
您可以在 Google BigQuery 中详细了解此数据集,并在 BigQuery 界面中探索完整的数据集。
要点:作为建模者和开发者,请思考如何使用这些数据以及模型预测的潜在好处和危害。此类模型可能会加剧社会偏见和差距。某个特征是与您要解决的问题相关,还是会引入偏见?有关更多信息,请阅读 ML 公平性。
数据集中的各列为:
<table>
<tr>
<td>pickup_community_area</td>
<td>fare</td>
<td>trip_start_month</td>
</tr>
<tr>
<td>trip_start_hour</td>
<td>trip_start_day</td>
<td>trip_start_timestamp</td>
</tr>
<tr>
<td>pickup_latitude</td>
<td>pickup_longitude</td>
<td>dropoff_latitude</td>
</tr>
<tr>
<td>dropoff_longitude</td>
<td>trip_miles</td>
<td>pickup_census_tract</td>
</tr>
<tr>
<td>dropoff_census_tract</td>
<td>payment_type</td>
<td>company</td>
</tr>
<tr>
<td>trip_seconds</td>
<td>dropoff_community_area</td>
<td>tips</td>
</tr>
</table>
升级 Pip
为了避免在本地运行时升级系统中的 Pip,请检查以确保我们在 Colab 中运行。当然,可以单独升级本地系统。
End of explanation
"""
!pip install tensorflow==2.2.0
"""
Explanation: 安装 TensorFlow
注:在 Google Colab 中,由于软件包更新,第一次运行此代码单元时必须重新启动运行时 (Runtime > Restart runtime ...)。
End of explanation
"""
import sys
# Confirm that we're using Python 3
assert sys.version_info.major is 3, 'Oops, not running Python 3. Use Runtime > Change runtime type'
"""
Explanation: 检查 Python 版本
End of explanation
"""
import tensorflow as tf
print('Installing TensorFlow Data Validation')
!pip install -q tensorflow_data_validation[visualization]
"""
Explanation: 安装 TFDV
这将拉取所有依赖项,并需要花点时间。请忽略有关不兼容的依赖项版本的警告或错误。
注:在 Google Colab 中,由于软件包更新,第一次运行此代码单元时必须重新启动运行时 (Runtime > Restart runtime ...)。
End of explanation
"""
import os
import tempfile, urllib, zipfile
# Set up some globals for our file paths
BASE_DIR = tempfile.mkdtemp()
DATA_DIR = os.path.join(BASE_DIR, 'data')
OUTPUT_DIR = os.path.join(BASE_DIR, 'chicago_taxi_output')
TRAIN_DATA = os.path.join(DATA_DIR, 'train', 'data.csv')
EVAL_DATA = os.path.join(DATA_DIR, 'eval', 'data.csv')
SERVING_DATA = os.path.join(DATA_DIR, 'serving', 'data.csv')
# Download the zip file from GCP and unzip it
zip, headers = urllib.request.urlretrieve('https://storage.googleapis.com/artifacts.tfx-oss-public.appspot.com/datasets/chicago_data.zip')
zipfile.ZipFile(zip).extractall(BASE_DIR)
zipfile.ZipFile(zip).close()
print("Here's what we downloaded:")
!ls -R {os.path.join(BASE_DIR, 'data')}
"""
Explanation: 是否已重新启动运行时?
如果您使用的是 Google Colab,首次运行上面的代码单元时,必须重新启动运行时 (Runtime > Restart runtime ...)。这样做的原因是 Colab 加载软件包的方式。
加载文件
我们将从 Google Cloud Storage 下载数据集。
End of explanation
"""
import tensorflow_data_validation as tfdv
print('TFDV version: {}'.format(tfdv.version.__version__))
"""
Explanation: 检查版本
End of explanation
"""
train_stats = tfdv.generate_statistics_from_csv(data_location=TRAIN_DATA)
"""
Explanation: 计算并可视化统计信息
首先,我们将使用 tfdv.generate_statistics_from_csv 计算训练数据的统计信息。(请忽略简短警告)
TFDV 可以计算描述性统计信息,有助于快速了解数据中存在的特征及其值分布的形状。
在内部,TFDV 使用 Apache Beam 的数据并行处理框架来扩展对大型数据集的统计信息计算。对于希望与 TFDV 进行更深入集成的应用(例如,在数据生成流水线的末端附加统计信息生成),该 API 还公开了 Beam PTransform 用于统计信息生成。
End of explanation
"""
tfdv.visualize_statistics(train_stats)
"""
Explanation: 现在,我们来使用 tfdv.visualize_statistics,它使用 Facets 为我们的训练数据创建简洁的可视化效果:
请注意,数字特征和分类特征会分别可视化,并且显示的图表会展示每个特征的分布。
请注意,缺少值或值为零的特征的百分比将显示为红色(作为视觉指示,表明这些特征中的样本可能存在问题)。百分比是该特征缺少值或值为零的样本的百分比。
请注意,没有具有 pickup_census_tract 值的样本。这是一个降维的机会!
尝试点击图表上方的“expand”以更改显示
尝试将鼠标悬停在图表中的条形图上以显示桶范围和计数
尝试在对数尺度和线性尺度之间切换,并注意对数尺度如何显示有关 payment_type 分类特征的更多详细信息
尝试从“Chart to show”菜单中选择“quantiles”,然后将鼠标悬停在标记上以显示分位数百分比
End of explanation
"""
schema = tfdv.infer_schema(statistics=train_stats)
tfdv.display_schema(schema=schema)
"""
Explanation: 推断架构
现在,我们使用 tfdv.infer_schema 为我们的数据创建架构。架构定义了与 ML 相关的数据约束。示例约束包括每个特征的数据类型(是数字特征还是分类特征),或其在数据中的出现频率。对于分类特征,架构还定义了域(可接受值的列表)。由于编写架构可能是一项繁琐的任务,特别是对于具有许多特征的数据集,TFDV 提供了一种根据描述性统计信息生成架构初始版本的方式。
获得正确的架构非常重要,因为我们的其他生产流水线将依赖于 TFDV 生成架构的正确性。架构还提供数据的文档,这在不同开发者处理同一数据时非常有用。我们使用 tfdv.display_schema 来显示推断的架构,以便对其进行检查。
End of explanation
"""
# Compute stats for evaluation data
eval_stats = tfdv.generate_statistics_from_csv(data_location=EVAL_DATA)
# Compare evaluation data with training data
tfdv.visualize_statistics(lhs_statistics=eval_stats, rhs_statistics=train_stats,
lhs_name='EVAL_DATASET', rhs_name='TRAIN_DATASET')
"""
Explanation: 检查评估数据有无错误
到目前为止,我们只查看了训练数据。评估数据与训练数据保持一致非常重要,包括使用相同的架构。同样重要的是,评估数据应包含与训练数据的数值特征值范围大致相同的样本,从而使评估期间我们对损失表面的覆盖范围与训练期间大致相同。对于分类特征也是如此。否则,我们可能会因为没有评估部分损失表面,而在评估期间遇到未发现的训练问题。
请注意,现在每个特征都包括训练数据集和评估数据集的统计信息。
请注意,图表现在同时叠加了训练数据集和评估数据集,便于进行比较。
请注意,现在图表包括一个百分比视图,可以将其与对数尺度或默认的线性尺度结合使用。
请注意,训练数据集和评估数据集的 trip_miles 平均值和中位数不同。这会引起问题吗?
哇哦,训练数据集和评估数据集的 tips 也迥然不同。这会引起问题吗?
点击“Numeric Features”图表上的“expand”,选择对数尺度。查看 trip_seconds 特征,并注意最大值之间的差异。评估会漏掉部分损失表面吗?
End of explanation
"""
# Check eval data for errors by validating the eval data stats using the previously inferred schema.
anomalies = tfdv.validate_statistics(statistics=eval_stats, schema=schema)
tfdv.display_anomalies(anomalies)
"""
Explanation: 检查评估异常
我们的评估数据集是否与训练数据集中的架构匹配?这对于分类特征尤其重要,因为我们要确定可接受值的范围。
要点:如果我们尝试使用不在训练数据集中的具有分类特征值的数据进行评估,会发生什么呢?如果使用超出训练数据集范围的数字特征又会如何?
End of explanation
"""
# Relax the minimum fraction of values that must come from the domain for feature company.
company = tfdv.get_feature(schema, 'company')
company.distribution_constraints.min_domain_mass = 0.9
# Add new value to the domain of feature payment_type.
payment_type_domain = tfdv.get_domain(schema, 'payment_type')
payment_type_domain.value.append('Prcard')
# Validate eval stats after updating the schema
updated_anomalies = tfdv.validate_statistics(eval_stats, schema)
tfdv.display_anomalies(updated_anomalies)
"""
Explanation: 修复架构中的评估异常
糟糕!看起来我们的评估数据中有一些 company 的新值,而训练数据中却没有。payment_type 也有一个新值。这些应被视为异常,但是决定如何处理它们取决于我们对数据的领域知识。如果异常确实表示数据错误,则应修复基础数据。否则,我们只需更新架构以在评估数据集中包含这些值。
要点:如果不修复这些问题,我们的评估结果会受到怎样的影响?
除非更改评估数据集,否则我们无法修复所有问题,但可以修复架构中我们愿意接受的问题。这包括放宽我们对特定特征异常的判断标准,以及更新我们的架构以包括分类特征的缺失值。TFDV 使我们能够发现需要修复的内容。
现在我们来进行修复,然后再检查一次。
End of explanation
"""
serving_stats = tfdv.generate_statistics_from_csv(SERVING_DATA)
serving_anomalies = tfdv.validate_statistics(serving_stats, schema)
tfdv.display_anomalies(serving_anomalies)
"""
Explanation: 看!经过验证,训练数据和评估数据现在是一致的!感谢 TFDV ;)
架构环境
在此示例中,我们还拆分出了一个“应用”数据集,因此我们也应该对它进行检查。默认情况下,流水线中的所有数据集都应使用相同的架构,但通常会有例外。例如,在监督学习中,我们需要在数据集中包括标签,但当我们应用模型进行推断时,则不包括标签。在某些情况下,有必要引入轻微的架构变化。
可以使用环境来表示此类需求。特别是,可以使用 default_environment、in_environment 和 not_in_environment 将架构中的特征与一组环境相关联。
例如,tips 特征被作为训练标签包含在此数据集中,但却没有包含在应用数据中。如果未指定环境,它将显示为异常。
End of explanation
"""
options = tfdv.StatsOptions(schema=schema, infer_type_from_schema=True)
serving_stats = tfdv.generate_statistics_from_csv(SERVING_DATA, stats_options=options)
serving_anomalies = tfdv.validate_statistics(serving_stats, schema)
tfdv.display_anomalies(serving_anomalies)
"""
Explanation: 我们将在下文处理 tips 特征。在 trip_seconds 中还有一个 INT 值,架构在此处需要一个 FLOAT。通过让我们意识到这种差异,TFDV 有助于发现用于训练和应用的数据生成方式的不一致。在模型性能受到影响(有时是灾难性的)之前,很容易忽视此类问题。问题可能重要也可能不重要,但无论如何都应做进一步调查。
在本例中,我们可以安全地将 INT 值转换为 FLOAT,以便让 TFDV 使用我们的架构来推断类型。现在开始吧。
End of explanation
"""
# All features are by default in both TRAINING and SERVING environments.
schema.default_environment.append('TRAINING')
schema.default_environment.append('SERVING')
# Specify that 'tips' feature is not in SERVING environment.
tfdv.get_feature(schema, 'tips').not_in_environment.append('SERVING')
serving_anomalies_with_env = tfdv.validate_statistics(
serving_stats, schema, environment='SERVING')
tfdv.display_anomalies(serving_anomalies_with_env)
"""
Explanation: 现在,只有 tips 特征(这是标签)显示为异常(“Column dropped”)。当然,我们不希望在应用数据中包含标签,因此我们告诉 TFDV 忽略标签。
End of explanation
"""
# Add skew comparator for 'payment_type' feature.
payment_type = tfdv.get_feature(schema, 'payment_type')
payment_type.skew_comparator.infinity_norm.threshold = 0.01
# Add drift comparator for 'company' feature.
company=tfdv.get_feature(schema, 'company')
company.drift_comparator.infinity_norm.threshold = 0.001
skew_anomalies = tfdv.validate_statistics(train_stats, schema,
previous_statistics=eval_stats,
serving_statistics=serving_stats)
tfdv.display_anomalies(skew_anomalies)
"""
Explanation: 检查漂移和偏差
除了检查数据集是否符合在架构中设置的期望之外,TFDV 还提供了检测漂移和偏差的功能。TFDV 执行此检查的方式为:根据架构中指定的漂移/偏差比较器来比较不同数据集的统计信息。
漂移
支持对分类特征以及在数据的连续跨度之间(即跨度 N 和跨度 N+1 之间)进行漂移检测(例如训练数据的不同天数之间)。我们用切比雪夫距离来表示漂移,您可以设置阈值距离,以便在漂移高于可接受范围时收到警告。设置正确的距离通常是一个迭代过程,需要领域知识和实验。
偏差
TFDV 可以检测数据中三种不同类型的偏差:架构偏差、特征偏差和分布偏差。
架构偏差
当训练数据和应用数据不符合同一个架构时,会发生架构偏差。训练数据和应用数据都应遵循同一个架构。两者之间的任何预期偏差(例如,仅训练数据中存在但应用数据中不存在的标签特征)都应通过架构中的环境字段指定。
特征偏差
当模型训练所采用的特征值与它在应用时看到的特征值不同时,会发生特征偏差。例如,这可能在以下情况下发生:
提供某些特征值的数据源在训练时和应用时之间被修改。
用于在训练和应用之间生成特征的逻辑不同。例如,如果仅在两个代码路径之一中应用某些转换。
分布偏差
当训练数据集的分布与应用数据集的分布明显不同时,会发生分布偏差。分布偏差的主要原因之一是使用不同的代码或不同的数据源来生成训练数据集。另一个原因是错误的采样机制选择了应用数据中无代表性的子样本来进行训练。
End of explanation
"""
from tensorflow.python.lib.io import file_io
from google.protobuf import text_format
file_io.recursive_create_dir(OUTPUT_DIR)
schema_file = os.path.join(OUTPUT_DIR, 'schema.pbtxt')
tfdv.write_schema_text(schema, schema_file)
!cat {schema_file}
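# The frozen schema can be loaded back in downstream pipeline steps; assuming
# this TFDV version exposes the matching reader:
# reloaded_schema = tfdv.load_schema_text(schema_file)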
"""
Explanation: 在此示例中,我们确实看到了一些漂移,但它远低于我们设置的阈值。
冻结架构
现在,已经对架构进行了检查和整理,我们将其存储在文件中以反映其“冻结”状态。
End of explanation
"""
|
bbfamily/abu
|
abupy_lecture/30-趋势跟踪与均值回复的长短线搭配.ipynb
|
gpl-3.0
|
# 基础库导入
from __future__ import print_function
from __future__ import division
import warnings
warnings.filterwarnings('ignore')
warnings.simplefilter('ignore')
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
import os
import sys
# 使用insert 0即只使用github,避免交叉使用了pip安装的abupy,导致的版本不一致问题
sys.path.insert(0, os.path.abspath('../'))
import abupy
# 使用沙盒数据,目的是和书中一样的数据环境
abupy.env.enable_example_env_ipython()
us_choice_symbols = ['usTSLA', 'usNOAH', 'usSFUN', 'usBIDU', 'usAAPL', 'usGOOG', 'usWUBA', 'usVIPS']
cn_choice_symbols = ['002230', '300104', '300059', '601766', '600085', '600036', '600809', '000002', '002594', '002739']
hk_choice_symbols = ['hk03333', 'hk00700', 'hk02333', 'hk01359', 'hk00656', 'hk03888', 'hk02318']
from abupy import AbuDoubleMaBuy, AbuDoubleMaSell, ABuKLUtil, ABuSymbolPd, AbuUpDownTrend, AbuDownUpTrend, AbuUpDownGolden
from abupy import AbuFactorCloseAtrNStop, AbuFactorAtrNStop, AbuFactorPreAtrNStop, tl
from abupy import abu, ABuProgress, AbuMetricsBase, EMarketTargetType, ABuMarketDrawing
"""
Explanation: ABU量化系统使用文档
<center>
<img src="./image/abu_logo.png" alt="" style="vertical-align:middle;padding:10px 20px;"><font size="6" color="black"><b>第30节 趋势跟踪与均值回复的长短线搭配</b></font>
</center>
作者: 阿布
阿布量化版权所有 未经允许 禁止转载
abu量化系统github地址 (欢迎+star)
本节ipython notebook
上一节讲解了多因子策略并行执行配合的示例,本节讲解趋势跟踪与均值回复的长短线搭配的示例。
首先导入本节需要使用的abupy中的模块:
End of explanation
"""
# 初始资金量
cash = 3000000
def run_loo_back(choice_symbols, ps=None, n_folds=3, start=None, end=None, only_info=False):
"""封装一个回测函数,返回回测结果,以及回测度量对象"""
if choice_symbols[0].startswith('us'):
abupy.env.g_market_target = EMarketTargetType.E_MARKET_TARGET_US
else:
abupy.env.g_market_target = EMarketTargetType.E_MARKET_TARGET_CN
abu_result_tuple, _ = abu.run_loop_back(cash,
buy_factors,
sell_factors,
ps,
start=start,
end=end,
n_folds=n_folds,
choice_symbols=choice_symbols)
ABuProgress.clear_output()
metrics = AbuMetricsBase.show_general(*abu_result_tuple, returns_cmp=only_info,
only_info=only_info,
only_show_returns=True)
return abu_result_tuple, metrics
# 买入策略使用AbuDownUpTrend
buy_factors = [{'class': AbuDownUpTrend}]
# 卖出策略:利润保护止盈策略+风险下跌止损+较大的止盈位
sell_factors = [{'stop_loss_n': 1.0, 'stop_win_n': 3.0,
'class': AbuFactorAtrNStop},
{'class': AbuFactorPreAtrNStop, 'pre_atr_n': 1.5},
{'class': AbuFactorCloseAtrNStop, 'close_atr_n': 1.5}]
# Run the backtest
abu_result_tuple, metrics = run_loo_back(us_choice_symbols, only_info=True)
"""
Explanation: The introductory chapters of "The Road to Quantitative Trading" covered the concepts of trend following and mean reversion together with example strategies. These are the only two strategy archetypes in a quantitative trading system: no matter how complex a strategy is, it ultimately falls within these two basic concepts.
Many buy strategies cannot by themselves be classified as trend following or mean reversion. The turtle breakout strategy used as the running example in earlier tutorials is the simplest kind of "straight-line" strategy and carries an obvious trend-following character. Being "straight-line" is not bad in itself, but in many cases a strategy needs non-equilibrium techniques to build a further probabilistic edge, and this comes at a cost: the frequency of the signals drops noticeably.
This section explains how to use long/short-horizon analysis within a strategy to build such a non-equilibrium probabilistic edge.
1. Long-horizon downtrend with short-horizon uptrend
What is long/short-horizon analysis?
It means splitting the whole timing window into two parts, a longer window for long-horizon timing and a shorter window for short-horizon timing. An example strategy:
Look for stocks whose long horizon is falling, e.g. the overall trend over roughly a quarter (4 months) is down.
Among them, pick stocks whose short horizon is rising, e.g. the overall trend over one month is up.
Finally, use the turtle N-day breakout as the final buy signal.
The strategy described above is illustrated in the figure:
A notable property of this kind of long/short-horizon strategy is that the buy side alone cannot be classified as trend following or mean reversion; what determines which one it is lies mainly in the sell strategy:
With a large take-profit level the strategy is trend following: the short-horizon rise is treated as an established trend, and the expectation after buying is that this uptrend persists for a long time.
With a small take-profit level the strategy is mean reversion: the short-horizon rise is treated as a reversion of the long-horizon decline, and the expectation after buying is that the rise continues only for a short time.
First we use a large take-profit level, so the strategy is trend following. The built-in abupy strategy AbuDownUpTrend implements the strategy above; the key code is as follows:
def fit_day(self, today):
    """
    Buy-timing factor: look for a short-horizon breakout reversal within a long-horizon downtrend
    1. Use past_today_kl to get the long-horizon financial time series and use AbuTLine's
    is_down_trend to decide whether the long horizon is in a downtrend.
    2. Today's close is the highest price of the last xd days, and the short-horizon xd-day price movement is an uptrend.
    3. If 1 and 2 are both satisfied, emit a buy signal.
    :param today: financial time series data for the current trading day
    """
    long_kl = self.past_today_kl(today, self.past_factor * self.xd)
    tl_long = AbuTLine(long_kl.close, 'long')
    # Check whether the long horizon is in a downtrend
    if tl_long.is_down_trend(down_deg_threshold=self.down_deg_threshold, show=False):
        if today.close == self.xd_kl.close.max() and AbuTLine(
                self.xd_kl.close, 'short').is_up_trend(up_deg_threshold=-self.down_deg_threshold, show=False):
            # Today's close is the highest of the last xd days and the short-horizon xd-day movement is an uptrend
            return self.buy_tomorrow()
For more implementation details please read the source code. Below we run a backtest example:
End of explanation
"""
ABuMarketDrawing.plot_candle_from_order(abu_result_tuple.orders_pd)
"""
Explanation: Next we use the plot_candle_from_order interface to visualize the orders and the buy/sell points:
End of explanation
"""
# Keep the buy factor unchanged; sell factors: profit-protecting take-profit + downside-risk stop-loss + a small take-profit level
sell_factors = [{'stop_loss_n': 0.5, 'stop_win_n': 0.5,
'class': AbuFactorAtrNStop},
{'class': AbuFactorPreAtrNStop, 'pre_atr_n': 1.5},
{'class': AbuFactorCloseAtrNStop, 'close_atr_n': 1.5}]
abu_result_tuple, metrics = run_loo_back(us_choice_symbols, only_info=True)
"""
Explanation: Next we use a small take-profit level, so the strategy becomes mean reversion: the short-horizon rise is treated as a reversion of the long-horizon decline, and the expectation after buying is that the rise continues only for a short time:
End of explanation
"""
abu_result_tuple, metrics = run_loo_back(cn_choice_symbols, only_info=True)
abu_result_tuple, metrics = run_loo_back(hk_choice_symbols, only_info=True)
"""
Explanation: Next we backtest the same strategy on the A-share and Hong Kong markets:
End of explanation
"""
buy_factors = [{'class': AbuUpDownTrend}]
# Backtest on US sandbox data
abu_result_tuple_us, metrics = run_loo_back(us_choice_symbols, only_info=True)
# Backtest on A-share sandbox data
abu_result_tuple_cn, metrics = run_loo_back(cn_choice_symbols, only_info=True)
# Backtest on Hong Kong sandbox data
abu_result_tuple_hk, metrics = run_loo_back(hk_choice_symbols, only_info=True)
"""
Explanation: From the backtest results above you can see that buy signals are emitted far less frequently than with the "straight-line" turtle breakout strategy.
That is because the strategy uses non-equilibrium techniques to build a further probabilistic edge, at the cost of a very demanding buy condition. The great strength of quantitative trading, however, is the computer's processing power and its advantage in breadth: it can time many different markets in a short time and can run many more such demanding strategies in parallel to meet capital-scale requirements.
2. Long-horizon uptrend with short-horizon downtrend
Different combinations of horizon trends can be used to build different buy strategies. The opposite of the strategy above is a long-horizon uptrend with a short-horizon downtrend:
Look for stocks whose long horizon is rising, e.g. the overall trend over roughly a quarter (4 months) is up.
Among them, pick stocks whose short horizon is falling, e.g. the overall trend over one month is down.
Finally, use a short-horizon downward breakout as the final buy signal.
The built-in abupy strategy AbuUpDownTrend implements the strategy above; the key code is as follows:
def fit_day(self, today):
    """
    Select targets whose long horizon is an uptrend, and buy those whose recent short-horizon movement is a downtrend, expecting the earlier long-horizon trend to continue
    1. Use past_today_kl to get the long-horizon financial time series and use AbuTLine's
    is_up_trend to decide whether the long horizon is in an uptrend.
    2. Today's close is the lowest price of the last xd days, and the short-horizon xd-day price movement is a downtrend.
    3. If 1 and 2 are both satisfied, emit a buy signal.
    :param today: financial time series data for the current trading day
    """
    long_kl = self.past_today_kl(today, self.past_factor * self.xd)
    tl_long = AbuTLine(long_kl.close, 'long')
    # Check whether the long horizon is in an uptrend
    if tl_long.is_up_trend(up_deg_threshold=self.up_deg_threshold, show=False):
        if today.close == self.xd_kl.close.min() and AbuTLine(
                self.xd_kl.close, 'short').is_down_trend(down_deg_threshold=-self.up_deg_threshold, show=False):
            # Today's close is the lowest of the last xd days and the short-horizon xd-day movement is a downtrend
            return self.buy_tomorrow()
The implementation of AbuUpDownTrend mirrors AbuDownUpTrend in reverse, but its final buy signal actually has a problem. Let's first run a backtest with AbuUpDownTrend:
End of explanation
"""
ABuMarketDrawing.plot_candle_from_order(abu_result_tuple_hk.orders_pd.tail(3))
"""
Explanation: On the surface the backtest results for the three markets above look decent. Since the sandbox contains only a small number of symbols, abupy provides dedicated interface modules for validating the feasibility of a strategy; examples will be covered in detail in later sections.
The tutorial "Section 15: Quantitative Trading and Search Engines" stressed that manually analyzing trade results is the most common and effective technique: directly visualize the buy/sell points and the price movement of the trades to find problems with the strategy and ways to improve it. Below we visualize a few orders from the Hong Kong market backtest:
End of explanation
"""
tl.golden.calc_golden(ABuSymbolPd.make_kl_df('usTSLA'))
"""
Explanation: You can switch to abu_result_tuple_cn or abu_result_tuple_us to inspect the specific trades in the A-share and US markets. Analyzing the buy points shows that most of them fall into the type illustrated in the figure below: essentially breakdowns of the rising trend line.
The root cause is that the final decision signal is "today's close is the lowest price of the last xd days", i.e. a downward breakout is used as the buy signal. The signal is not wrong in itself, but it does not match the overall strategy system and the expected subsequent movement. A more precise expectation for the strategy could be: within a long-horizon uptrend, look for a short-horizon decline that shows signs of rebounding.
Concretely this can be implemented in several ways, for example by using a moving-average upturn or a golden-section rebound as the final signal trigger, as in the sketch below.
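A minimal sketch of the moving-average-upturn alternative (a hypothetical helper, not the abupy implementation; it only checks that a short rolling mean of the closes rose on the last bar):
```python
import pandas as pd

def ma_turned_up(close, window=5):
    """Return True if the rolling mean of the closes rose on the last bar."""
    ma = pd.Series(close).rolling(window).mean()
    return ma.iloc[-1] > ma.iloc[-2]
```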
The built-in abupy strategy AbuUpDownGolden uses a golden-section rebound as the final signal trigger. It is described as:
Look for stocks whose long horizon is rising, e.g. the overall trend over roughly a quarter (4 months) is up.
Among them, pick stocks whose short horizon is falling, e.g. the overall trend over one month is down.
Yesterday's close was below the 0.382 level and today's close is above the 0.382 level: this is the strategy's final buy signal.
The key code is as follows; for more, please read the AbuUpDownGolden class source code:
def fit_day(self, today):
    """
    Select targets whose long horizon is an uptrend, and buy those whose recent short-horizon movement is a downtrend, expecting the earlier long-horizon trend to continue
    1. Use past_today_kl to get the long-horizon financial time series and use AbuTLine's
    is_up_trend to decide whether the long horizon is in an uptrend.
    2. Yesterday's close was below 0.382 and today's close is above 0.382, and the short-horizon xd-day price movement is a downtrend.
    3. If 1 and 2 are both satisfied, emit a buy signal.
    :param today: financial time series data for the current trading day
    """
    long_kl = self.past_today_kl(today, self.past_factor * self.xd)
    tl_long = AbuTLine(long_kl.close, 'long')
    # Check whether the long horizon is in an uptrend
    if tl_long.is_up_trend(up_deg_threshold=self.up_deg_threshold, show=False):
        # calc_golden computes the golden-section and other key levels
        golden = calc_golden(self.xd_kl, show=False)
        if today.pre_close < golden.below382 < today.close and AbuTLine(
                self.xd_kl.close, 'short').is_down_trend(down_deg_threshold=-self.up_deg_threshold,
                                                         show=False):
            # Yesterday's close was below 0.382, today's close is above 0.382, and the short-horizon xd-day movement is a downtrend
            return self.buy_tomorrow()
The strategy code above relies mainly on calc_golden to compute the key levels of the price trend, including the golden-section band values. The example below computes these levels for a stretch of TSLA's price movement:
End of explanation
"""
buy_factors = [{'class': AbuUpDownGolden}]
abu_result_tuple, metrics = run_loo_back(us_choice_symbols, only_info=True)
"""
Explanation: Note: many people argue that golden-section lines should not be used because they lack theoretical support. But, as repeatedly emphasized, this is like solving a system of equations: if every parameter is an unknown, no solution can be found. Some variables must be fixed as constants, and those constants then pin down more of the variables until you can solve for what you actually care about. Golden-section values serve as good constant thresholds for creating a non-equilibrium environment.
Next we backtest with AbuUpDownGolden as the buy strategy:
End of explanation
"""
ABuMarketDrawing.plot_candle_from_order(abu_result_tuple.orders_pd.head(2))
"""
Explanation: Below we visualize the buy/sell points from the resulting orders. You can see that the strategy achieves its goal of buying during a long-horizon uptrend with a short-horizon decline that shows signs of rebounding:
End of explanation
"""
abu_result_tuple, metrics = run_loo_back(cn_choice_symbols, only_info=True)
"""
Explanation: Next we switch to the A-share market for the backtest; you can see that the strategy's demanding conditions reduce the number of trades even further:
End of explanation
"""
|
kpolimis/paa_2017_social_media
|
Estimate_Facebook_Audience/notebooks/facebook_demographic_research.ipynb
|
mit
|
# uncomment the line below to view the functions in utils.py
#% cat utils.py
import os
import re
import sys
import csv
import json
import glob
import numpy as np
import pandas as pd
from datetime import datetime
import matplotlib.pyplot as plt
from collections import OrderedDict
from pysocialwatcher import watcherAPI
from utils import *
% matplotlib inline
"""
Explanation: Demographic Research with the Facebook Marketing API
Outline
Using Python in the terminal
Install pySocialWatcher
Import modules
Register for Facebook Marketing API access
Explore pySocialWatcher module
Use pySocialWatcher to analyze a specific behavior (new movers)
Plot total new movers in subset dataframe
Using Python in the terminal
open the terminal on a Mac by pressing command and the spacebar to open Spotlight
type terminal in the Spotlight search to open the terminal
navigate to the directory that you downloaded or cloned the workshop to with cd
e.g. cd tutorials/paa_2017_social_media will change to the paa_2017_social_media folder within the tutorials directory
open a jupyter notebook web browser with the command jupyter notebook
Install pySocialWatcher
pySocialWatcher is currently only compatible with Python2.7
if you downloaded Anaconda for Python3.x you can create a conda environment with Python2.7
first type conda create -n py27 python=2.7 in the terminal to create a Python2.7 environment named py27
then type source activate py27 to activate the py27 environment
setup notebook in py27 environment
conda install notebook ipykernel matplotlib
ipython kernel install --user
install pySocialWatcher with the following commands in your terminal
first, navigate into the pySocialWatcher in the Estimate_Facebook_Audience folder
then, type the following two commands in the terminal
~~~
pip install -r requirements.txt
python setup.py install
~~~
pySocialWatcher examples and documentation
quick tutorial on pySocialWatcher module
Import modules
import is a Python statement
gives the current namespace access to the code in another module by the process of importing it
equivalent to library(package) in R
import documentation
import features
import specific functions from a module
example: from collections import OrderedDict
create an alias for a module name by using "as"
example: import pandas as pd imports the pandas module but assigns this module the name "pd" in the namespace
alias allows quicker access to functions:
example: instead of using pandas.read_csv() to read a .csv file you can now type pd.read_csv()
use import to import custom code/functions from a Python .py file in the current directory
example: from utils import *
utils.py is a script with two functions for this module
End of explanation
"""
os.getcwd()
"""
Explanation: module documentation
standard library modules:
os
re
csv
sys
json
glob
datetime
collections
open source modules:
numpy
pandas
matplotlib
pySocialWatcher
Python classes and functions
modules allow for the creation of object classes and specific functions
a class is an object that has certain features
class documentation
using a function within a Python module
functions within a module are accessed by typing the name for the module in the namespace and then a period (.)
example: view current working directory (function) within the os module
os.getcwd()
similar to getwd() in R
End of explanation
"""
watcher = watcherAPI()
watcher.load_credentials_file("../credentials/facebook_credentials.csv")
watcher.check_tokens_account_valid()
"""
Explanation: Register for Facebook Marketing API access:
Go to https://developers.facebook.com/apps/
Click "Get Started" in the top right
Create a new app by clicking on "+ Add a New App" and follow the process minimally
In the app's dashboard page, click "+ Add product" in the toolbar on the left
Select "Marketing" Product Setup
Click "Get Access Token"
Click "Extend Access Token" button (at the bottom) (the button may not show up right away, just refresh the page)
The access token that shows up under the form is the one you want! It will look something like this:
EAAHMonBrQ5kBALhYvCnLRsNRyO9YJH8I6ZBKwby5cLrS4V2GZAC2GT2MbDHXjpwnfvkPB0ZBv7knZAP1wRzKQVZAxd7fAsQTDAVrE0H3hUffFZBuRlncX0cAMVj8sIKLN68U6BxDAogvIdluX8tAMK
Finally, you want to get your Ads Manager Account ID.
Go to https://www.facebook.com/business/ and click "Create Advert"
In URL, copy the part starting with act= such as:
https://www.facebook.com/ads/manager/creation/creation/?act=952863473440&pid=p1
Create facebook_credentials.csv
change directory to the credentials folder
create a .csv file named "facebook_credentials.csv"
paste your Facebook Access Token in the first row, first column
paste your Ads Manager Account ID in the first row, second column
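For reference, the finished facebook_credentials.csv is a single row with two comma-separated fields, access token first and account ID second. A sketch with placeholder values (do not use these):
~~~
EAAHMonBrQ5kBALhYvCnLRsN...,952863473440
~~~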
Load Facebook credentials
watcherAPI() is a class within the pySocialWathcer module that makes requests with the Facebook Marketing API
watcherAPI() has multiple attributes including credentials
load_credentials_file is a function within the watcher class that allows users to upload a .csv of Facebook Marketing API credentials
check_tokens_account_valid is a function that sends a generic example request to the Facebook Marketing API to assess the validity of the credentials loaded from the load_credentials_file function
End of explanation
"""
watcherAPI.print_geo_locations_given_query_and_location_type("new", ["city"])
watcher.print_bad_joke()
"""
Explanation: Explore pySocialWatcher module
Can gather demographic and event parameters available on the Marketing API
Demographic and event targeting parameters
ALL search parameters, such as a city name, must map to a (numeric) key value known to the Marketing API
print_geo_locations_given_query_and_location_type
function to search for Facebook key of cities by name
e.g. search for cities that have "new" in their name
End of explanation
"""
watcher.print_behaviors_list()
"""
Explanation: view available "behaviors" parameters
can segment demographic groups by multiple parameters including "behaviors"
End of explanation
"""
US_states = pd.read_table("../data/US_states.csv", sep = ",", header=0)
US_states.head()
"""
Explanation: read in a .csv file with the pandas module's read_table function
converts a .csv file into a dataframe
this .csv contains all 50 US states and their corresponding Facebook Marketing API location key (FB_key)
End of explanation
"""
US_states.iloc[12]
"""
Explanation: View an entire row in a dataframe
End of explanation
"""
illinois_dic={"name": "IL_example", "geo_locations": [{"name": "regions",
"values": [{"key":str(US_states["FB_key"][12])}],
"location_types": ["home"]}],
"genders": [0,1,2],
"ages_ranges": [{"min":13, "max":65}]}
illinois_dic_ordered = OrderedDict(sorted(illinois_dic.items(),key=lambda t: len(t[0])))
illinois_dic_json=json.dumps(illinois_dic_ordered, indent = 4)
print illinois_dic_json
file_name = "IL_example.json"
with open('../data/%s' % file_name, 'w') as outfile:
outfile.write(json.dumps(illinois_dic_ordered, indent = 4))
"""
Explanation: Making API request with pySocialWatcher
watcher class has a run_data_collection function
run_data_collection takes a .json file as an input and creates an API request
output from run_data_collection is a pandas dataframe and .csv files
creating .json input files
the following code creates a dictionary and saves a .json file ("IL_example.json") in the data folder
we can convert Python dictionaries into .json files for the run_data_collection function
End of explanation
"""
watcher.run_data_collection("../data/%s" % file_name)
"""
Explanation: make a request to the Marketing API with the example dictionary created above
End of explanation
"""
illinois_dic_new_movers={"name": "IL_example", "geo_locations": [{"name": "regions",
"values": [{"key":str(US_states["FB_key"][12])}],
"location_types": ["home"]}],
"genders": [0,1,2],
"ages_ranges": [{"min":13, "max":65}],
"behavior":[{"or": [6029662272682],"name": "new_mover"}]}
illinois_dic_new_movers_ordered = OrderedDict(sorted(illinois_dic_new_movers.items(),key=lambda t: len(t[0])))
illinois_dic_new_movers_json=json.dumps(illinois_dic_new_movers_ordered, indent = 4)
print illinois_dic_new_movers_json
file_name = "IL_example_new_movers.json"
with open('../data/%s' % file_name, 'w') as outfile:
outfile.write(json.dumps(illinois_dic_new_movers_ordered, indent = 4))
"""
Explanation: Use pySocialWatcher to analyze a specific behavior
get data on new movers
6029662272682 | This Life Event segment contains consumers who are likely to be new movers in the last 6 months.| New mover
the following code creates a new dictionary that incorporates (new mover) behavior and saves a .json file ("IL_example_new_movers.json") in the data folder
End of explanation
"""
watcher.run_data_collection("../data/%s" % file_name)
"""
Explanation: make a request to the Marketing API with the new movers dictionary created above
End of explanation
"""
state_subset = US_states.loc[US_states['Abbreviation'].isin(["CA","NY", "TX", "IL"])]
# reindex state_subset
state_subset.index = [0,1,2,3]
state_subset
"""
Explanation: Comparing states
use pandas to subset US_states dataframe
End of explanation
"""
for state in range(0, len(state_subset["State Name"])):
if not os.path.exists('../data/new_movers_by_state/%s' % state_subset["State Name"][state]):
os.makedirs('../data/new_movers_by_state/%s' % state_subset["State Name"][state])
"""
Explanation: loop through subset dataframe to create folders
End of explanation
"""
UN_age_min = [13,15, 20, 25, 30, 35, 40, 45, 50, 55, 60]
UN_age_max = [65,19, 24, 29, 34, 39, 44, 49, 54, 59, 65]
UN_age_table = {'age_min': UN_age_min, 'age_max': UN_age_max}
UN_age_table_df = pd.DataFrame(data=UN_age_table, index=None)
UN_age_table_df = UN_age_table_df[["age_min", "age_max"]]
UN_age_table_df.to_csv("../data/UN_age_table.csv", index=None)
"""
Explanation: create (UN) age groups, another parameter that can be iterated through during API requests
End of explanation
"""
ctr = 0
for state in range(0, len(state_subset["State Name"])):
state_dic={"name": str(state_subset["State Name"][state])+"_new_movers",
"geo_locations": [{"name": "regions", "values": [{"key":str(US_states["FB_key"][state])}],
"location_types": ["home"]}],
"genders": [0,1,2],
"ages_ranges": [{"min":13, "max":65}],
"behavior":[{"or": [6029662272682],"name": "new_movers"}]}
state_dic["geo_locations"][0]["values"][0]['key'] = str(state_subset["FB_key"][state])
state_dic_ordered = OrderedDict(sorted(state_dic.items(),key=lambda t: len(t[0])))
state_dic_json=json.dumps(state_dic_ordered, indent = 4)
gender_dict = {'0':'female_male_total_pop'}
file_name = str(state_subset["State Name"][state])+"_new_movers"+".json"
state_folder = state_subset["State Name"][state]
if not os.path.exists('../data/new_movers_by_state/%s/api_requests_json' % state_folder):
os.makedirs('../data/new_movers_by_state/%s/api_requests_json' % state_folder)
if not os.path.exists('../data/new_movers_by_state/%s/api_requests_csv' % state_folder):
os.makedirs('../data/new_movers_by_state/%s/api_requests_csv' % state_folder)
with open('../data/new_movers_by_state/%s/api_requests_json/%s' % (state_folder,file_name), 'w') as outfile:
outfile.write(json.dumps(state_dic_ordered, indent = 4))
state_api_request = watcher.run_data_collection('../data/new_movers_by_state/%s/api_requests_json/%s' % (state_folder,file_name))
state_api_request
state_api_request.insert(0, "state",state_subset["State Name"][state])
csv_filename = file_name.split(".")[0]+".csv"
state_api_request.to_csv("../data/new_movers_by_state/%s/api_requests_csv/%s" % (state_folder,csv_filename),
index=False)
ctr = ctr +1
total =len(state_subset["State Name"])
print("file %.f of %.f " % (ctr,total))
print(file_name)
%rm collect_finished_* dataframe_collecting_* dataframe_skeleton_*
"""
Explanation: loop through each state in the subset dataframe and create .json files for estimating new movers
End of explanation
"""
merge_subdirectories('../data/new_movers_by_state/',
'raw_new_movers_data.csv')
"""
Explanation: use the merge_subdirectories function from utils to merge all of the per-state folders of new-movers data into one .csv file
End of explanation
"""
raw_new_movers_data = pd.read_csv("../data/raw_new_movers_data.csv")
raw_new_movers_data.head()
print("the data has %s rows and %s columns" % (raw_new_movers_data.shape[0], raw_new_movers_data.shape[1]))
print(raw_new_movers_data.dtypes)
"""
Explanation: read in merged .csv file of new movers by state
End of explanation
"""
merged_new_movers_data = process_facebook_data(raw_new_movers_data)
merged_new_movers_data.head()
"""
Explanation: use process_facebook_data function from utils to munge dataframe of merged movers
End of explanation
"""
merged_new_movers_data_genders = merged_new_movers_data.groupby("genders")
merged_new_movers_data_genders["audience"].sum()
"""
Explanation: use the pandas groupby function to group data by a column
End of explanation
"""
new_movers_total_pop = merged_new_movers_data.loc[merged_new_movers_data["genders"]=="total population"][["state", "audience"]]
new_movers_total_pop
new_movers_audience = new_movers_total_pop["audience"]
new_movers_audience
labels = [i for i in state_subset["Abbreviation"]]
labels
new_movers_audience.mean()
plt.subplots(1, figsize=(12, 8))
plt.subplots_adjust(bottom = 0.1)
plt.scatter(np.arange(len(new_movers_audience)), new_movers_audience)
for label, x, y in zip(labels,np.arange(len(new_movers_audience)), new_movers_audience):
plt.annotate(
label,
xy=(x, y), xytext=(-20, 20),
textcoords='offset points', ha='right', va='bottom',
bbox=dict(boxstyle='round,pad=0.5', fc='yellow', alpha=0.5),
arrowprops=dict(arrowstyle = '->', connectionstyle='arc3,rad=0'))
plt.xticks([])
plt.axhline(new_movers_audience.mean(), linestyle='--', color='red', alpha=0.3)
plt.title("Total Population of New Movers by State")
plt.xlabel("State")
plt.ylabel("Total New Movers")
plt.show()
import matplotlib
print("System and module version information: \n")
print('Python version:', sys.version_info)
print('numpy version:', np.__version__)
print('pandas version:', pd.__version__)
print('matplotlib version:', matplotlib.__version__)
"""
Explanation: Plot total new movers in subset dataframe
create a new dataframe that is the total population audience estimate of new movers for each state, together with the state name
create a dataframe that is just the state population estimate
create labels
plot the population estimates and labels
End of explanation
"""
|
astarostin/MachineLearningSpecializationCoursera
|
course4/week2 - Двухвыборочные непараметрические критерии (связанные выборки) - demo.ipynb
|
apache-2.0
|
import numpy as np
import pandas as pd
import itertools
from scipy import stats
from statsmodels.stats.descriptivestats import sign_test
from statsmodels.stats.weightstats import zconfint
%pylab inline
"""
Explanation: Nonparametric tests
Test | One-sample | Two-sample | Two-sample (paired samples)
------------- | ------------- | ------------- | -------------
Sign | $\times$ | | $\times$
Rank | $\times$ | $\times$ | $\times$
Permutation | $\times$ | $\times$ | $\times$
Anorexia therapy
The study evaluates the effectiveness of behavioral therapy for treating anorexia. For 50 patients the weight is known before the therapy started and after it ended. Was the therapy effective?
End of explanation
"""
weight_data = pd.read_csv('weight.txt', sep = '\t', header = 0)
weight_data.head()
pylab.figure(figsize=(12,4))
pylab.subplot(1,2,1)
pylab.grid()
pylab.hist(weight_data.Before, color = 'r')
pylab.xlabel('Before')
pylab.subplot(1,2,2)
pylab.grid()
pylab.hist(weight_data.After, color = 'b')
pylab.xlabel('After')
pylab.show()
weight_data.describe()
"""
Explanation: Loading the data
End of explanation
"""
print '95%% confidence interval for mean weight before therapy: [%f, %f]' % zconfint(weight_data.Before)
print '95%% confidence interval for mean weight after therapy: [%f, %f]' % zconfint(weight_data.After)
pylab.hist(weight_data.After - weight_data.Before)
pylab.show()
"""
Explanation: Two-sample tests for paired samples
$H_0\colon$ the medians of the weight before and after therapy coincide
$H_1\colon$ the medians of the weight before and after therapy differ
End of explanation
"""
print "M: %d, p-value: %f" % sign_test(weight_data.After - weight_data.Before)
"""
Explanation: Sign test
$H_0\colon P\left(X_1>X_2\right)=\frac1{2},$
$H_1\colon P\left(X_1>X_2\right)\neq\frac1{2}$
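The same p-value can be reproduced by hand, since the sign test is a binomial test on the number of positive differences among the non-zero differences (a quick sketch using scipy's binom_test):
    diff = weight_data.After - weight_data.Before
    n_pos, n_nonzero = (diff > 0).sum(), (diff != 0).sum()
    print "binomial p-value: %f" % stats.binom_test(n_pos, n_nonzero, 0.5)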
End of explanation
"""
stats.wilcoxon(weight_data.After, weight_data.Before)
stats.wilcoxon(weight_data.After - weight_data.Before)
"""
Explanation: Wilcoxon signed-rank test
$H_0\colon med\left(X_1-X_2\right)=0,$
$H_1\colon med\left(X_1-X_2\right)\neq0$
End of explanation
"""
def permutation_t_stat_1sample(sample, mean):
t_stat = sum(map(lambda x: x - mean, sample))
return t_stat
def permutation_zero_distr_1sample(sample, mean, max_permutations = None):
centered_sample = map(lambda x: x - mean, sample)
if max_permutations:
signs_array = set([tuple(x) for x in 2 * np.random.randint(2, size = (max_permutations,
len(sample))) - 1 ])
else:
signs_array = itertools.product([-1, 1], repeat = len(sample))
distr = [sum(centered_sample * np.array(signs)) for signs in signs_array]
return distr
pylab.hist(permutation_zero_distr_1sample(weight_data.After - weight_data.Before, 0.,
max_permutations = 10000))
pylab.show()
def permutation_test(sample, mean, max_permutations = None, alternative = 'two-sided'):
if alternative not in ('two-sided', 'less', 'greater'):
raise ValueError("alternative not recognized\n"
"should be 'two-sided', 'less' or 'greater'")
t_stat = permutation_t_stat_1sample(sample, mean)
zero_distr = permutation_zero_distr_1sample(sample, mean, max_permutations)
if alternative == 'two-sided':
return sum([1. if abs(x) >= abs(t_stat) else 0. for x in zero_distr]) / len(zero_distr)
if alternative == 'less':
return sum([1. if x <= t_stat else 0. for x in zero_distr]) / len(zero_distr)
if alternative == 'greater':
return sum([1. if x >= t_stat else 0. for x in zero_distr]) / len(zero_distr)
print "p-value: %f" % permutation_test(weight_data.After - weight_data.Before, 0.,
max_permutations = 1000)
print "p-value: %f" % permutation_test(weight_data.After - weight_data.Before, 0.,
max_permutations = 50000)
"""
Explanation: Permutation test
$H_0\colon \mathbb{E}(X_1 - X_2) = 0$
$H_1\colon \mathbb{E}(X_1 - X_2) \neq 0$
End of explanation
"""
|
mne-tools/mne-tools.github.io
|
0.16/_downloads/plot_metadata_epochs.ipynb
|
bsd-3-clause
|
# Authors: Chris Holdgraf <choldgraf@gmail.com>
# Jona Sassenhagen <jona.sassenhagen@gmail.com>
# Eric Larson <larson.eric.d@gmail.com>
# License: BSD (3-clause)
import mne
import numpy as np
import matplotlib.pyplot as plt
# Load the data from the internet
path = mne.datasets.kiloword.data_path() + '/kword_metadata-epo.fif'
epochs = mne.read_epochs(path)
# The metadata exists as a Pandas DataFrame
print(epochs.metadata.head(10))
"""
Explanation: Pandas querying and metadata with Epochs objects
Demonstrating pandas-style string querying with Epochs metadata.
For related uses of :class:mne.Epochs, see the starting tutorial
sphx_glr_auto_tutorials_plot_object_epochs.py.
Sometimes you may have a complex trial structure that cannot be easily
summarized as a set of unique integers. In this case, it may be useful to use
the metadata attribute of :class:mne.Epochs objects. This must be a
:class:pandas.DataFrame where each row corresponds to an epoch, and each
column corresponds to a metadata attribute of each epoch. Columns must
contain either strings, ints, or floats.
In this dataset, subjects were presented with individual words
on a screen, and the EEG activity in response to each word was recorded.
We know which word was displayed in each epoch, as well as
extra information about the word (e.g., word frequency).
Loading the data
First we'll load the data. If metadata exists for an :class:mne.Epochs
fif file, it will automatically be loaded in the metadata attribute.
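For intuition, such a metadata table could be constructed by hand as follows (a sketch with invented values; the real table is loaded from the file below and uses these same column names):
```python
import pandas as pd
# One row per epoch; columns hold per-epoch attributes.
example_metadata = pd.DataFrame({'WORD': ['film', 'cent', 'shot'],
                                 'WordFrequency': [2.3, 1.1, 3.0],
                                 'Concreteness': [4.8, 2.1, 5.5]})
```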
End of explanation
"""
av1 = epochs['Concreteness < 5 and WordFrequency < 2'].average()
av2 = epochs['Concreteness > 5 and WordFrequency > 2'].average()
joint_kwargs = dict(ts_args=dict(time_unit='s'),
topomap_args=dict(time_unit='s'))
av1.plot_joint(show=False, **joint_kwargs)
av2.plot_joint(show=False, **joint_kwargs)
"""
Explanation: We can use this metadata attribute to select subsets of Epochs. This
uses the Pandas :meth:pandas.DataFrame.query method under the hood.
Any valid query string will work. Below we'll make two plots to compare
between them:
End of explanation
"""
words = ['film', 'cent', 'shot', 'cold', 'main']
epochs['WORD in {}'.format(words)].plot_image(show=False)
"""
Explanation: Next we'll choose a subset of words to keep.
End of explanation
"""
epochs['cent'].average().plot(show=False, time_unit='s')
"""
Explanation: Note that traditional epochs sub-selection still works. The traditional
MNE methods for selecting epochs will supersede the rich metadata querying.
End of explanation
"""
# Create two new metadata columns
metadata = epochs.metadata
is_concrete = metadata["Concreteness"] > metadata["Concreteness"].median()
metadata["is_concrete"] = np.where(is_concrete, 'Concrete', 'Abstract')
is_long = metadata["NumberOfLetters"] > 5
metadata["is_long"] = np.where(is_long, 'Long', 'Short')
epochs.metadata = metadata
"""
Explanation: Below we'll show a more involved example that leverages the metadata
of each epoch. We'll create a new column in our metadata object and use
it to generate averages for many subsets of trials.
End of explanation
"""
query = "is_long == '{0}' & is_concrete == '{1}'"
evokeds = dict()
for concreteness in ("Concrete", "Abstract"):
for length in ("Long", "Short"):
subset = epochs[query.format(length, concreteness)]
evokeds["/".join((concreteness, length))] = list(subset.iter_evoked())
# For the actual visualisation, we store a number of shared parameters.
style_plot = dict(
colors={"Long": "Crimson", "Short": "Cornflowerblue"},
linestyles={"Concrete": "-", "Abstract": ":"},
split_legend=True,
ci=.68,
show_sensors='lower right',
show_legend='lower left',
truncate_yaxis="max_ticks",
picks=epochs.ch_names.index("Pz"),
)
fig, ax = plt.subplots(figsize=(6, 4))
mne.viz.plot_compare_evokeds(evokeds, axes=ax, **style_plot)
plt.show()
"""
Explanation: Now we can quickly extract (and plot) subsets of the data. For example, to
look at words split by word length and concreteness:
End of explanation
"""
letters = epochs.metadata["NumberOfLetters"].unique().astype(int).astype(str)
evokeds = dict()
for n_letters in letters:
evokeds[n_letters] = epochs["NumberOfLetters == " + n_letters].average()
style_plot["colors"] = {n_letters: int(n_letters)
for n_letters in letters}
style_plot["cmap"] = ("# of Letters", "viridis_r")
del style_plot['linestyles']
fig, ax = plt.subplots(figsize=(6, 4))
mne.viz.plot_compare_evokeds(evokeds, axes=ax, **style_plot)
plt.show()
"""
Explanation: To compare words which are 4, 5, 6, 7 or 8 letters long:
End of explanation
"""
evokeds = dict()
query = "is_concrete == '{0}' & NumberOfLetters == {1}"
for concreteness in ("Concrete", "Abstract"):
for n_letters in letters:
subset = epochs[query.format(concreteness, n_letters)]
evokeds["/".join((concreteness, n_letters))] = subset.average()
style_plot["linestyles"] = {"Concrete": "-", "Abstract": ":"}
fig, ax = plt.subplots(figsize=(6, 4))
mne.viz.plot_compare_evokeds(evokeds, axes=ax, **style_plot)
plt.show()
"""
Explanation: And finally, for the interaction between concreteness and continuous length
in letters:
End of explanation
"""
data = epochs.get_data()
metadata = epochs.metadata.copy()
epochs_new = mne.EpochsArray(data, epochs.info, metadata=metadata)
"""
Explanation: <div class="alert alert-info"><h4>Note</h4><p>Creating an :class:`mne.Epochs` object with metadata is done by passing
a :class:`pandas.DataFrame` to the ``metadata`` kwarg as follows:</p></div>
End of explanation
"""
|
softEcon/course
|
lectures/basics/version_control/lecture.ipynb
|
mit
|
import os
try:
os.mkdir('me')
except OSError:
pass
os.chdir('me')
"""
Explanation: Version Control and Error Tracking
This tutorial showcases the material collected in the Pro Git book, which is available for free online. I also draw on a set of excellent Scientific Python Lecture Notes maintained by Robert Johansson.
Roadmap
Basics
Tutorials
Git
GitHub
Definition: Version Control is a system that records changes to a file or set of files over time so that you can recall specific versions later.
Implementations
* Local Version Control System
* Centralized Version Control System
* Distributed Version Control System
Local Version Control System
<img src="images/local.png">
Centralized Version Control System
<img src="images/centralized.png">
Distributed Version Control Systems
<img src="images/distributed.png" >
Tools
Git
GitHub
<img src="images/git.jpg">
<img src="images/github.png" >
Let us get started!
End of explanation
"""
%%bash
echo 'How does our directory look like?'
ls -al
git init
echo 'How does our directory look like now?'
ls -al
git status
# Let us create files for tracking.
echo 'My Project' > README
echo 'peisenha' > CONTRIBUTING.md
# Set up our identity
git config --global user.name "Philipp Eisenhauer"
git config --global user.email "eisenhauer@policy-lab.org"
"""
Explanation: Setting Up
End of explanation
"""
%%bash
git add README
git status
%%bash
git add CONTRIBUTING.md
git commit -a -m'initial'
%%bash
echo 'Philipp Eisenhauer' > CONTRIBUTING.md
%%bash
git status
%%bash
git commit -a -m'changes to contributers'
"""
Explanation: Recording Changes to the Repository
<img src="images/lifecycle.png">
End of explanation
"""
%%bash
# Remove files
git rm README
git status
%%bash
# Move files
git mv CONTRIBUTING.md CONTRIBUTORS.md
git status
%%bash
git commit -m'reorganizing files'
"""
Explanation: Removing and Moving File
End of explanation
"""
%%bash
git log
"""
Explanation: Viewing the Commit History
End of explanation
"""
%%bash
git remote add origin git@github.com:peisenha/try_git.git
git push -u origin master
%%bash
git clone git@github.com:peisenha/try_git.git ../other
"""
Explanation: Adding a Remote Repository
Let us create a repository called try_git in our GitHub accounts
End of explanation
"""
%%bash
git clone git@github.com:softEcon/course.git /home/vagrant/course_material
"""
Explanation: While we are at it, let us check out the class repository.
End of explanation
"""
%%bash
# Create tag
git tag -a v1.0 -m'our first annotated tag'
%%bash
# List tag
git tag -l
%%bash
# Push to remote repository
git commit -a -m'added first tag'
git push origin --tags
"""
Explanation: From now on, you will find all class material in this repository.
Tracking Changes
GitHub allows you to visualize changes to files over time.
<img src="images/changes.png">
Tagging
Types:
* Lightweight, pointer to specific commit
* Annotated, lightweight tag with additional information
I use annotated tags to release different versions of my research software. You can check out an example here.
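For comparison, a lightweight tag needs neither -a nor -m (a quick sketch, not run in this notebook):
    git tag v1.0-lw
    git tag -l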
End of explanation
"""
%%bash
git branch testing
%%bash
git branch -a
%%bash
git log --oneline --decorate
"""
Explanation: Let us take a look at our repository in a repository viewer. We choose gitg and can install it using the Ubuntu Software Center.
Git Branching
A Commit and its Tree
Objects
* Commit
* Tree
* Blobs
<img src="images/commit-and-tree.png">
Commits and their Parents
<img src="images/commits-and-parents.png">
A Branch and its Commit History
A branch is a lightweight movable pointer to one of the commits. The HEAD is an additional pointer to the local branch you are currently working on.
<img src="images/branch-and-history.png">
Creating a New Branch
I usually create a branch for each feature I would like to add to my research software. Once I am satisfied with my implementation, I merge the development branch back into master.
End of explanation
"""
%%bash
git checkout testing
echo 'Some information on the project.' > README
git add README
git commit -a -m'adding project information'
"""
Explanation: Switching between Branches
Sometimes I will work on the integration and testing of two new features at the same time. Or, even better, somebody else is adding a feature at the same time. In both cases, I can easily switch between branches.
End of explanation
"""
%%bash
git checkout master
%%bash
git checkout testing
"""
Explanation: Let us check how the content of the directory changes as we move between the different branches.
End of explanation
"""
%%bash
git checkout master
echo 'Some other information on the project.' > README
git add README
git commit -a -m'adding other project information'
%%bash
git merge master testing
%%bash
git add README
git commit -m'fixed merge conflict'
"""
Explanation: Basic Merging
At some point I am hopefully done with my code improvement and it is time to merge my development branch into master.
End of explanation
"""
%%bash
git branch -d testing
git branch -a
"""
Explanation: Let us clean up our branches by deleting testing and keep our repository in good standing.
End of explanation
"""
import shutil
os.chdir('../')
for dir_ in ['me', '.ipynb_checkpoints', 'other']:
try:
shutil.rmtree(dir_)
except OSError:
pass
"""
Explanation: Summary
The use of a version control system allows us to improve our economic research in a variety of ways. In my case, I use Git for all my software projects. The debugging process is sped up considerably, as I can track back in time and pin down when I introduced a new error. GitHub allows me to easily collaborate with other researchers (or research assistants) and promote my work to a broader audience.
Cleanup
End of explanation
"""
import urllib; from IPython.core.display import HTML
HTML(urllib.urlopen('http://bit.ly/1K5apRH').read())
"""
Explanation: Miscellaneous
You find some additional resources on version control here.
Formatting
End of explanation
"""
|
uber/pyro
|
tutorial/source/air.ipynb
|
apache-2.0
|
%pylab inline
import os
from collections import namedtuple
import pyro
import pyro.optim as optim
from pyro.infer import SVI, TraceGraph_ELBO
import pyro.distributions as dist
import pyro.poutine as poutine
import pyro.contrib.examples.multi_mnist as multi_mnist
import torch
import torch.nn as nn
from torch.nn.functional import relu, sigmoid, softplus, grid_sample, affine_grid
import numpy as np
smoke_test = ('CI' in os.environ)
assert pyro.__version__.startswith('1.7.0')
"""
Explanation: Attend Infer Repeat
In this tutorial we will implement the model and inference strategy described in "Attend, Infer, Repeat:
Fast Scene Understanding with Generative Models" (AIR) [1] and apply it to the multi-mnist dataset.
A standalone implementation is also available.
End of explanation
"""
inpath = '../../examples/air/.data'
X_np, _ = multi_mnist.load(inpath)
X_np = X_np.astype(np.float32)
X_np /= 255.0
mnist = torch.from_numpy(X_np)
def show_images(imgs):
figure(figsize=(8, 2))
for i, img in enumerate(imgs):
subplot(1, len(imgs), i + 1)
axis('off')
imshow(img.data.numpy(), cmap='gray')
show_images(mnist[9:14])
"""
Explanation: Introduction
The model described in [1] is a generative model of scenes. In this tutorial we will use it to model images from a dataset that is similar to the multi-mnist dataset in [1]. Here are some data points from this data set:
End of explanation
"""
# Create the neural network. This takes a latent code, z_what, to pixel intensities.
class Decoder(nn.Module):
def __init__(self):
super().__init__()
self.l1 = nn.Linear(50, 200)
self.l2 = nn.Linear(200, 400)
def forward(self, z_what):
h = relu(self.l1(z_what))
return sigmoid(self.l2(h))
decode = Decoder()
z_where_prior_loc = torch.tensor([3., 0., 0.])
z_where_prior_scale = torch.tensor([0.1, 1., 1.])
z_what_prior_loc = torch.zeros(50)
z_what_prior_scale = torch.ones(50)
def prior_step_sketch(t):
# Sample object pose. This is a 3-dimensional vector representing x,y position and size.
z_where = pyro.sample('z_where_{}'.format(t),
dist.Normal(z_where_prior_loc.expand(1, -1),
z_where_prior_scale.expand(1, -1))
.to_event(1))
# Sample object code. This is a 50-dimensional vector.
z_what = pyro.sample('z_what_{}'.format(t),
dist.Normal(z_what_prior_loc.expand(1, -1),
z_what_prior_scale.expand(1, -1))
.to_event(1))
# Map code to pixel space using the neural network.
y_att = decode(z_what)
# Position/scale object within larger image.
y = object_to_image(z_where, y_att)
return y
"""
Explanation: To get an idea where we're heading, we first give a brief overview of the model and the approach we'll take to inference. We'll follow the naming conventions used in [1] as closely as possible.
AIR decomposes the process of generating an image into discrete steps, each of which generates only part of the image. More specifically, at each step the model will generate a small image (y_att) by passing a latent "code" variable (z_what) through a neural network. We'll refer to these small images as "objects". In the case of AIR applied to the multi-mnist dataset we expect each of these objects to represent a single digit. The model also includes uncertainty about the location and size of each object. We'll describe an object's location and size as its "pose" (z_where). To produce the final image, each object will first be located within a larger image (y) using the pose information z_where. Finally, the ys from all time steps will be combined additively to produce the final image x.
Here's a picture (reproduced from [1]) that shows two steps of this process:
Inference is performed in this model using amortized stochastic variational inference (SVI). The parameters of the neural network are also optimized during inference. Performing inference in such rich models is always difficult, but the presence of discrete choices (the number of steps in this case) makes inference in this model particularly tricky. For this reason the authors use a technique called data dependent baselines to achieve good performance. This technique can be implemented in Pyro, and we'll see how later in the tutorial.
Model
Generating a single object
Let's look at the model more closely. At the core of the model is the generative process for a single object. Recall that:
At each step a single object is generated.
Each object is generated by passing its latent code through a neural network.
We maintain uncertainty about the latent code used to generate each object, as well as its pose.
This can be expressed in Pyro like so:
End of explanation
"""
def expand_z_where(z_where):
# Takes 3-dimensional vectors, and massages them into 2x3 matrices with elements like so:
# [s,x,y] -> [[s,0,x],
# [0,s,y]]
n = z_where.size(0)
expansion_indices = torch.LongTensor([1, 0, 2, 0, 1, 3])
out = torch.cat((torch.zeros([1, 1]).expand(n, 1), z_where), 1)
return torch.index_select(out, 1, expansion_indices).view(n, 2, 3)
def object_to_image(z_where, obj):
n = obj.size(0)
theta = expand_z_where(z_where)
grid = affine_grid(theta, torch.Size((n, 1, 50, 50)))
out = grid_sample(obj.view(n, 1, 20, 20), grid)
return out.view(n, 50, 50)
"""
Explanation: Hopefully the use of pyro.sample and PyTorch networks within a model seems familiar at this point. If not you might want to review the VAE tutorial. One thing to note is that we include the current step t in the name passed to pyro.sample to ensure that names are unique across steps.
The object_to_image function is specific to this model and warrants further attention. Recall that the neural network (decode here) will output a small image, and that we would like to add this to the output image after performing any translation and scaling required to achieve the pose (location and size) described by z_where. It's not clear how to do this, and in particular it's not obvious that this can be implemented in a way that preserves the differentiability of our model, which we require in order to perform SVI. However, it turns out we can do this this using a spatial transformer network (STN) [2].
Happily for us, PyTorch makes it easy to implement a STN using its grid_sample and affine_grid functions. object_to_image is a simple function that calls these, doing a little extra work to massage z_where into the expected format.
End of explanation
"""
pyro.set_rng_seed(0)
samples = [prior_step_sketch(0)[0] for _ in range(5)]
show_images(samples)
"""
Explanation: A discussion of the details of the STN is beyond the scope of this tutorial. For our purposes however, it suffices to keep in mind that object_to_image takes the small image generated by the neural network and places it within a larger image with the desired pose.
Let's visualize the results of calling prior_step_sketch a few times to clarify this:
End of explanation
"""
pyro.set_rng_seed(0)
def geom(num_trials=0):
p = torch.tensor([0.5])
x = pyro.sample('x{}'.format(num_trials), dist.Bernoulli(p))
if x[0] == 1:
return num_trials
else:
return geom(num_trials + 1)
# Generate some samples.
for _ in range(5):
print('sampled {}'.format(geom()))
"""
Explanation: Generating an entire image
Having completed the implementation of a single step, we next consider how we can use this to generate an entire image. Recall that we would like to maintain uncertainty over the number of steps used to generate each data point. One choice we could make for the prior over the number of steps is the geometric distribution, which can be expressed as follows:
End of explanation
"""
def geom_prior(x, step=0):
p = torch.tensor([0.5])
i = pyro.sample('i{}'.format(step), dist.Bernoulli(p))
if i[0] == 1:
return x
else:
x = x + prior_step_sketch(step)
return geom_prior(x, step + 1)
"""
Explanation: This is a direct translation of the definition of the geometric distribution as the number of failures before a success in a series of Bernoulli trials. Here we express this as a recursive function that passes around a counter representing the number of trials made, num_trials. This function samples from the Bernoulli and returns num_trials if x == 1 (which represents success), otherwise it makes a recursive call, incrementing the counter.
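As a quick sanity check (a sketch, not part of the model), the recursion draws from the same distribution as torch's built-in geometric distribution, which also counts failures before the first success:
```python
g = torch.distributions.Geometric(probs=0.5)
print(g.sample((5,)))  # number of failures before the first success, per draw
```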
The use of a geometric prior is appealing because it does not bound the number of steps the model can use a priori. It's also convenient, because by extending geometric to generate an object before each recursive call, we turn this from a geometric distribution over counts to a distribution over images with a geometrically distributed number of steps.
End of explanation
"""
pyro.set_rng_seed(4)
x_empty = torch.zeros(1, 50, 50)
samples = [geom_prior(x_empty)[0] for _ in range(5)]
show_images(samples)
"""
Explanation: Let's visualize some samples from this distribution:
End of explanation
"""
def prior_step(n, t, prev_x, prev_z_pres):
# Sample variable indicating whether to add this object to the output.
# We multiply the success probability of 0.5 by the value sampled for this
# choice in the previous step. By doing so we add objects to the output until
# the first 0 is sampled, after which we add no further objects.
z_pres = pyro.sample('z_pres_{}'.format(t),
dist.Bernoulli(0.5 * prev_z_pres)
.to_event(1))
z_where = pyro.sample('z_where_{}'.format(t),
dist.Normal(z_where_prior_loc.expand(n, -1),
z_where_prior_scale.expand(n, -1))
.mask(z_pres)
.to_event(1))
z_what = pyro.sample('z_what_{}'.format(t),
dist.Normal(z_what_prior_loc.expand(n, -1),
z_what_prior_scale.expand(n, -1))
.mask(z_pres)
.to_event(1))
y_att = decode(z_what)
y = object_to_image(z_where, y_att)
# Combine the image generated at this step with the image so far.
x = prev_x + y * z_pres.view(-1, 1, 1)
return x, z_pres
"""
Explanation: Aside: Vectorized mini-batches
In our final implementation we would like to generate a mini batch of samples in parallel for efficiency. While Pyro supports vectorized mini batches with plate, it currently requires that each sample statement within plate makes a choice for all samples in the mini batch. Another way to say this is that each sample in the mini batch will encounter the same set of sample statements. This is problematic for us, because as we've just seen, samples can make differing numbers of choices under our model.
One way around this is to have all samples take the same number of steps, but to nullify (so far as is possible) the effect of the superfluous random choices made after the sample is conceptually "complete". We'll say that a sample is "complete" once a zero is sampled from the Bernoulli random choice, and prior to that we'll say that a sample is "active".
The first part of this is straightforward. Following [1] we choose to take a fixed number of steps for each sample. (By doing so we no longer specify a geometric distribution over the number of steps, since the number of steps is now bounded. It would be interesting to explore the alternative of having each sample in the batch take steps until a successful Bernoulli trial has occurred in each, as this would retain the geometric prior.)
To address the second part we will take the following steps:
Only add objects to the output while a sample is active.
Set the log probability of random choices made by complete samples to zero. (Since the SVI loss is a weighted sum of log probabilities, setting a choice's log probability to zero effectively removes its contribution to the loss.) This is achieved using the mask() method of distributions.
(Looking ahead, we'll need to take similar measures when we implement the guide and add baselines later in this tutorial.)
Of course, one thing we can't undo is the work done in performing unnecessary sampling. Nevertheless, even though this approach performs redundant computation, the gains from using mini batches are so large that this is still a win overall.
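As a quick illustration of the mask() behavior mentioned above (a sketch, not part of the model), masked-out elements contribute exactly zero log probability:
```python
masked = dist.Normal(torch.zeros(2), torch.ones(2)).mask(torch.tensor([True, False]))
print(masked.log_prob(torch.zeros(2)))  # tensor([-0.9189,  0.0000])
```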
Here's an updated model step function that implements these ideas. In summary, the changes from prior_step_sketch are:
We've added a new parameter n that specifies the size of the mini batch.
We now conditionally add the object to the output image based on a value sampled from a Bernoulli distribution.
We use mask() to zero out the log probability of random choices made by complete samples.
End of explanation
"""
def prior(n):
x = torch.zeros(n, 50, 50)
z_pres = torch.ones(n, 1)
for t in range(3):
x, z_pres = prior_step(n, t, x, z_pres)
return x
"""
Explanation: By iterating this step function we can produce an entire image, composed of multiple objects. Since each image in the multi-mnist dataset contains zero, one or two digits we will allow the model to use up to (and including) three steps. In this way we ensure that inference has to avoid using one or more steps in order to correctly count the number of objects in the input.
End of explanation
"""
pyro.set_rng_seed(121)
show_images(prior(5))
"""
Explanation: We have now fully specified the prior for our model. Let's visualize some samples to get a feel for this distribution:
End of explanation
"""
def model(data):
# Register network for optimization.
pyro.module("decode", decode)
with pyro.plate('data', data.size(0)) as indices:
batch = data[indices]
x = prior(batch.size(0)).view(-1, 50 * 50)
sd = (0.3 * torch.ones(1)).expand_as(x)
pyro.sample('obs', dist.Normal(x, sd).to_event(1),
obs=batch)
"""
Explanation: Specifying the likelihood
The last thing we need in order to complete the specification of the model is a likelihood function. Following [1] we will use a Gaussian likelihood with a fixed standard deviation of 0.3. This is straight forward to implement with pyro.sample using the obs argument.
When we later come to perform inference we will find it convenient to package the prior and likelihood into a single function. This is also a convenient place to introduce plate, which we use to implement data subsampling, and to register the networks we would like to optimize with pyro.module.
End of explanation
"""
def guide_step_basic(t, data, prev):
# The RNN takes the images and choices from the previous step as input.
rnn_input = torch.cat((data, prev.z_where, prev.z_what, prev.z_pres), 1)
h, c = rnn(rnn_input, (prev.h, prev.c))
# Compute parameters for all choices made this step, by passing
# the RNN hidden start through another neural network.
z_pres_p, z_where_loc, z_where_scale, z_what_loc, z_what_scale = predict_basic(h)
z_pres = pyro.sample('z_pres_{}'.format(t),
dist.Bernoulli(z_pres_p * prev.z_pres))
z_where = pyro.sample('z_where_{}'.format(t),
dist.Normal(z_where_loc, z_where_scale))
z_what = pyro.sample('z_what_{}'.format(t),
dist.Normal(z_what_loc, z_what_scale))
return # values for next step
"""
Explanation: Guide
Following [1] we will perform amortized stochastic variational inference in this model. Pyro provides general purpose machinery that implements most of this inference strategy, but as we have seen in earlier tutorials we are required to provide a model specific guide. What we call a guide in Pyro is exactly the entity called the "inference network" in the paper.
We will structure the guide around a recurrent network to allow the guide to capture (some of) the dependencies we expect to be present in the true posterior. At each step the recurrent network will generate the parameters for the choices made within the step. The values sampled will be fed back into the recurrent network so that this information can be used when computing the parameters for the next step. The guide for the Deep Markov Model shares a similar structure.
As in the model, the core of the guide is the logic for a single step. Here's a sketch of an implementation of this:
End of explanation
"""
rnn = nn.LSTMCell(2554, 256)
# Takes pixel intensities of the attention window to parameters (mean,
# standard deviation) of the distribution over the latent code,
# z_what.
class Encoder(nn.Module):
def __init__(self):
super().__init__()
self.l1 = nn.Linear(400, 200)
self.l2 = nn.Linear(200, 100)
def forward(self, data):
h = relu(self.l1(data))
a = self.l2(h)
return a[:, 0:50], softplus(a[:, 50:])
encode = Encoder()
# Takes the guide RNN hidden state to parameters of
# the guide distributions over z_where and z_pres.
class Predict(nn.Module):
def __init__(self, ):
super().__init__()
self.l = nn.Linear(256, 7)
def forward(self, h):
a = self.l(h)
z_pres_p = sigmoid(a[:, 0:1]) # Squish to [0,1]
z_where_loc = a[:, 1:4]
z_where_scale = softplus(a[:, 4:]) # Squish to >0
return z_pres_p, z_where_loc, z_where_scale
predict = Predict()
def guide_step_improved(t, data, prev):
rnn_input = torch.cat((data, prev.z_where, prev.z_what, prev.z_pres), 1)
h, c = rnn(rnn_input, (prev.h, prev.c))
z_pres_p, z_where_loc, z_where_scale = predict(h)
z_pres = pyro.sample('z_pres_{}'.format(t),
dist.Bernoulli(z_pres_p * prev.z_pres)
.to_event(1))
z_where = pyro.sample('z_where_{}'.format(t),
dist.Normal(z_where_loc, z_where_scale)
.to_event(1))
# New. Crop a small window from the input.
x_att = image_to_object(z_where, data)
# Compute the parameter of the distribution over z_what
# by passing the window through the encoder network.
z_what_loc, z_what_scale = encode(x_att)
z_what = pyro.sample('z_what_{}'.format(t),
dist.Normal(z_what_loc, z_what_scale)
.to_event(1))
return # values for next step
"""
Explanation: This would be a reasonable guide to use with this model, but the paper describes a crucial improvement we can make to the code above. Recall that the guide will output information about an object's pose and its latent code at each step. The improvement we can make is based on the observation that once we have inferred the pose of an object, we can do a better job of inferring its latent code if we use the pose information to crop the object from the input image, and pass the result (which we'll call a "window") through an additional network in order to compute the parameters of the latent code. We'll call this additional network the "encoder" below.
Here's how we can implement this improved guide, and a fleshed out implementation of the networks involved:
End of explanation
"""
def z_where_inv(z_where):
# Take a batch of z_where vectors, and compute their "inverse".
# That is, for each row compute:
# [s,x,y] -> [1/s,-x/s,-y/s]
# These are the parameters required to perform the inverse of the
# spatial transform performed in the generative model.
n = z_where.size(0)
out = torch.cat((torch.ones([1, 1]).type_as(z_where).expand(n, 1), -z_where[:, 1:]), 1)
out = out / z_where[:, 0:1]
return out
def image_to_object(z_where, image):
n = image.size(0)
theta_inv = expand_z_where(z_where_inv(z_where))
grid = affine_grid(theta_inv, torch.Size((n, 1, 20, 20)))
out = grid_sample(image.view(n, 1, 50, 50), grid)
return out.view(n, -1)
"""
Explanation: Since we would like to maintain differentiability of the guide we again use a STN to perform the required "cropping". The image_to_object function performs the opposite transform to the object_to_image function used in the guide. That is, the former takes a small image and places it on a larger image, and the latter crops a small image from a larger image.
End of explanation
"""
bl_rnn = nn.LSTMCell(2554, 256)
bl_predict = nn.Linear(256, 1)
# Use an RNN to compute the baseline value. This network takes the
# input images and the values samples so far as input.
def baseline_step(x, prev):
rnn_input = torch.cat((x,
prev.z_where.detach(),
prev.z_what.detach(),
prev.z_pres.detach()), 1)
bl_h, bl_c = bl_rnn(rnn_input, (prev.bl_h, prev.bl_c))
bl_value = bl_predict(bl_h) * prev.z_pres
return bl_value, bl_h, bl_c
"""
Explanation: Another perspective
So far we've considered the model and the guide in isolation, but we gain an interesting perspective if we zoom out and look at the model and guide computation as a whole. Doing so, we see that at each step AIR includes a sub-computation that has the same structure as a Variational Auto-encoder (VAE).
To see this, notice that the guide passes the window through a neural network (the encoder) to generate the parameters of the distribution over a latent code, and the model passes samples from this latent code distribution through another neural network (the decoder) to generate an output window. This structure is highlighted in the following figure, reproduced from [1]:
From this perspective AIR is seen as a sequential variant of the VAE. The act of cropping a small window from the input image serves to restrict the attention of a VAE to a small region of the input image at each step; hence "Attend, Infer, Repeat".
Inference
As we mentioned in the introduction, successfully performing inference in this model is a challenge. In particular, the presence of discrete choices in the model makes inference trickier than in a model in which all choices can be reparameterized. The underlying problem we face is that the gradient estimates we use in the optimization performed by variational inference have much higher variance in the presence of non-reparameterizable choices.
To bring this variance under control, the paper applies a technique called "data dependent baselines" (AKA "neural baselines") to the discrete choices in the model.
Data dependent baselines
Happily for us, Pyro includes support for data dependent baselines. If you are not already familiar with this idea, you might want to read our introduction before continuing. As model authors we only have to implement the neural network, pass it our data as input, and feed its output to pyro.sample. Pyro's inference back-end will ensure that the baseline is included in the gradient estimator used for inference, and that the network parameters are updated appropriately.
Let's see how we can add data dependent baselines to our AIR implementation. We need a neural network that can output a (scalar) baseline value at each discrete choice in the guide, having received a multi-mnist image and the values sampled by the guide so far as input. Notice that this is very similar to the structure of the guide network, and indeed we will again use a recurrent network.
To implement this we will first write a short helper function that implements a single step of the RNN we've just described:
End of explanation
"""
GuideState = namedtuple('GuideState', ['h', 'c', 'bl_h', 'bl_c', 'z_pres', 'z_where', 'z_what'])
def initial_guide_state(n):
return GuideState(h=torch.zeros(n, 256),
c=torch.zeros(n, 256),
bl_h=torch.zeros(n, 256),
bl_c=torch.zeros(n, 256),
z_pres=torch.ones(n, 1),
z_where=torch.zeros(n, 3),
z_what=torch.zeros(n, 50))
def guide_step(t, data, prev):
rnn_input = torch.cat((data, prev.z_where, prev.z_what, prev.z_pres), 1)
h, c = rnn(rnn_input, (prev.h, prev.c))
z_pres_p, z_where_loc, z_where_scale = predict(h)
# Here we compute the baseline value, and pass it to sample.
baseline_value, bl_h, bl_c = baseline_step(data, prev)
z_pres = pyro.sample('z_pres_{}'.format(t),
dist.Bernoulli(z_pres_p * prev.z_pres)
.to_event(1),
infer=dict(baseline=dict(baseline_value=baseline_value.squeeze(-1))))
z_where = pyro.sample('z_where_{}'.format(t),
dist.Normal(z_where_loc, z_where_scale)
.mask(z_pres)
.to_event(1))
x_att = image_to_object(z_where, data)
z_what_loc, z_what_scale = encode(x_att)
z_what = pyro.sample('z_what_{}'.format(t),
dist.Normal(z_what_loc, z_what_scale)
.mask(z_pres)
.to_event(1))
return GuideState(h=h, c=c, bl_h=bl_h, bl_c=bl_c, z_pres=z_pres, z_where=z_where, z_what=z_what)
def guide(data):
# Register networks for optimization.
pyro.module('rnn', rnn),
pyro.module('predict', predict),
pyro.module('encode', encode),
pyro.module('bl_rnn', bl_rnn)
pyro.module('bl_predict', bl_predict)
with pyro.plate('data', data.size(0), subsample_size=64) as indices:
batch = data[indices]
state = initial_guide_state(batch.size(0))
steps = []
for t in range(3):
state = guide_step(t, batch, state)
steps.append(state)
return steps
"""
Explanation: There are two important details to highlight here:
First, we detach values sampled by the guide before passing them to the baseline network. This is important as the baseline network and the guide network are entirely separate networks optimized with different objectives. Without this, gradients would flow from the baseline network into the guide network. When using data dependent baselines we must do this whenever we feed values sampled by the guide into the baselines network. (If we don't we'll trigger a PyTorch run-time error.)
Second, we multiply the output of the baseline network by the value of z_pres from the previous step. This relieves the baseline network of the burden of having to output accurate predictions for completed samples. (The outputs for completed samples will be multiplied by zero, so the derivative of the baseline loss for these outputs will be zero.) It's OK to do this because in effect we've already removed random choices for completed samples from the inference objective, so there's no need to apply any variance reduction to them.
We now have everything we need to complete the implementation of the guide. Our final guide_step function will be very similar to guide_step_improved introduced above. The only changes are:
We now call the baseline_step helper and pass the baseline value it returns to pyro.sample.
We now mask out the z_where and z_what choices for completed samples. This serves exactly the same purpose as the masks added to the model. (See the earlier discussion for the motivation behind this change.)
We'll also write a guide function that will iterate guide_step in order to provide a guide for the whole model.
End of explanation
"""
data = mnist.view(-1, 50 * 50)
svi = SVI(model,
guide,
optim.Adam({'lr': 1e-4}),
loss=TraceGraph_ELBO())
for i in range(5):
loss = svi.step(data)
print('i={}, elbo={:.2f}'.format(i, loss / data.size(0)))
"""
Explanation: Putting it all together
We have now completed the implementation of the model and the guide. As we have seen in earlier tutorials, we need to write only a few more lines of code to begin performing inference:
End of explanation
"""
|
abevieiramota/data-science-cookbook
|
2016/naive-bayes/NaiveBayesAlgorithm.ipynb
|
mit
|
from collections import defaultdict
from functools import reduce
import math
class NaiveBayes:
def __init__(self):
self.freqFeature = defaultdict(int)
self.freqLabel = defaultdict(int)
# condFreqFeature[label][feature]
self.condFreqFeature = defaultdict(lambda: defaultdict(int))
"""
Explanation: Naive Bayes
Let's create the NaiveBayes class to represent our algorithm.
The init method is the constructor; it initializes the variables of our model.
The generated model consists basically of word frequencies, which in our case represent the possible values of each feature and label.
defaultdict is used to initialize our dictionary with default values, 0 (int) in this case, for keys that we try to access but have not been added yet.
End of explanation
"""
def countFrequencies(self):
    pass  # left as an exercise

def countCondFrequencies(self):
    pass  # left as an exercise
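# Illustration only (a toy example, not the exercise solution): how the defaultdict
# counters could be filled from a small dataset, assuming each sample is a list of
# categorical feature values and y is its label.
toy_x = [['low_buy', '2_doors'], ['high_buy', '2_doors']]
toy_y = ['unacc', 'acc']
toy_freqFeature, toy_freqLabel = defaultdict(int), defaultdict(int)
toy_condFreqFeature = defaultdict(lambda: defaultdict(int))
for features, label in zip(toy_x, toy_y):
    toy_freqLabel[label] += 1
    for f in features:
        toy_freqFeature[f] += 1
        toy_condFreqFeature[label][f] += 1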
"""
Explanation: Model
Since the model is basically represented by word frequencies, we need to categorize the possible values of the features. After that, we do the counting.
countFrequencies: counts how many times each feature value and each label appears in the whole training dataset, independently.
countCondFrequencies: counts how many times each feature value appears for each possible label.
End of explanation
"""
def train(self, dataSet_x, dataSet_y):
    pass  # left as an exercise
"""
Explanation: Training
Let's train our model. What should the training function consist of?
End of explanation
"""
def predict(self, dataSet_x):
    # Laplace correction:
    # P( f | l ) = (freq( f | l ) + laplace*) / ( freq(l)** + qnt(distinct(f))*** )
    #
    # *   -> Laplace smoothing: add 1
    # **  -> frequency with which the label value appears
    # *** -> number of distinct features
    #
    # Because floating-point underflow is possible, it is better to compute
    # P(x1|l)*P(x2|l) ... as exp(Log(P(x1|l)) + Log(P(x2|l))) ...
    pass  # left as an exercise
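# Illustration only (hypothetical counts, not the exercise solution): Laplace
# smoothing and the log-space product used to avoid floating-point underflow.
freq_f_given_l, freq_l, n_distinct_f = 3, 10, 21
p_f_given_l = (freq_f_given_l + 1.0) / (freq_l + n_distinct_f)
log_probs = [math.log(p_f_given_l)] * 6        # pretend there are 6 features
joint = math.exp(sum(log_probs))               # equals the product of the probabilities
print(joint)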
"""
Explanation: Classification
With the model in hand, we can now classify our dataset. Below are some hints for better handling the data in our function.
End of explanation
"""
import random
# Car dataset
# Attribute Information:
#
# Class Values:
#
# unacc, acc, good, vgood
#
# Attributes:
#
# buying: vhigh, high, med, low.
# maint: vhigh, high, med, low.
# doors: 2, 3, 4, 5more.
# persons: 2, 4, more.
# lug_boot: small, med, big.
# safety: low, med, high.
# Returns the dataset
def readFile(path):
rawDataset = open(path, 'r')
suffix = ['_buy', '_maint', '_doors', '_pers', '_lug', '_safety', '_class']
dataset = []
rawDataset.seek(0)
for line in rawDataset:
l = line.split(',')
l[-1] = l[-1].replace("\n", "")
newTuple = list(map(lambda pair: pair[0] + pair[1], zip(l, suffix)))
dataset.append( newTuple )
return dataset
def main():
preparedDataset = readFile('carData.txt')
random.shuffle(preparedDataset)
dataset = []
#Features
dataset.append([])
#Label
dataset.append([])
for t in preparedDataset:
dataset[0].append(t[:-1])
dataset[1].append(t[-1])
dataSet_x = dataset[0]
dataSet_y = dataset[1]
nTuples = len(dataSet_x)
nToTrain = int(nTuples * 0.7)
dataSet_x_train = dataSet_x[:nToTrain]
dataSet_y_train = dataSet_y[:nToTrain]
dataSet_x_test = dataSet_x[nToTrain:]
dataSet_y_test = dataSet_y[nToTrain:]
naive = NaiveBayes()
naive.train(dataSet_x_train, dataSet_y_train)
accuracy = 0.0
results = naive.predict(dataSet_x_test)
for index, r in enumerate(results):
yPredicted = max(r, key=r.get)
y = dataSet_y_test[index]
if(y == yPredicted):
accuracy += 1.0
print(accuracy / len(dataSet_y_test))
main()
"""
Explanation: Preprocessing
Below is a helper function for reading our dataset, followed by a step that splits the data into training and test sets.
End of explanation
"""
|
jameshensman/GPclust
|
notebooks/OMGP_demo.ipynb
|
gpl-3.0
|
%matplotlib inline
import numpy as np
import GPy
from GPclust import OMGP
import matplotlib
matplotlib.rcParams['figure.figsize'] = (12,6)
from matplotlib import pyplot as plt
"""
Explanation: Overlapping Mixtures of Gaussian Processses
Valentine Svensson 2015 <br> (with small edits by James Hensman November 2015)
This illustrates use of the OMGP model described in
Overlapping Mixtures of Gaussian Processes for the data association problem
Miguel Lázaro-Gredilla, Steven Van Vaerenbergh, Neil D. Lawrence
Pattern Recognition 2012
The GPclust implementation makes use of the collapsed variational mixture model for GP assignment.
End of explanation
"""
XY = np.loadtxt('../data/split_data_test.csv', delimiter=',', skiprows=1, usecols=[1, 2])
X = XY[:, 0, None]
Y = XY[:, 1, None]
plt.scatter(X, Y);
"""
Explanation: Diverging trend separation
One application of the OMGP model could be to find diverging trends among populations over time. Imagine for example two species evolving from a common ancestor over time.
We load some pre-generated data which diverge over time.
End of explanation
"""
m = OMGP(X, Y, K=2, variance=0.01, prior_Z='DP')
m.log_likelihood()
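# Illustrative sketch (an addition to the original notebook, not run on this data):
# the OMGP constructor also accepts an explicit list of GPy kernels, one per
# component, so periodicity could be encoded like this:
# periodic_kernels = [GPy.kern.StdPeriodic(1), GPy.kern.RBF(1)]
# m_periodic = OMGP(X, Y, K=2, variance=0.01, prior_Z='DP', kernels=periodic_kernels)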
"""
Explanation: We define a model assuming K = 2 trends. By default the model will be populated by K RBF kernels. The OMGP implementation is compatible with most kernels in GPy, so you can, for example, encode periodicity in the model (see the commented sketch above).
End of explanation
"""
m.plot()
"""
Explanation: A simple plot function is included which illustrates the assignment probability for each data point; it also shows the posterior mean and confidence intervals for each Gaussian Process.
End of explanation
"""
m.plot_probs(gp_num=0)
"""
Explanation: There is also a function for plotting the assignment probability for a given GP directly. Since we haven't optimized the mixture parameters yet the assignment probability is just a random draw from the prior.
End of explanation
"""
m.optimize(step_length=0.01, maxiter=20)
m.plot()
m.plot_probs()
"""
Explanation: We can first perform a quick optimization to find the rough trends.
End of explanation
"""
m
"""
Explanation: The model identifies the branches of the time series, and in particular the non-branched region has ambiguous GP assignment. In this region the two trends share information for prediction.
Like any GPy model the hyper parameters can be inspected.
End of explanation
"""
m.optimize(step_length=0.01, maxiter=200)
m
m.plot()
m.plot_probs()
"""
Explanation: We continue by letting the model optimize some more, and also allow it to optimize the hyper parameters. The hyper parameter optimization works best if the mixture parameters have converged or are close to converging.
End of explanation
"""
x1 = np.random.uniform(0, 10, (100, 1))
x2 = np.random.uniform(0, 10, (100, 1))
y1 = 4 * np.random.randn(*x1.shape)
y2 = 3 * np.sin(x2) + 0.5 * np.random.randn(*x2.shape)
x = np.vstack((x1, x2))
y = np.vstack((y1, y2))
plt.scatter(x, y);
"""
Explanation: Separating signal from noise
An interesting application of the OMGP model pointed out in the original publication is its use for robust GP regression.
Let's illustrate this by creating sinusoidal test data with background noise.
End of explanation
"""
kernels = [GPy.kern.RBF(1)]
m = OMGP(x, y, K=1, prior_Z='DP', kernels=kernels)
m.variance = 3
m.hyperparam_interval = 100
m.rbf.lengthscale = 2
m.optimize(verbose=False)
m.plot()
"""
Explanation: First we make a model with only one mixture component / kernel. This is equivalent to normal GP regression.
End of explanation
"""
kernels = [GPy.kern.White(1, name='Noise'), GPy.kern.RBF(1, name='Signal')]
m = OMGP(x, y, K=2, prior_Z='DP', kernels=kernels)
m.variance = 3
m.hyperparam_interval = 250
m.Signal.lengthscale = 2
m.plot(0)
m.optimize(step_length=0.01, verbose=False)
m
m.plot()
"""
Explanation: Now we instead view this as a mixture problem, and consider two different kinds of kernels for the different GP components: one encoding white noise, and another which can encode a trend over time (an RBF kernel in this case).
End of explanation
"""
m.plot_probs(0)
plt.axhline(0.75);
thr = 0.75
idx = np.where(m.phi[:,0] < thr)[0]
nidx = np.where(m.phi[:,0] >= thr)[0]
plt.figure(figsize=(12,10))
plt.subplot(211)
plt.scatter(x[idx], y[idx]);
plt.title('Signal')
plt.subplot(212, sharey=plt.gca())
plt.scatter(x[nidx], y[nidx]);
plt.title('Noise');
"""
Explanation: The trend over time is much more noticeable, and the confidence intervals are smaller.
Noisy points will have high assignment probability to the 'noise GP', while the assignment of the sinusoidal points is ambiguous. We can use this to separate the points which are more likely to be noise from the remaining points.
End of explanation
"""
|
google/jax-md
|
notebooks/npt_simulation.ipynb
|
apache-2.0
|
%%capture
#@title Imports & Utils
!pip install jax-md
import numpy as onp
from jax.config import config ; config.update('jax_enable_x64', True)
import jax.numpy as np
from jax import random
from jax import jit
from jax import lax
from jax import ops
import time
from jax_md import space, smap, energy, minimize, quantity, simulate
import matplotlib
import matplotlib.pyplot as plt
import seaborn as sns
sns.set_style(style='white')
def format_plot(x, y):
plt.xlabel(x, fontsize=20)
plt.ylabel(y, fontsize=20)
def finalize_plot(shape=(1, 1)):
plt.gcf().set_size_inches(
shape[0] * 1.5 * plt.gcf().get_size_inches()[1],
shape[1] * 1.5 * plt.gcf().get_size_inches()[1])
plt.tight_layout()
"""
Explanation: <a href="https://colab.sandbox.google.com/github/google/jax-md/blob/main/notebooks/npt_simulation.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
End of explanation
"""
N = 400
dimension = 2
box = quantity.box_size_at_number_density(N, 0.8, 2)
dt = 5e-3
displacement, shift = space.periodic_general(box)
kT = np.float32(0.01)
P = lambda t: np.where(t < 100.0, 0.05, 0.15)
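# Quick illustration (example values are ours, not part of the original simulation):
# positions are stored fractionally in the unit cube and can be mapped to real
# coordinates with `space.transform`, which here simply scales by the box.
R_frac_example = np.array([[0.25, 0.5]])
print(space.transform(box, R_frac_example))  # -> box * [[0.25, 0.5]]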
"""
Explanation: Constant Pressure Simulation
Here we demonstrate some code to run a simulation in the NPT ensemble with constant temperature and pressure. We start off by setting up some parameters of the simulation. This includes a pressure schedule that starts at a relatively low pressure before instantaneously tripling the pressure.
Note that unlike in the case of NVT and NVE simulations, NPT simulations must be performed with periodic_general boundary conditions. For now NPT simulations must be performed with fractional coordinates, where the atom positions are stored in the unit cube. This restriction could likely be relaxed in the future if desired.
End of explanation
"""
key = random.PRNGKey(0)
key, split = random.split(key)
R = random.uniform(split, (N, dimension), dtype=np.float64)
# The system ought to be a 50:50 mixture of two types of particles, one
# large and one small.
sigma = np.array([[1.0, 1.2], [1.2, 1.4]])
N_2 = int(N / 2)
species = np.where(np.arange(N) < N_2, 0, 1)
"""
Explanation: Next we need to generate some random positions as well as particle sizes. Because we are using periodic_general boundary conditions with fractional coordinates, we produce initial particle positions in the unit cube.
End of explanation
"""
energy_fn = energy.soft_sphere_pair(displacement, species=species, sigma=sigma)
init, apply = simulate.npt_nose_hoover(energy_fn, shift, dt, P(0.), kT)
state = init(key, R, box)
"""
Explanation: Then we need to construct our simulation operators.
End of explanation
"""
write_every = 100
def step_fn(i, state_and_log):
state, log = state_and_log
t = i * dt
# Log information about the simulation.
T = quantity.temperature(state.velocity)
log['kT'] = log['kT'].at[i].set(T)
box = simulate.npt_box(state)
KE = quantity.kinetic_energy(state.velocity)
P_measured = quantity.pressure(energy_fn, state.position, box, KE)
log['P'] = log['P'].at[i].set(P_measured)
H = simulate.npt_nose_hoover_invariant(energy_fn, state, P(t), kT)
log['H'] = log['H'].at[i].set(H)
# Record positions every `write_every` steps.
pos = space.transform(box, state.position)
log['position'] = lax.cond(i % write_every == 0,
lambda p: p.at[i // write_every].set(pos),
lambda p: p,
log['position'])
# Take a simulation step.
state = apply(state, pressure=P(t))
return state, log
"""
Explanation: Now let's actually do the simulation. To do this we'll write a small function that performs a single step of the simulation. This function will keep track of the temperature, the pressure, the extended Hamiltonian of the Nose-Hoover dynamics, and the current particle positions.
End of explanation
"""
steps = 40000
log = {
'kT': np.zeros((steps,)),
'P': np.zeros((steps,)),
'H': np.zeros((steps,)),
'position': np.zeros((steps // write_every,) + R.shape)
}
state, log = lax.fori_loop(0, steps, step_fn, (state, log))
"""
Explanation: To run our simulation we'll use lax.fori_loop, which executes the whole simulation in a single call from Python.
End of explanation
"""
plt.subplot(1, 3, 1)
t = onp.arange(0, steps) * dt
plt.plot(t, log['kT'], linewidth=3)
plt.plot(t, kT * np.ones_like(t), linewidth=3)
plt.xlim([t[0], t[-1]])
format_plot('$t$', '$T$')
plt.subplot(1, 3, 2)
t = onp.arange(0, steps) * dt
plt.plot(t, log['P'], linewidth=3)
plt.plot(t, P(t), linewidth=3)
plt.xlim([t[0], t[-1]])
format_plot('$t$', '$P$')
plt.subplot(1, 3, 3)
t = onp.arange(0, steps) * dt
plt.plot(t, log['H'], linewidth=3)
plt.xlim([t[0], t[-1]])
format_plot('$t$', '$H$')
finalize_plot((2, 2 / 3))
"""
Explanation: Now, let's plot the temperature, pressure, and hamiltonian as a function of time. We see that the temperature and pressure track the target with some fluctuations. The Hamiltonian is exactly invariant apart from the point where the target pressure is changed discontinuously.
End of explanation
"""
ms = 65
R_plt = onp.array(log['position'][-1])
plt.plot(R_plt[:N_2, 0], R_plt[:N_2, 1], 'o', markersize=ms * 0.5)
plt.plot(R_plt[N_2:, 0], R_plt[N_2:, 1], 'o', markersize=ms * 0.7)
plt.xlim([0, np.max(R_plt[:, 0])])
plt.ylim([0, np.max(R_plt[:, 1])])
plt.axis('off')
finalize_plot((2, 2))
"""
Explanation: Now let's plot a snapshot of the system.
End of explanation
"""
from jax_md.colab_tools import renderer
diameters = sigma[species, species]
colors = np.where(species[:, None],
np.array([[1.0, 0.5, 0.01]]),
np.array([[0.35, 0.65, 0.85]]))
renderer.render(box,
{
'particles': renderer.Disk(log['position'],
diameters,
colors)
},
resolution=(700, 700))
"""
Explanation: If we want, we can also draw an animation of the simulation using JAX MD's renderer. We see that the system starts out fluctuating about an initial larger box. When the pressure instantaneously changes, the box compresses the system.
End of explanation
"""
N = 10000
dt = 5e-3
box = quantity.box_size_at_number_density(N, 0.8, 2) * np.eye(2)
displacement, shift = space.periodic_general(box)
kT = np.float32(0.01)
P = lambda t: np.where(t < 100.0, 0.05, 0.07)
"""
Explanation: Larger Simulation with Neighbor Lists
Warning: This section is a work in progress. We hope to make NPT + neighbor lists more ergonomic and safe in the future.
We can use neighbor lists to run a much larger version of this simulation. As their name suggests, neighbor lists are lists of particles nearby a central particle. By keeping track of neighbors, we can compute the energy of the system much more efficiently. This becomes increasingly true as the simulation gets larger. Unlike other simulation environments, extra care must be taken with NPT simulations when using cell lists to construct neighbor lists (which is the default behavior). This is because the cells must be defined in the unit cube. As the system's volume changes, the effective size of the cells will change. At some point, this may invalidate the cell list, either because of buffer overflows or because the cells become too small to cover the desired neighborhood size. While we have error checking for the former case, we do not yet have checks for the latter.
The code in this section therefore serves as an example of how neighbor lists + NPT might work. We expect to improve this section with time. If it is a priority for your work, please raise an issue.
As before, the first step here is to set up some simulation parameters. Unlike before, here we must be especially mindful of fluctuations in the box size. As such we will start out by creating the system and randomly initializing it. However, we will then minimize the system to its nearest minimum before starting the simulation.
End of explanation
"""
key, split = random.split(key)
R = random.uniform(split, (N, dimension), dtype=np.float64)
sigma = np.array([[1.0, 1.2], [1.2, 1.4]])
N_2 = int(N / 2)
species = np.where(np.arange(N) < N_2, 0, 1)
"""
Explanation: As before we randomly initialize the system.
End of explanation
"""
neighbor_fn, energy_fn = energy.soft_sphere_neighbor_list(
displacement, box, species=species, sigma=sigma, fractional_coordinates=True)
init, apply = minimize.fire_descent(energy_fn, shift)
nbrs = neighbor_fn.allocate(R, extra_capacity=5)
state = init(R, neighbor=nbrs)
def cond_fn(state_nbrs):
state, nbrs = state_nbrs
return np.any(np.abs(state.force) > 1e-3)
def step_fn(state_nbrs):
state, nbrs = state_nbrs
state = apply(state, neighbor=nbrs)
nbrs = nbrs.update(state.position)
return state, nbrs
state, nbrs = lax.while_loop(cond_fn, step_fn, (state, nbrs))
print(f'Did buffer overflow: {nbrs.did_buffer_overflow}')
print(f'Pressure: {quantity.pressure(energy_fn, state.position, box, neighbor=nbrs)}')
"""
Explanation: Then we need to construct our simulation operators. This time we use energy.soft_sphere_neighbor_list to create two functions: one that constructs lists of neighbors and one that computes the energy. Since we store the particle positions fractionally (in the unit cube), we must pass fractional_coordinates=True to the energy function.
End of explanation
"""
ms = 10
R_plt = onp.array(state.position)
plt.plot(R_plt[:N_2, 0], R_plt[:N_2, 1], 'o', markersize=ms * 0.5)
plt.plot(R_plt[N_2:, 0], R_plt[N_2:, 1], 'o', markersize=ms * 0.7)
plt.xlim([0, np.max(R[:, 0])])
plt.ylim([0, np.max(R[:, 1])])
plt.axis('off')
finalize_plot((2, 2))
"""
Explanation: Now we can plot the minimized configuration.
End of explanation
"""
init, apply = simulate.npt_nose_hoover(energy_fn, shift, dt, P(0.), kT)
nbrs = neighbor_fn.allocate(state.position)
state = init(key, state.position, box, neighbor=nbrs)
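# Defensive check (a sketch of the usual JAX MD pattern, added here as an example):
# if the neighbor list ever reports an overflow, it must be reallocated with more
# capacity and the affected work redone.
if nbrs.did_buffer_overflow:
  nbrs = neighbor_fn.allocate(state.position, extra_capacity=10)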
"""
Explanation: Now that we have a minimized configuration, we can do an NPT simulation. Since our cells have a fixed size, the neighbor list that we constructed will become invalid if the box is too small.
End of explanation
"""
write_every = 100
def step_fn(i, state_nbrs_log):
state, nbrs, log = state_nbrs_log
t = i * dt
# Log information about the simulation.
T = quantity.temperature(state.velocity)
log['kT'] = log['kT'].at[i].set(T)
box = simulate.npt_box(state)
KE = quantity.kinetic_energy(state.velocity)
P_measured = quantity.pressure(energy_fn, state.position, box, KE, neighbor=nbrs)
log['P'] = log['P'].at[i].set(P_measured)
H = simulate.npt_nose_hoover_invariant(energy_fn, state, P(t), kT, neighbor=nbrs)
log['H'] = log['H'].at[i].set(H)
# Record positions every `write_every` steps.
pos = space.transform(box, state.position)
log['position'] = lax.cond(i % write_every == 0,
lambda p: p.at[i // write_every].set(pos),
lambda p: p,
log['position'])
# Take a simulation step.
state = apply(state, neighbor=nbrs, pressure=P(t))
box = simulate.npt_box(state)
nbrs = nbrs.update(state.position, box=box)
return state, nbrs, log
"""
Explanation: Now let's actually do the simulation. This time our simulation step function will also update the neighbors. As above, we will also only record position data every hundred steps.
End of explanation
"""
steps = 40000
log = {
'P': np.zeros((steps,)),
'kT': np.zeros((steps,)),
'H': np.zeros((steps,)),
'position': np.zeros((steps // write_every,) + R.shape)
}
state, nbrs, log = lax.fori_loop(0, steps, step_fn, (state, nbrs, log))
print(nbrs.did_buffer_overflow)
R = state.position
"""
Explanation: To run our simulation we'll use lax.fori_loop, which executes the whole simulation in a single call from Python.
End of explanation
"""
plt.subplot(1, 3, 1)
t = onp.arange(0, steps) * dt
plt.plot(t, log['kT'], linewidth=3)
plt.plot(t, kT * np.ones_like(t), linewidth=3)
plt.xlim([t[0], t[-1]])
format_plot('$t$', '$T$')
plt.subplot(1, 3, 2)
t = onp.arange(0, steps) * dt
plt.plot(t, log['P'], linewidth=3)
plt.plot(t, P(t), linewidth=3)
plt.xlim([t[0], t[-1]])
format_plot('$t$', '$P$')
plt.subplot(1, 3, 3)
t = onp.arange(0, steps) * dt
plt.plot(t, log['H'], linewidth=3)
plt.xlim([t[0], t[-1]])
format_plot('$t$', '$H$')
finalize_plot((2, 2 / 3))
"""
Explanation: Now, let's plot the temperature, pressure, and Hamiltonian as functions of time. We see that the temperature tracks the target temperature with some fluctuations.
End of explanation
"""
ms = 10
R_plt = onp.array(state.position)
plt.plot(R_plt[:N_2, 0], R_plt[:N_2, 1], 'o', markersize=ms * 0.5)
plt.plot(R_plt[N_2:, 0], R_plt[N_2:, 1], 'o', markersize=ms * 0.7)
plt.xlim([0, np.max(R[:, 0])])
plt.ylim([0, np.max(R[:, 1])])
plt.axis('off')
finalize_plot((2, 2))
"""
Explanation: Now let's plot a snapshot of the system.
End of explanation
"""
from jax_md.colab_tools import renderer
diameters = sigma[species, species]
colors = np.where(species[:, None],
np.array([[1.0, 0.5, 0.01]]),
np.array([[0.35, 0.65, 0.85]]))
renderer.render(box[0, 0],
{
'particles': renderer.Disk(log['position'],
diameters,
colors)
},
buffer_size=20,
resolution=(700, 700))
"""
Explanation: If we want, we can also draw an animation of the simulation using JAX MD's renderer.
End of explanation
"""
V_flat = onp.reshape(onp.array(state.velocity), (-1,))
occ, bins = onp.histogram(V_flat, bins=100, density=True)
T_cur = kT
plt.semilogy(bins[:-1], occ, 'o')
plt.semilogy(
bins[:-1],
1.0 / np.sqrt(2 * np.pi * T_cur) * onp.exp(-1/(2 * T_cur) * bins[:-1] ** 2),
linewidth=3)
format_plot('t', 'T')
finalize_plot()
"""
Explanation: Finally, let's plot the velocity distribution compared with its theoretical prediction.
End of explanation
"""
|
NazBen/impact-of-dependence
|
examples/archive/grid-search-Copy1.ipynb
|
mit
|
import openturns as ot
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
%matplotlib inline
%load_ext autoreload
%autoreload 2
random_state = 123
np.random.seed(random_state)
"""
Explanation: Conservative Estimation using a Grid Search Minimization
This notebook illustrates the different steps for a conservative estimation using a grid search minimization.
Classic Libraries
End of explanation
"""
from depimpact.tests import func_sum
help(func_sum)
"""
Explanation: Additive model
The first example of conservative estimation considers an additive model $\eta : \mathbb R^d \rightarrow \mathbb R$ with Gaussian margins. The objective is to estimate a quantity of interest $\mathcal C(Y)$ of the model output distribution. Unfortunately, the dependence structure is unknown. In order to be conservative, we aim to give bounds on $\mathcal C(Y)$.
The model
This example considers a simple additive function.
End of explanation
"""
dim = 2
margins = [ot.Normal()]*dim
"""
Explanation: Dimension 2
We consider the problem in dimension $d=2$ and a number of pairs $p=1$ for gaussian margins.
End of explanation
"""
families = np.zeros((dim, dim), dtype=int)
families[1, 0] = 1
"""
Explanation: Copula families
We consider a gaussian copula for this first example
End of explanation
"""
from depimpact import ConservativeEstimate
quant_estimate = ConservativeEstimate(model_func=func_sum, margins=margins, families=families)
"""
Explanation: Estimations
We create an instance of the main class for a conservative estimate.
End of explanation
"""
n = 1000
indep_result = quant_estimate.independence(n_input_sample=n, random_state=random_state)
"""
Explanation: First, we compute the quantile at independence
End of explanation
"""
from dependence import quantile_func
alpha = 0.05
q_func = quantile_func(alpha)
indep_result.q_func = q_func
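# Illustration only (an assumption about what q_func computes, not the package
# internals): the 5% empirical quantile of a sample, here computed with numpy.
example_sample = np.random.randn(1000)
print(np.percentile(example_sample, alpha * 100))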
"""
Explanation: We aim to minimize the output quantile. To do that, we create a q_func object from the function quantile_func to associate a probability $\alpha$ to a function that computes the empirical quantile from a given sample.
End of explanation
"""
sns.jointplot(indep_result.input_sample[:, 0], indep_result.input_sample[:, 1]);
h = sns.distplot(indep_result.output_sample_id, axlabel='Model output', label="Output Distribution")
plt.plot([indep_result.quantity]*2, h.get_ylim(), label='Quantile at %d%%' % (alpha*100))
plt.legend(loc=0)
print('Output quantile :', indep_result.quantity)
"""
Explanation: The computation returns a DependenceResult instance. This object gathers the information about the computation. It also computes the output quantity of interest (which can also be changed).
End of explanation
"""
indep_result.compute_bootstrap(n_bootstrap=5000)
"""
Explanation: A bootstrap can be done on the output quantity
End of explanation
"""
sns.distplot(indep_result.bootstrap_sample, axlabel='Output quantile');
ci = [0.025, 0.975]
quantity_ci = indep_result.compute_quantity_bootstrap_ci(ci)
h = sns.distplot(indep_result.output_sample_id, axlabel='Model output', label="Output Distribution")
plt.plot([indep_result.quantity]*2, h.get_ylim(), 'g-', label='Quantile at %d%%' % (alpha*100))
plt.plot([quantity_ci[0]]*2, h.get_ylim(), 'g--', label='%d%% confidence intervals' % ((1. - (ci[0] + 1. - ci[1]))*100))
plt.plot([quantity_ci[1]]*2, h.get_ylim(), 'g--')
plt.legend(loc=0)
print('Quantile at independence: %.2f with a C.O.V at %.1f %%' % (indep_result.boot_mean, indep_result.boot_cov))
"""
Explanation: And we can plot it
End of explanation
"""
%%snakeviz
K = 500
n = 10000
grid_type = 'lhs'
dep_measure = 'parameter'
grid_result = quant_estimate.gridsearch(n_dep_param=K, n_input_sample=n, grid_type=grid_type, dep_measure=dep_measure,
random_state=random_state)
"""
Explanation: Grid Search Approach
Firstly, we consider a grid search approach in order to compare its performance with the iterative algorithm. The discretization can be made on the parameter space or on another concordance measure such as the Kendall's tau. The example below shows a grid search on the parameter space.
End of explanation
"""
print('The computation did %d model evaluations.' % (grid_result.n_evals))
"""
Explanation: The computation returns a ListDependenceResult which is a list of DependenceResult instances and some bonuses.
End of explanation
"""
grid_result.q_func = q_func
min_result = grid_result.min_result
print('Minimum quantile: {} at param: {}'.format(min_result.quantity, min_result.dep_param))
"""
Explanation: Let's set the quantity function and search for the minimum among the grid results.
End of explanation
"""
plt.plot(grid_result.dep_params, grid_result.quantities, '.', label='Quantiles')
plt.plot(min_result.dep_param[0], min_result.quantity, 'ro', label='minimum')
plt.xlabel('Dependence parameter')
plt.ylabel('Quantile value')
plt.legend(loc=0);
"""
Explanation: We can plot the result in grid results. The below figure shows the output quantiles in function of the dependence parameters.
End of explanation
"""
grid_result.compute_bootstraps(n_bootstrap=5000)
boot_min_quantiles = grid_result.bootstrap_samples.min(axis=0)
boot_argmin_quantiles = grid_result.bootstrap_samples.argmin(axis=0).ravel().tolist()
boot_min_params = [grid_result.dep_params[idx][0] for idx in boot_argmin_quantiles]
fig, axes = plt.subplots(1, 2, figsize=(14, 5))
sns.distplot(boot_min_quantiles, axlabel="Minimum quantiles", ax=axes[0])
sns.distplot(boot_min_params, axlabel="Parameters of the minimum", ax=axes[1])
"""
Explanation: As for the individual problem, we can also do a bootstrap, for each parameter. Because we have $K$ parameters, we can do a bootstrap for the $K$ samples, compute the $K$ quantiles for each bootstrap replicate, and take the minimum quantile of each replicate.
End of explanation
"""
# The parameter with most occurence
boot_id_min = max(set(boot_argmin_quantiles), key=boot_argmin_quantiles.count)
boot_min_result = grid_result[boot_id_min]
boot_mean = boot_min_result.bootstrap_sample.mean()
boot_std = boot_min_result.bootstrap_sample.std()
print('Worst Quantile: {} at {} with a C.O.V of {} %'.format(boot_min_result.boot_mean, min_result.dep_param, boot_min_result.boot_cov*100.))
"""
Explanation: For the parameter with the most occurrences as the minimum, we compute its bootstrap mean.
End of explanation
"""
plt.plot(grid_result.kendalls, grid_result.quantities, '.', label='Quantiles')
plt.plot(min_result.kendall_tau, min_result.quantity, 'ro', label='Minimum quantile')
plt.xlabel("Kendall's tau")
plt.ylabel('Quantile')
plt.legend(loc=0);
"""
Explanation: Kendall's Tau
An interesting feature is to convert the dependence parameters to Kendall's Tau values.
End of explanation
"""
bounds_tau = np.asarray([[0., 0.7], [0.1, 0.]])
quant_estimate.bounds_tau = bounds_tau
K = 20
n = 10000
grid_type = 'lhs'
grid_result = quant_estimate.gridsearch(n_dep_param=K, n_input_sample=n, grid_type=grid_type, random_state=random_state)
grid_result.q_func = q_func
min_result = grid_result.min_result
print('Minimum quantile: {} at param: {}'.format(min_result.quantity, min_result.dep_param))
plt.plot(grid_result.dep_params, grid_result.quantities, '.', label='Quantiles')
plt.plot(min_result.dep_param[0], min_result.quantity, 'ro', label='minimum')
plt.xlabel('Dependence parameter')
plt.ylabel('Quantile value')
plt.legend(loc=0);
"""
Explanation: As we can see, the bounds
With bounds on the dependencies
An interesting option in the ConservativeEstimate class is to bound the dependencies, based on some prior information.
End of explanation
"""
filename = './result.hdf'
grid_result.to_hdf(filename)
from dependence import ListDependenceResult
load_grid_result = ListDependenceResult.from_hdf(filename, q_func=q_func, with_input_sample=False)
np.testing.assert_array_equal(grid_result.output_samples, load_grid_result.output_samples)
import os
os.remove(filename)
"""
Explanation: Saving the results
It is useful to save the result in a file so it can be loaded later to compute other quantities or anything else you need!
End of explanation
"""
K = None
n = 1000
grid_type = 'vertices'
grid_result = quant_estimate.gridsearch(n_dep_param=K, n_input_sample=n, grid_type=grid_type, random_state=random_state)
grid_result.q_func = q_func
print("Kendall's Tau : {}, Quantile: {}".format(grid_result.kendalls.ravel(), grid_result.quantities))
from depimpact.plots import matrix_plot_input
matrix_plot_input(grid_result.min_result);
"""
Explanation: Taking the extreme values of the dependence parameter
If the output quantity of interest seems to be monotonic with respect to the dependence parameter, it is better to directly take the bounds of the dependence problem. Obviously, the minimum should then be at the edges of the design space.
End of explanation
"""
dim = 5
quant_estimate.margins = [ot.Normal()]*dim
"""
Explanation: Higher Dimension
We consider the problem in dimension $d=5$.
End of explanation
"""
families = np.zeros((dim, dim), dtype=int)
families[2, 0] = 1
quant_estimate.families = families
families
quant_estimate.bounds_tau = None
quant_estimate.bounds_tau
"""
Explanation: Copula families with one dependent pair
We consider a gaussian copula for this first example, but for the moment only one pair is dependent.
End of explanation
"""
quant_estimate.vine_structure
"""
Explanation: We reset the families and bounds for the current instance. (I don't want to create a new instance, just to check if the setters are good).
End of explanation
"""
K = 20
n = 10000
grid_type = 'vertices'
grid_result = quant_estimate.gridsearch(n_dep_param=K, n_input_sample=n, grid_type=grid_type, random_state=random_state)
"""
Explanation: Let's do the grid search and look at the results.
End of explanation
"""
grid_result.q_func = q_func
min_result = grid_result.min_result
print('Worst Quantile: {} at {}'.format(min_result.quantity, min_result.dep_param))
matrix_plot_input(min_result)
plt.plot(grid_result.dep_params, grid_result.quantities, '.', label='Quantiles')
plt.plot(min_result.dep_param[0], min_result.quantity, 'ro', label='Minimum')
plt.xlabel('Dependence parameter')
plt.ylabel('Quantile value')
plt.legend(loc=0);
"""
Explanation: The quantile is lower compared to the lower-dimensional problem. Indeed, there are more variables and more uncertainty, so a larger deviation of the output.
End of explanation
"""
families = np.zeros((dim, dim), dtype=int)
for i in range(1, dim):
for j in range(i):
families[i, j] = 1
quant_estimate.margins = margins
quant_estimate.families = families
quant_estimate.vine_structure = None
quant_estimate.bounds_tau = None
quant_estimate.bounds_tau
K = 100
n = 1000
grid_type = 'lhs'
grid_result = quant_estimate.gridsearch(n_dep_param=K, n_input_sample=n, grid_type=grid_type, random_state=random_state)
min_result = grid_result.min_result
print('Worst Quantile: {0} at {1}'.format(min_result.quantity, min_result.dep_param))
"""
Explanation: Copula families with all dependent pairs
We now consider Gaussian copulas for all the pairs, so that every pair is dependent.
End of explanation
"""
families[3, 2] = 0
quant_estimate = ConservativeEstimate(model_func=func_sum, margins=margins, families=families)
K = 100
n = 10000
grid_type = 'lhs'
grid_result = quant_estimate.gridsearch(n_dep_param=K, n_input_sample=n, grid_type=grid_type,
q_func=q_func, random_state=random_state)
min_result = grid_result.min_result
print('Worst Quantile: {0} at {1}'.format(min_result.quantity, min_result.dep_param))
grid_result.vine_structure
from depimpact.plots import matrix_plot_input
matrix_plot_input(min_result)
"""
Explanation: With one fixed pair
End of explanation
"""
K = 100
n = 1000
grid_type = 'lhs'
grid_result_1 = quant_estimate.gridsearch(n_dep_param=K, n_input_sample=n, grid_type=grid_type, save_grid=True, grid_path='./output')
grid_result_2 = quant_estimate.gridsearch(n_dep_param=K, n_input_sample=n, grid_type=grid_type,
q_func=q_func, use_grid=0, grid_path='./output')
"""
Explanation: Save the used grid and load it again
End of explanation
"""
grid_result_1.n_input_sample, grid_result_2.n_input_sample
grid_result = grid_result_1 + grid_result_2
"""
Explanation: Then gather the results from the same grid with the same configurations
End of explanation
"""
grid_result.n_input_sample
"""
Explanation: Because the configurations are the same, we can gather the results from two different runs
End of explanation
"""
|
nerdcommander/scientific_computing_2017
|
lesson18/Lesson18_team.ipynb
|
mit
|
import time
time.time()
"""
Explanation: Unit 3: Simulation
Lesson 18: Non-uniform distributions
Notebook Authors
(fill in your two names here)
Facilitator: (fill in name)
Spokesperson: (fill in name)
Process Analyst: (fill in name)
Quality Control: (fill in name)
If there are only three people in your group, have one person serve as both spokesperson and process analyst for the rest of this activity.
At the end of this Lesson, you will be asked to record how long each Model required for your team. The Facilitator should keep track of time for your team.
Computational Focus: Non-uniform distributions
As we saw in the previous lesson, a uniform random number distribution can be easily generated and shifted appropriately using Python. Some applications, however, may require a series of random numbers that are not distributed uniformly. For example, the probability for a system to be at particular energy level as a function of temperature is given by an exponential distribution, the velocities of an ideal gas at a particular temperature follow a Gaussian (normal) distribution, and many environmental, behavioral, and genetic data sets are modeled using these and other non-uniform distributions. This lesson will use two different procedures for creating any desired distribution of random numbers using a uniform random-number generator.
Model 1: Random seed
Random number functions are really an equation that takes a number input (referred to as the seed) and produces a new number as output. When you do not specify the seed (as we have not so far), Python uses the current system time to set the seed.
Let's look at this briefly. Run the code below.
End of explanation
"""
## gets the time, still not very human readable
time.localtime()
## formats the time nicely
time.asctime(time.localtime())
"""
Explanation: 1. Describe the results.
Now run time.time() again below.
2. Describe the results and compare them to the first time.time() call.
Read the info on the time module here: http://www.tutorialspoint.com/python3/python_date_time.htm
3. Explain the output of the time.time() function calls (what is that number, how and why are the results of the two calls different, etc.).
Just in case you ever want to know what time it is, computers can give you a more human readable format (and if you're ever really interested, Python also has the datetime library that has a lot of super useful tools).
Run the code below.
End of explanation
"""
## series of random numbers doesn't repeat
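## One possible demonstration (an illustration, not the official answer): fixing
## the seed makes the sequence repeat, while the default time-based seed does not.
import random
random.seed(42)
first_run = [random.random() for _ in range(3)]
random.seed(42)
second_run = [random.random() for _ in range(3)]
print(first_run == second_run)   # True: the same seed gives the same sequence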
"""
Explanation: But we digress. Back to random numbers...
The default seed for the first Python random number generator is actually set by the computer (based on the computer time). Then with each subsequent call for a new random number, the previous random number produced is used as the seed for the next number in the sequence.
4. If the next random number is generated using the one before it, why isn't the series the same every time? (answer the question, then write and run some code to demonstrate what you mean)
End of explanation
"""
|
boffi/boffi.github.io
|
dati_2015/ha03/02_Isolation.ipynb
|
mit
|
from math import atan2, cos, exp, pi, sin, sqrt, tan
def plvu(label, value, units=""):
print("%40s: %10g %s"%(label, value, units))
"""
Explanation: Vibration Isolation
Preliminaries
We have to import the mathematical functions that will be used in the following. Also, we want to define a helper function to properly format our results.
End of explanation
"""
m0 = 7100.0 # kg
p0 = 2400.0 # N
f0 = 20.0 # Hz
pr = 300.0 # N
z = 8./100 # Damping ratio
plvu("Mass of the machine", m0, "kg")
plvu("Unbalanced force", p0, "N")
plvu("Frequency of unbalanced force", f0, "Hz")
plvu("Max value of transmitted s-s force", pr, "N")
plvu("Damping ratio in second part of problem", z*100, "%")
"""
Explanation: Problem's data
The data of our problem is as follows $\def\T{\text{TR}}$
End of explanation
"""
TR = pr/p0
plvu("The required transmissibility ratio", TR)
w0 = 2*pi*f0
W0 = w0**2
plvu("Excitation circular frequency, squared", W0,"(rad/s)^2.")
"""
Explanation: Undamped system
The requested transmissibility ratio is $\T=p_{ss}/p_0$, while the transmissibility ratio of an undamped system is $1/(\beta^2-1)$, where $\beta$ is the frequency ratio, $\beta=\omega_0/\omega_n$, with $\omega_0$ and $\omega_n$ being respectively the excitation frequency and the natural frequency of the system.
At first we compute the requested transmissibility ratio and the circular frequency of the excitation, as the problem gave it in terms of Hertz.
End of explanation
"""
Wn = W0*TR/(1+TR)
plvu("System circular frequency, squared", Wn, "(rad/s)^2")
"""
Explanation: Because the system transmissibility must be no greater than the requested one we have:
\begin{align}
\T &\ge \frac{1}{\beta^2-1} = \frac{\omega_n^2}{\omega_0^2-\omega_n^2} \Rightarrow\\
\omega_n^2 &\le (\omega_0^2-\omega_n^2)\,\T\Rightarrow\\
\omega_n^2 &\le \frac{\T}{1+\T}\,\omega_0^2.
\end{align}
End of explanation
"""
k0 = m0*Wn
plvu("Maximum stiffness of support", k0/1E6, "kN/mm")
"""
Explanation: The natural frequency is related to the mass: $\omega_n^2=k/m$. Substituting
in the previous inequality and solving for $k$
$$k \le \frac{\T}{1+\T}\,m\,\omega_0^2=k_\text{max}$$
we obtain $k_\text{max}$, the maximum stiffness for which the transmissibility ratio is not greater than the requested one.
End of explanation
"""
plvu("Minimum static displacement", m0*9.81/k0*1000, "mm")
"""
Explanation: While we are at it, we compute also the minimum static displacement of the system,
$$ \Delta_\text{st} \ge \frac{m\,g}{k_\text{max}}.$$
End of explanation
"""
b = (TR**2-1)/TR**2
a = 2*z*z*b-1
plvu("Coefficient of linear term", a)
plvu("Constant coefficient", b)
"""
Explanation: Damped system
The transmissibility ratio is
$$\T=\sqrt\frac{1+(2\zeta\beta)^2}{(1-\beta^2)^2+(2\zeta\beta)^2}.$$
In the second part of our problem, $\zeta=0.08$ is assigned and $\T$ is known, so the above expression is a function of $\beta$, the frequency ratio. We need to know for which value of $\beta$ the above equation holds.
Squaring both members, taking all factors on the same side of the equal sign, expanding, collecting $\beta^2$ and eventually dividing by $\T^2$ we have
$${\beta^2}^2 + 2\left(
2\zeta^2\frac{\T^2-1}{\T^2}-1
\right)\beta^2 + \frac{\T^2-1}{\T^2} = 0,$$
an algebraic equation of second order in $\beta^2$.
With $a=2\zeta^2\frac{\T^2-1}{\T^2}-1$ and $b=\frac{\T^2-1}{\T^2}$
End of explanation
"""
b2 = sqrt(a*a-b)-a
plvu("Minimum value of squared freq.ratio, damped system", b2)
"""
Explanation: the positive root is given by
$$\beta^2 = \sqrt{a^2-b}-a.$$
End of explanation
"""
Wnmx = W0/b2
plvu("Max admissible value of squared natural frequency", Wnmx, "(rad/s)^2")
"""
Explanation: We have already installed the elastic support, as specified from the undamped design, we add a damper and the requested minimum frequency ratio increases... How we can increase the frequency ratio?
Decreasing the natural frequency of the system, but the springs are already in place and the only thing that we can do is to change the mass of the system, in particular we decrease the frequency by increasing the mass and this is OK, because we can always add ballast to our system while, on the other hand, it would be, in general, impossible to reduce the system mass...
The max value of the natural frequency is given by
$$\beta^2=\omega^2_0/\omega^2_n\Rightarrow \omega_n^2=\omega^2_0/\beta^2$$
End of explanation
"""
mmn = k0/Wnmx
plvu("Minimum mass of the damped system", mmn, "kg")
plvu("Minimum mass of ballast", mmn-m0, "kg")
plvu("Mass increment", (mmn-m0)*100/m0, "%")
plvu("Minimum static displacement, d.s.", 9.81*mmn/k0*1000, "mm")
# The following incantation gives a proper style to this notebook
from IPython.display import display, Latex, HTML
HTML(open("00_custom.css", "r").read())
"""
Explanation: Eventually, the minimum mass of the damped system is
$$m=k/\omega_n^2.$$
We compute also the minimum mass of the ballast, the percent mass increment and the new static displacement.
End of explanation
"""
|
tensorflow/docs-l10n
|
site/zh-cn/lattice/tutorials/premade_models.ipynb
|
apache-2.0
|
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Explanation: Copyright 2020 The TensorFlow Authors.
End of explanation
"""
#@test {"skip": true}
!pip install tensorflow-lattice pydot
"""
Explanation: TF Lattice Premade Models
<table class="tfo-notebook-buttons" align="left">
<td><a target="_blank" href="https://tensorflow.google.cn/lattice/tutorials/premade_models"><img src="https://tensorflow.google.cn/images/tf_logo_32px.png">View on TensorFlow.org</a></td>
<td><a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/zh-cn/lattice/tutorials/premade_models.ipynb"><img src="https://tensorflow.google.cn/images/colab_logo_32px.png">Run in Google Colab</a></td>
<td><a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/zh-cn/lattice/tutorials/premade_models.ipynb"><img src="https://tensorflow.google.cn/images/GitHub-Mark-32px.png">View source on GitHub</a></td>
<td><a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/zh-cn/lattice/tutorials/premade_models.ipynb"><img src="https://tensorflow.google.cn/images/download_logo_32px.png">Download notebook</a></td>
</table>
Overview
Premade models are quick and easy ways to build TFL tf.keras.model instances for typical use cases. This guide outlines the steps needed to construct a TFL premade model and train/test it.
Setup
Install the TF Lattice package:
End of explanation
"""
import tensorflow as tf
import copy
import logging
import numpy as np
import pandas as pd
import sys
import tensorflow_lattice as tfl
logging.disable(sys.maxsize)
"""
Explanation: Import the required packages:
End of explanation
"""
csv_file = tf.keras.utils.get_file(
'heart.csv', 'http://storage.googleapis.com/download.tensorflow.org/data/heart.csv')
df = pd.read_csv(csv_file)
train_size = int(len(df) * 0.8)
train_dataframe = df[:train_size]
test_dataframe = df[train_size:]
df.head()
"""
Explanation: Download the UCI Statlog (Heart) dataset:
End of explanation
"""
# Features:
# - age
# - sex
# - cp chest pain type (4 values)
# - trestbps resting blood pressure
# - chol serum cholestoral in mg/dl
# - fbs fasting blood sugar > 120 mg/dl
# - restecg resting electrocardiographic results (values 0,1,2)
# - thalach maximum heart rate achieved
# - exang exercise induced angina
# - oldpeak ST depression induced by exercise relative to rest
# - slope the slope of the peak exercise ST segment
# - ca number of major vessels (0-3) colored by flourosopy
# - thal 3 = normal; 6 = fixed defect; 7 = reversable defect
#
# This ordering of feature names will be the exact same order that we construct
# our model to expect.
feature_names = [
'age', 'sex', 'cp', 'chol', 'fbs', 'trestbps', 'thalach', 'restecg',
'exang', 'oldpeak', 'slope', 'ca', 'thal'
]
feature_name_indices = {name: index for index, name in enumerate(feature_names)}
# This is the vocab list and mapping we will use for the 'thal' categorical
# feature.
thal_vocab_list = ['normal', 'fixed', 'reversible']
thal_map = {category: i for i, category in enumerate(thal_vocab_list)}
# Custom function for converting thal categories to buckets
def convert_thal_features(thal_features):
# Note that two examples in the test set are already converted.
return np.array([
thal_map[feature] if feature in thal_vocab_list else feature
for feature in thal_features
])
# Custom function for extracting each feature.
def extract_features(dataframe,
label_name='target',
feature_names=feature_names):
features = []
for feature_name in feature_names:
if feature_name == 'thal':
features.append(
convert_thal_features(dataframe[feature_name].values).astype(float))
else:
features.append(dataframe[feature_name].values.astype(float))
labels = dataframe[label_name].values.astype(float)
return features, labels
train_xs, train_ys = extract_features(train_dataframe)
test_xs, test_ys = extract_features(test_dataframe)
# Let's define our label minimum and maximum.
min_label, max_label = float(np.min(train_ys)), float(np.max(train_ys))
# Our lattice models may have predictions above 1.0 due to numerical errors.
# We can subtract this small epsilon value from our output_max to make sure we
# do not predict values outside of our label bound.
numerical_error_epsilon = 1e-5
"""
Explanation: Extract the features and labels and convert them to tensors:
End of explanation
"""
LEARNING_RATE = 0.01
BATCH_SIZE = 128
NUM_EPOCHS = 500
PREFITTING_NUM_EPOCHS = 10
"""
Explanation: Set the default values used for training in this guide:
End of explanation
"""
def compute_quantiles(features,
num_keypoints=10,
clip_min=None,
clip_max=None,
missing_value=None):
# Clip min and max if desired.
if clip_min is not None:
features = np.maximum(features, clip_min)
features = np.append(features, clip_min)
if clip_max is not None:
features = np.minimum(features, clip_max)
features = np.append(features, clip_max)
# Make features unique.
unique_features = np.unique(features)
# Remove missing values if specified.
if missing_value is not None:
unique_features = np.delete(unique_features,
np.where(unique_features == missing_value))
# Compute and return quantiles over unique non-missing feature values.
return np.quantile(
unique_features,
np.linspace(0., 1., num=num_keypoints),
interpolation='nearest').astype(float)
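# Quick check (illustrative, not part of the original guide): the helper returns
# `num_keypoints` values spanning the empirical distribution of a feature.
print(compute_quantiles(train_xs[feature_name_indices['age']], num_keypoints=5))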
"""
Explanation: Feature Configs
Feature calibration and per-feature configurations are set using tfl.configs.FeatureConfig. Feature configurations include monotonicity constraints, per-feature regularization (see tfl.configs.RegularizerConfig), and lattice sizes for lattice models.
Note that we must fully specify the feature config for any feature that we want our model to recognize. Otherwise the model will have no way of knowing that such a feature exists.
Compute Quantiles
Although the default setting for pwl_calibration_input_keypoints in tfl.configs.FeatureConfig is 'quantiles', for premade models we have to manually define the input keypoints. To do so, we first define our own helper function for computing quantiles.
End of explanation
"""
# Feature configs are used to specify how each feature is calibrated and used.
feature_configs = [
tfl.configs.FeatureConfig(
name='age',
lattice_size=3,
monotonicity='increasing',
# We must set the keypoints manually.
pwl_calibration_num_keypoints=5,
pwl_calibration_input_keypoints=compute_quantiles(
train_xs[feature_name_indices['age']],
num_keypoints=5,
clip_max=100),
# Per feature regularization.
regularizer_configs=[
tfl.configs.RegularizerConfig(name='calib_wrinkle', l2=0.1),
],
),
tfl.configs.FeatureConfig(
name='sex',
num_buckets=2,
),
tfl.configs.FeatureConfig(
name='cp',
monotonicity='increasing',
# Keypoints that are uniformly spaced.
pwl_calibration_num_keypoints=4,
pwl_calibration_input_keypoints=np.linspace(
np.min(train_xs[feature_name_indices['cp']]),
np.max(train_xs[feature_name_indices['cp']]),
num=4),
),
tfl.configs.FeatureConfig(
name='chol',
monotonicity='increasing',
# Explicit input keypoints initialization.
pwl_calibration_input_keypoints=[126.0, 210.0, 247.0, 286.0, 564.0],
# Calibration can be forced to span the full output range by clamping.
pwl_calibration_clamp_min=True,
pwl_calibration_clamp_max=True,
# Per feature regularization.
regularizer_configs=[
tfl.configs.RegularizerConfig(name='calib_hessian', l2=1e-4),
],
),
tfl.configs.FeatureConfig(
name='fbs',
# Partial monotonicity: output(0) <= output(1)
monotonicity=[(0, 1)],
num_buckets=2,
),
tfl.configs.FeatureConfig(
name='trestbps',
monotonicity='decreasing',
pwl_calibration_num_keypoints=5,
pwl_calibration_input_keypoints=compute_quantiles(
train_xs[feature_name_indices['trestbps']], num_keypoints=5),
),
tfl.configs.FeatureConfig(
name='thalach',
monotonicity='decreasing',
pwl_calibration_num_keypoints=5,
pwl_calibration_input_keypoints=compute_quantiles(
train_xs[feature_name_indices['thalach']], num_keypoints=5),
),
tfl.configs.FeatureConfig(
name='restecg',
# Partial monotonicity: output(0) <= output(1), output(0) <= output(2)
monotonicity=[(0, 1), (0, 2)],
num_buckets=3,
),
tfl.configs.FeatureConfig(
name='exang',
# Partial monotonicity: output(0) <= output(1)
monotonicity=[(0, 1)],
num_buckets=2,
),
tfl.configs.FeatureConfig(
name='oldpeak',
monotonicity='increasing',
pwl_calibration_num_keypoints=5,
pwl_calibration_input_keypoints=compute_quantiles(
train_xs[feature_name_indices['oldpeak']], num_keypoints=5),
),
tfl.configs.FeatureConfig(
name='slope',
# Partial monotonicity: output(0) <= output(1), output(1) <= output(2)
monotonicity=[(0, 1), (1, 2)],
num_buckets=3,
),
tfl.configs.FeatureConfig(
name='ca',
monotonicity='increasing',
pwl_calibration_num_keypoints=4,
pwl_calibration_input_keypoints=compute_quantiles(
train_xs[feature_name_indices['ca']], num_keypoints=4),
),
tfl.configs.FeatureConfig(
name='thal',
# Partial monotonicity:
# output(normal) <= output(fixed)
# output(normal) <= output(reversible)
monotonicity=[('normal', 'fixed'), ('normal', 'reversible')],
num_buckets=3,
# We must specify the vocabulary list in order to later set the
# monotonicities since we used names and not indices.
vocabulary_list=thal_vocab_list,
),
]
"""
Explanation: Defining Our Feature Configs
Now that we can compute our quantiles, we define a feature config for each feature that we want our model to take as input.
End of explanation
"""
tfl.premade_lib.set_categorical_monotonicities(feature_configs)
"""
Explanation: Next we need to make sure to properly set the monotonicities for features that use a custom vocabulary (such as 'thal' above).
End of explanation
"""
# Model config defines the model structure for the premade model.
linear_model_config = tfl.configs.CalibratedLinearConfig(
feature_configs=feature_configs[:5],
use_bias=True,
# We must set the output min and max to that of the label.
output_min=min_label,
output_max=max_label,
output_calibration=True,
output_calibration_num_keypoints=10,
output_initialization=np.linspace(min_label, max_label, num=10),
regularizer_configs=[
# Regularizer for the output calibrator.
tfl.configs.RegularizerConfig(name='output_calib_hessian', l2=1e-4),
])
# A CalibratedLinear premade model constructed from the given model config.
linear_model = tfl.premade.CalibratedLinear(linear_model_config)
# Let's plot our model.
tf.keras.utils.plot_model(linear_model, show_layer_names=False, rankdir='LR')
"""
Explanation: Calibrated Linear Model
To construct a TFL premade model, first build a model configuration from tfl.configs. A calibrated linear model is constructed with tfl.configs.CalibratedLinearConfig; it applies piecewise-linear and categorical calibration to the input features, followed by a linear combination and an optional output piecewise-linear calibration. When output calibration is used or output bounds are specified, the linear layer applies a weighted average to the calibrated inputs.
The example below creates a calibrated linear model on the first 5 features.
End of explanation
"""
linear_model.compile(
loss=tf.keras.losses.BinaryCrossentropy(),
metrics=[tf.keras.metrics.AUC()],
optimizer=tf.keras.optimizers.Adam(LEARNING_RATE))
linear_model.fit(
train_xs[:5],
train_ys,
epochs=NUM_EPOCHS,
batch_size=BATCH_SIZE,
verbose=False)
"""
Explanation: Now, as with any other tf.keras.Model, we compile the model and fit it to our data.
End of explanation
"""
print('Test Set Evaluation...')
print(linear_model.evaluate(test_xs[:5], test_ys))
"""
Explanation: After training the model, we can evaluate it on our test set.
End of explanation
"""
# This is a calibrated lattice model: inputs are calibrated, then combined
# non-linearly using a lattice layer.
lattice_model_config = tfl.configs.CalibratedLatticeConfig(
feature_configs=feature_configs[:5],
output_min=min_label,
output_max=max_label - numerical_error_epsilon,
output_initialization=[min_label, max_label],
regularizer_configs=[
# Torsion regularizer applied to the lattice to make it more linear.
tfl.configs.RegularizerConfig(name='torsion', l2=1e-2),
# Globally defined calibration regularizer is applied to all features.
tfl.configs.RegularizerConfig(name='calib_hessian', l2=1e-2),
])
# A CalibratedLattice premade model constructed from the given model config.
lattice_model = tfl.premade.CalibratedLattice(lattice_model_config)
# Let's plot our model.
tf.keras.utils.plot_model(lattice_model, show_layer_names=False, rankdir='LR')
"""
Explanation: Calibrated Lattice Model
A calibrated lattice model is constructed with tfl.configs.CalibratedLatticeConfig; it applies piecewise-linear and categorical calibration to the input features, followed by a lattice model and an optional output piecewise-linear calibration.
The example below creates a calibrated lattice model on the first 5 features.
End of explanation
"""
lattice_model.compile(
loss=tf.keras.losses.BinaryCrossentropy(),
metrics=[tf.keras.metrics.AUC()],
optimizer=tf.keras.optimizers.Adam(LEARNING_RATE))
lattice_model.fit(
train_xs[:5],
train_ys,
epochs=NUM_EPOCHS,
batch_size=BATCH_SIZE,
verbose=False)
print('Test Set Evaluation...')
print(lattice_model.evaluate(test_xs[:5], test_ys))
"""
Explanation: As before, we compile, fit, and evaluate our model.
End of explanation
"""
# This is a calibrated lattice ensemble model: inputs are calibrated, then
# combined non-linearly and averaged using multiple lattice layers.
explicit_ensemble_model_config = tfl.configs.CalibratedLatticeEnsembleConfig(
feature_configs=feature_configs,
lattices=[['trestbps', 'chol', 'ca'], ['fbs', 'restecg', 'thal'],
['fbs', 'cp', 'oldpeak'], ['exang', 'slope', 'thalach'],
['restecg', 'age', 'sex']],
num_lattices=5,
lattice_rank=3,
output_min=min_label,
output_max=max_label - numerical_error_epsilon,
output_initialization=[min_label, max_label])
# A CalibratedLatticeEnsemble premade model constructed from the given
# model config.
explicit_ensemble_model = tfl.premade.CalibratedLatticeEnsemble(
explicit_ensemble_model_config)
# Let's plot our model.
tf.keras.utils.plot_model(
explicit_ensemble_model, show_layer_names=False, rankdir='LR')
"""
Explanation: Calibrated Lattice Ensemble Model
When the number of features is large, you can use an ensemble model, which creates multiple smaller lattices for subsets of the features and averages their outputs instead of building a single huge lattice. Ensemble lattice models are constructed with tfl.configs.CalibratedLatticeEnsembleConfig; a calibrated lattice ensemble model applies piecewise-linear and categorical calibration to the input features, followed by an ensemble of lattice models and an optional output piecewise-linear calibration.
Explicit Lattice Ensemble Initialization
If you already know which subsets of features you want to feed into your lattices, you can set the lattices explicitly using feature names. The example below creates a calibrated lattice ensemble model with 5 lattices of 3 features each.
End of explanation
"""
explicit_ensemble_model.compile(
loss=tf.keras.losses.BinaryCrossentropy(),
metrics=[tf.keras.metrics.AUC()],
optimizer=tf.keras.optimizers.Adam(LEARNING_RATE))
explicit_ensemble_model.fit(
train_xs, train_ys, epochs=NUM_EPOCHS, batch_size=BATCH_SIZE, verbose=False)
print('Test Set Evaluation...')
print(explicit_ensemble_model.evaluate(test_xs, test_ys))
"""
Explanation: As before, we compile, fit, and evaluate our model.
End of explanation
"""
# This is a calibrated lattice ensemble model: inputs are calibrated, then
# combined non-linearly and averaged using multiple lattice layers.
random_ensemble_model_config = tfl.configs.CalibratedLatticeEnsembleConfig(
feature_configs=feature_configs,
lattices='random',
num_lattices=5,
lattice_rank=3,
output_min=min_label,
output_max=max_label - numerical_error_epsilon,
output_initialization=[min_label, max_label],
random_seed=42)
# Now we must set the random lattice structure and construct the model.
tfl.premade_lib.set_random_lattice_ensemble(random_ensemble_model_config)
# A CalibratedLatticeEnsemble premade model constructed from the given
# model config.
random_ensemble_model = tfl.premade.CalibratedLatticeEnsemble(
random_ensemble_model_config)
# Let's plot our model.
tf.keras.utils.plot_model(
random_ensemble_model, show_layer_names=False, rankdir='LR')
"""
Explanation: Random Lattice Ensemble
If you are not sure which subsets of features to feed into your lattices, another option is to use random subsets of features for each lattice. The example below creates a calibrated lattice ensemble model with 5 lattices of 3 features each.
End of explanation
"""
random_ensemble_model.compile(
loss=tf.keras.losses.BinaryCrossentropy(),
metrics=[tf.keras.metrics.AUC()],
optimizer=tf.keras.optimizers.Adam(LEARNING_RATE))
random_ensemble_model.fit(
train_xs, train_ys, epochs=NUM_EPOCHS, batch_size=BATCH_SIZE, verbose=False)
print('Test Set Evaluation...')
print(random_ensemble_model.evaluate(test_xs, test_ys))
"""
Explanation: As before, we compile, fit, and evaluate our model.
End of explanation
"""
# Make sure our feature configs have the same lattice size, no per-feature
# regularization, and only monotonicity constraints.
rtl_layer_feature_configs = copy.deepcopy(feature_configs)
for feature_config in rtl_layer_feature_configs:
feature_config.lattice_size = 2
feature_config.unimodality = 'none'
feature_config.reflects_trust_in = None
feature_config.dominates = None
feature_config.regularizer_configs = None
# This is a calibrated lattice ensemble model: inputs are calibrated, then
# combined non-linearly and averaged using multiple lattice layers.
rtl_layer_ensemble_model_config = tfl.configs.CalibratedLatticeEnsembleConfig(
feature_configs=rtl_layer_feature_configs,
lattices='rtl_layer',
num_lattices=5,
lattice_rank=3,
output_min=min_label,
output_max=max_label - numerical_error_epsilon,
output_initialization=[min_label, max_label],
random_seed=42)
# A CalibratedLatticeEnsemble premade model constructed from the given
# model config. Note that we do not have to specify the lattices by calling
# a helper function (like before with random) because the RTL Layer will take
# care of that for us.
rtl_layer_ensemble_model = tfl.premade.CalibratedLatticeEnsemble(
rtl_layer_ensemble_model_config)
# Let's plot our model.
tf.keras.utils.plot_model(
rtl_layer_ensemble_model, show_layer_names=False, rankdir='LR')
"""
Explanation: RTL Layer Random Lattice Ensemble
When using a random lattice ensemble, you can specify that the model use a single tfl.layers.RTL layer. Note that tfl.layers.RTL only supports monotonicity constraints, requires the same lattice size for all features, and does not support per-feature regularization. Using a tfl.layers.RTL layer lets you scale to much larger ensembles than using separate tfl.layers.Lattice instances.
The example below creates a calibrated lattice ensemble model with 5 lattices of 3 features each.
End of explanation
"""
rtl_layer_ensemble_model.compile(
loss=tf.keras.losses.BinaryCrossentropy(),
metrics=[tf.keras.metrics.AUC()],
optimizer=tf.keras.optimizers.Adam(LEARNING_RATE))
rtl_layer_ensemble_model.fit(
train_xs, train_ys, epochs=NUM_EPOCHS, batch_size=BATCH_SIZE, verbose=False)
print('Test Set Evaluation...')
print(rtl_layer_ensemble_model.evaluate(test_xs, test_ys))
"""
Explanation: As before, we compile, fit, and evaluate our model.
End of explanation
"""
# This is a calibrated lattice ensemble model: inputs are calibrated, then
# combined non-linearly and averaged using multiple lattice layers.
crystals_ensemble_model_config = tfl.configs.CalibratedLatticeEnsembleConfig(
feature_configs=feature_configs,
lattices='crystals',
num_lattices=5,
lattice_rank=3,
output_min=min_label,
output_max=max_label - numerical_error_epsilon,
output_initialization=[min_label, max_label],
random_seed=42)
# Now that we have our model config, we can construct a prefitting model config.
prefitting_model_config = tfl.premade_lib.construct_prefitting_model_config(
crystals_ensemble_model_config)
# A CalibratedLatticeEnsemble premade model constructed from the given
# prefitting model config.
prefitting_model = tfl.premade.CalibratedLatticeEnsemble(
prefitting_model_config)
# We can compile and train our prefitting model as we like.
prefitting_model.compile(
loss=tf.keras.losses.BinaryCrossentropy(),
optimizer=tf.keras.optimizers.Adam(LEARNING_RATE))
prefitting_model.fit(
train_xs,
train_ys,
epochs=PREFITTING_NUM_EPOCHS,
batch_size=BATCH_SIZE,
verbose=False)
# Now that we have our trained prefitting model, we can extract the crystals.
tfl.premade_lib.set_crystals_lattice_ensemble(crystals_ensemble_model_config,
prefitting_model_config,
prefitting_model)
# A CalibratedLatticeEnsemble premade model constructed from the given
# model config.
crystals_ensemble_model = tfl.premade.CalibratedLatticeEnsemble(
crystals_ensemble_model_config)
# Let's plot our model.
tf.keras.utils.plot_model(
crystals_ensemble_model, show_layer_names=False, rankdir='LR')
"""
Explanation: Crystals Lattice Ensemble
Premade models also provide a heuristic feature-arrangement algorithm called Crystals. To use it, we first train a prefitting model that estimates pairwise feature interactions; the final ensemble is then arranged so that features with more non-linear interactions end up in the same lattices.
The premade library provides helper functions for constructing the prefitting model configuration and extracting the crystals structure. Note that the prefitting model does not need to be fully trained, so a few epochs should be enough.
The example below creates a calibrated lattice ensemble model with 5 lattices of 3 features each.
End of explanation
"""
crystals_ensemble_model.compile(
loss=tf.keras.losses.BinaryCrossentropy(),
metrics=[tf.keras.metrics.AUC()],
optimizer=tf.keras.optimizers.Adam(LEARNING_RATE))
crystals_ensemble_model.fit(
train_xs, train_ys, epochs=NUM_EPOCHS, batch_size=BATCH_SIZE, verbose=False)
print('Test Set Evaluation...')
print(crystals_ensemble_model.evaluate(test_xs, test_ys))
"""
Explanation: As before, we compile, fit, and evaluate our model.
End of explanation
"""
|
icoxfog417/number_recognizer
|
machines/number_recognizer/number_recognizer.ipynb
|
mit
|
# Magic command so that plots are displayed inline in the notebook
%matplotlib inline
"""
Explanation: Number Recognizer
This time we will recognize handwritten digits drawn in the browser. Specifically, we will have the model guess which of the digits 0-9 was written on the canvas.
We will build the model that makes this prediction by following these steps:
Load the data
Build the model
Train
Evaluate
Save
End of explanation
"""
def load_data():
from sklearn import datasets
dataset = datasets.load_digits()
return dataset
digits = load_data()
print(digits.data.shape)
"""
Explanation: Load the Data
scikit-learn ships with a training dataset for handwritten digit recognition (pairs of handwritten digit images and the answer of which digit 0-9 each image shows), so we will use it.
End of explanation
"""
def show_image(image):
import matplotlib.pyplot as plt
plt.figure(1, figsize=(3, 3))
plt.imshow(image, cmap=plt.cm.gray_r, interpolation='nearest')
plt.show()
show_image(digits.images[0])
"""
Explanation: 1797 is the number of rows and 64 is the number of dimensions. The handwritten digit images are 8x8 pixels, so there are 64 pixel values per image (the values here are grayscale; with RGB there would be three times as many).
End of explanation
"""
def make_model():
from sklearn.linear_model import SGDClassifier
clf = SGDClassifier(alpha=0.0001, fit_intercept=True, n_iter=200)
return clf
classifier = make_model()
"""
Explanation: Create the Model
This is an image classification problem, so we use SGDClassifier, a simple linear classifier.
End of explanation
"""
classifier.fit(digits.data, digits.target)
"""
Explanation: Training the Model
Now that we have both the data and the model, let's train it.
End of explanation
"""
def calculate_accuracy(model, dataset):
from sklearn import metrics
predicted = model.predict(dataset.data)
score = metrics.accuracy_score(dataset.target, predicted)
return score
print(calculate_accuracy(classifier, digits))
"""
Explanation: Evaluate the Model
Let's measure the accuracy of the trained model. We can make predictions with predict, so we compare the predicted values with the actual answers (digits.target).
End of explanation
"""
from sklearn.externals import joblib
joblib.dump(classifier, "./machine.pkl")
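# Illustrative follow-up (not in the original notebook): the saved model can
# be loaded back and reused for prediction. Note that sklearn.externals.joblib
# is deprecated in newer scikit-learn releases, where you would import joblib
# directly instead.
loaded_classifier = joblib.load("./machine.pkl")
print(loaded_classifier.predict(digits.data[:1]))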
"""
Explanation: Store the Model
Finally, we save the trained model. Check the result on the application side.
End of explanation
"""
|
grantvk/aima-python
|
rl.ipynb
|
mit
|
from rl import *
"""
Explanation: Reinforcement Learning
This IPy notebook acts as supporting material for Chapter 21 Reinforcement Learning of the book Artificial Intelligence: A Modern Approach. This notebook makes use of the implementations in rl.py module. We also make use of implementation of MDPs in the mdp.py module to test our agents. It might be helpful if you have already gone through the IPy notebook dealing with Markov decision process. Let us import everything from the rl module. It might be helpful to view the source of some of our implementations. Please refer to the Introductory IPy file for more details.
End of explanation
"""
%psource PassiveTDAgent
"""
Explanation: Review
Before we start playing with the actual implementations let us review a couple of things about RL.
Reinforcement Learning is concerned with how software agents ought to take actions in an environment so as to maximize some notion of cumulative reward.
Reinforcement learning differs from standard supervised learning in that correct input/output pairs are never presented, nor sub-optimal actions explicitly corrected. Further, there is a focus on on-line performance, which involves finding a balance between exploration (of uncharted territory) and exploitation (of current knowledge).
-- Source: Wikipedia
In summary we have a sequence of state action transitions with rewards associated with some states. Our goal is to find the optimal policy (pi) which tells us what action to take in each state.
Passive Reinforcement Learning
In passive Reinforcement Learning the agent follows a fixed policy and tries to learn the Reward function and the Transition model (if it is not aware of that).
Passive Temporal Difference Agent
The PassiveTDAgent class in the rl module implements the Agent Program (notice the usage of word Program) described in Fig 21.4 of the AIMA Book. PassiveTDAgent uses temporal differences to learn utility estimates. In simple terms we learn the difference between the states and backup the values to previous states while following a fixed policy. Let us look into the source before we see some usage examples.
End of explanation
"""
from mdp import sequential_decision_environment
sequential_decision_environment
"""
Explanation: The Agent Program can be obtained by creating the instance of the class by passing the appropriate parameters. Because of the call method the object that is created behaves like a callable and returns an appropriate action as most Agent Programs do. To instantiate the object we need a policy(pi) and a mdp whose utility of states will be estimated. Let us import a GridMDP object from the mdp module. Figure 17.1 (sequential_decision_environment) is similar to Figure 21.1 but has some discounting as gamma = 0.9.
End of explanation
"""
# Action Directions
north = (0, 1)
south = (0,-1)
west = (-1, 0)
east = (1, 0)
policy = {
(0, 2): east, (1, 2): east, (2, 2): east, (3, 2): None,
(0, 1): north, (2, 1): north, (3, 1): None,
(0, 0): north, (1, 0): west, (2, 0): west, (3, 0): west,
}
"""
Explanation: Figure 17.1 (sequential_decision_environment) is a GridMDP object and is similar to the grid shown in Figure 21.1. The rewards in the terminal states are +1 and -1 and -0.04 in rest of the states. <img src="files/images/mdp.png"> Now we define a policy similar to Fig 21.1 in the book.
End of explanation
"""
our_agent = PassiveTDAgent(policy, sequential_decision_environment, alpha=lambda n: 60./(59+n))
"""
Explanation: Let us create our object now. We also use the same alpha as given in the footnote of the book on page 837.
End of explanation
"""
from mdp import value_iteration
"""
Explanation: The rl module also has a simple implementation to simulate iterations. The function is called run_single_trial. Now we can try our implementation. We can also compare the utility estimates learned by our agent to those obtained via value iteration.
End of explanation
"""
print(value_iteration(sequential_decision_environment))
"""
Explanation: The values calculated by value iteration:
End of explanation
"""
for i in range(200):
run_single_trial(our_agent,sequential_decision_environment)
print(our_agent.U)
"""
Explanation: Now the values estimated by our agent after 200 trials.
End of explanation
"""
%matplotlib inline
import matplotlib.pyplot as plt
def graph_utility_estimates(agent_program, mdp, no_of_iterations, states_to_graph):
graphs = {state:[] for state in states_to_graph}
for iteration in range(1,no_of_iterations+1):
run_single_trial(agent_program, mdp)
for state in states_to_graph:
graphs[state].append((iteration, agent_program.U[state]))
for state, value in graphs.items():
state_x, state_y = zip(*value)
plt.plot(state_x, state_y, label=str(state))
plt.ylim([0,1.2])
plt.legend(loc='lower right')
plt.xlabel('Iterations')
plt.ylabel('U')
"""
Explanation: We can also explore how these estimates vary with time by using plots similar to Fig 21.5a. To do so we define a function to help us with the same. We will first enable matplotlib using the inline backend.
End of explanation
"""
agent = PassiveTDAgent(policy, sequential_decision_environment, alpha=lambda n: 60./(59+n))
graph_utility_estimates(agent, sequential_decision_environment, 500, [(2,2)])
"""
Explanation: Here is a plot of state (2,2).
End of explanation
"""
graph_utility_estimates(agent, sequential_decision_environment, 500, [(2,2), (3,2)])
"""
Explanation: It is also possible to plot multiple states on the same plot.
End of explanation
"""
%psource QLearningAgent
"""
Explanation: Active Reinforcement Learning
Unlike Passive Reinforcement Learning in Active Reinforcement Learning we are not bound by a policy pi and we need to select our actions. In other words the agent needs to learn an optimal policy. The fundamental tradeoff the agent needs to face is that of exploration vs. exploitation.
QLearning Agent
The QLearningAgent class in the rl module implements the Agent Program described in Fig 21.8 of the AIMA Book. In Q-Learning the agent learns an action-value function Q which gives the utility of taking a given action in a particular state. Q-Learning does not required a transition model and hence is a model free method. Let us look into the source before we see some usage examples.
End of explanation
"""
q_agent = QLearningAgent(sequential_decision_environment, Ne=5, Rplus=2,
alpha=lambda n: 60./(59+n))
"""
Explanation: The Agent Program can be obtained by creating the instance of the class by passing the appropriate parameters. Because of the call method the object that is created behaves like a callable and returns an appropriate action as most Agent Programs do. To instantiate the object we need a mdp similar to the PassiveTDAgent.
Let us use the same GridMDP object we used above. Figure 17.1 (sequential_decision_environment) is similar to Figure 21.1 but has some discounting as gamma = 0.9. The class also implements an exploration function f which returns a fixed Rplus until the agent has visited the (state, action) pair Ne number of times. This is the same as the one defined on page 842 of the book. The method actions_in_state returns the actions possible in a given state. It is useful when applying max and argmax operations.
Let us create our object now. We also use the same alpha as given in the footnote of the book on page 837. We use Rplus = 2 and Ne = 5 as defined on page 843. Fig 21.7
End of explanation
"""
for i in range(200):
run_single_trial(q_agent,sequential_decision_environment)
"""
Explanation: Now to try out the q_agent we make use of the run_single_trial function in rl.py (which was also used above). Let us use 200 iterations.
End of explanation
"""
q_agent.Q
"""
Explanation: Now let us see the Q Values. The keys are state-action pairs, where the different actions correspond to:
north = (0, 1)
south = (0,-1)
west = (-1, 0)
east = (1, 0)
End of explanation
"""
from collections import defaultdict
U = defaultdict(lambda: -1000.) # Very Large Negative Value for Comparison see below.
for state_action, value in q_agent.Q.items():
state, action = state_action
if U[state] < value:
U[state] = value
U
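# Illustrative extra (not from the original notebook): the greedy policy
# implied by the learned Q values can be read off with an argmax over the
# actions recorded for each state.
pi_from_q = {}
for (state, action), value in q_agent.Q.items():
    if state not in pi_from_q or value > q_agent.Q[(state, pi_from_q[state])]:
        pi_from_q[state] = action
pi_from_q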
"""
Explanation: The Utility U of each state is related to Q by the following equation.
$$U(s) = \max_a Q(s, a)$$
Let us convert the Q Values above into U estimates.
End of explanation
"""
print(value_iteration(sequential_decision_environment))
"""
Explanation: Let us finally compare these estimates to value_iteration results.
End of explanation
"""
|
dsbrown1331/CoRL2019-DREX
|
drex-mujoco/learner/baselines/docs/viz/viz.ipynb
|
mit
|
!pip install git+https://github.com/openai/baselines > ~/pip_install_baselines.log
"""
Explanation: Loading and visualizing results (open in colab)
In order to compare performance of algorithms, we often would like to visualize learning curves (reward as a function of time steps), or some other auxiliary information about learning aggregated into a plot. Baselines repo provides tools for doing so in several different ways, depending on the goal.
Preliminaries / TensorBoard
First, let us install baselines repo from github
End of explanation
"""
!OPENAI_LOGDIR=$HOME/logs/cartpole-ppo OPENAI_LOG_FORMAT=csv python -m baselines.run --alg=ppo2 --env=CartPole-v0 --num_timesteps=30000 --nsteps=128
"""
Explanation: For all algorithms in baselines summary data is saved into a folder defined by logger. By default, a folder $TMPDIR/openai-<date>-<time> is used; you can see the location of logger directory at the beginning of the training in the message like this:
Logging to /var/folders/mq/tgrn7bs17s1fnhlwt314b2fm0000gn/T/openai-2018-10-29-15-03-13-537078
The location can be changed by changing OPENAI_LOGDIR environment variable. For instance,
End of explanation
"""
from baselines.common import plot_util as pu
results = pu.load_results('~/logs/cartpole-ppo')
"""
Explanation: Notice also the usage of OPENAI_LOG_FORMAT environment variable - this allows us to suppress printing to stdout.
Permissible values for OPENAI_LOG_FORMAT environment variables are stdout, log, csv, and tensorboard (multiple values can be comma-separated).
The latter one (tensorboard) dumps values in tensorboard format, which allows for their viewing in TensorBoard. TensorBoard provides nice visualization and in many cases is the easiest way to look at the summary of the results. However, if tensorboard visualization is not sufficient, read on...
Loading results and plotting using matplotlib
Bаselines provides helper functions to load the summaries of the results as pandas dataframes.
For instance, the following:
End of explanation
"""
import matplotlib.pyplot as plt
import numpy as np
r = results[0]
plt.plot(np.cumsum(r.monitor.l), r.monitor.r)
"""
Explanation: will search for all folders with baselines-compatible results in ~/logs/cartpole-ppo and subfolders and return a list of Result objects. Each Result object is a named tuple with the following fields:
dirname: str - name of the folder from which data was loaded
metadata: dict - dictionary with various metadata (read from metadata.json file)
progress: pandas.DataFrame - tabular data saved by logger as a pandas dataframe. Available if csv is in logger formats.
monitor: pandas.DataFrame - raw episode data (length, episode reward, timestamp). Available if environment wrapped with Monitor wrapper
Thus, a learning curve from a single run can be plotted as follows (note the cumulative sum trick to convert the lengths of the episodes into the number of time steps taken so far)
End of explanation
"""
plt.plot(np.cumsum(r.monitor.l), pu.smooth(r.monitor.r, radius=10))
"""
Explanation: The raw learning curve from a single run can be very noisy. To smoothen it and analyze if any learning actually takes place, we can use plot_util.smooth() function:
End of explanation
"""
plt.plot(r.progress.total_timesteps, r.progress.eprewmean)
"""
Explanation: Similar curve can be obtained by using logger summaries (instead of raw episode data in monitor.csv):
End of explanation
"""
!for seed in $(seq 0 5); do OPENAI_LOG_FORMAT=csv OPENAI_LOGDIR=$HOME/logs/cartpole-ppo-hp/b32-$seed python -m baselines.run --alg=ppo2 --env=CartPole-v0 --num_timesteps=3e4 --seed=$seed --nsteps=32; done
!for seed in $(seq 0 5); do OPENAI_LOG_FORMAT=csv OPENAI_LOGDIR=$HOME/logs/cartpole-ppo-hp/b128-$seed python -m baselines.run --alg=ppo2 --env=CartPole-v0 --num_timesteps=3e4 --seed=$seed --nsteps=128; done
"""
Explanation: Note, however, that raw episode data is stored by the Monitor wrapper, and hence looks similar for all algorithms, whereas progress data is handled by the algorithm itself, and hence can vary (column names, type of data available) between algorithms.
Plotting: many curves
While the loading and plotting functions described above in principle give you access to any slice of the training summaries, sometimes it is necessary to plot and compare many training runs (multiple algorithms, multiple seeds for random number generator), and usage of the functions above can get tedious and messy. For that case, baselines.common.plot_util provides convenience function plot_results that handles multiple Result objects that need to be routed in multiple plots. Consider the following bash snippet that runs ppo2 with cartpole with 6 different seeds for 30k time steps, first with rollout batch size 32, and then with batch size 128 (note that the next cell will take a little while to run):
End of explanation
"""
results = pu.load_results('~/logs/cartpole-ppo-hp'); print(len(results))
"""
Explanation: The results of the 12 runs from the cell above can be loaded just as before (we discard the first result, which is actually from the very first run in the previous section):
End of explanation
"""
pu.plot_results(results)
"""
Explanation: But how do we plot all 12 of them in a sensible manner? baselines.common.plot_util module provides plot_results function to do just that:
End of explanation
"""
pu.plot_results(results, average_group=True)
"""
Explanation: The results are split into two groups based on batch size and are plotted on a separate graph. More specifically, by default plot_results considers digits after dash at the end of the directory name to be seed id and groups the runs that differ only by those together.
Showing all seeds on the same plot may be somewhat hard to comprehend and analyse. We can instead average over all seeds via the following command:
End of explanation
"""
pu.plot_results(results, average_group=True, split_fn=lambda _: '')
"""
Explanation: The lighter shade shows the standard deviation of the data, and the darker shade the error in the estimate of the mean (that is, the standard deviation divided by the square root of the number of seeds). Note that averaging over seeds requires resampling to a common grid, which, in turn, requires smoothing (in signal-processing terms, we need to low-pass filter before resampling to avoid aliasing effects). You can change the amount of smoothing by adjusting the resample and smooth_step arguments to achieve the desired effect. See the docstring of the plot_util functions for more info.
To plot both groups on the same graph, we can use the following:
End of explanation
"""
pu.plot_results(results, average_group=True, split_fn=lambda _: '', shaded_std=False)
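# Hedged example of the smoothing controls mentioned above: resample,
# smooth_step and shaded_err are plot_results arguments named in the text,
# but the values used here are placeholders rather than recommended settings.
pu.plot_results(results, average_group=True, split_fn=lambda _: '',
                shaded_err=False, resample=512, smooth_step=1.0)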
"""
Explanation: Option split_fn=lambda _:'' effectively disables splitting, so that all curves end up on the same panel.
Now, with many groups the overlapping shaded regions may start looking messy. We can disable either light shaded region (corresponding to standard deviation of the curves in the group) or darker shaded region (corresponding to the error in mean estimate) by using shaded_std=False or shaded_err=False options respectively. For instance,
End of explanation
"""
|
dandtaylor/MetroShare
|
bike_station_locations.ipynb
|
mit
|
import pickle
import xml.etree.ElementTree as ET
import urllib.request
"""
Explanation: Import and save locations of bikeshare stations
End of explanation
"""
xml_path = 'https://feeds.capitalbikeshare.com/stations/stations.xml'
tree = ET.parse(urllib.request.urlopen(xml_path))
root = tree.getroot()
"""
Explanation: Import Capital Bikeshare station information .xml file
End of explanation
"""
station_location = dict()
for child in root:
tmp_lst = [float(child[4].text), float(child[5].text)]
station_location[child[1].text] = tmp_lst
station_location['10th & E St NW']
"""
Explanation: create dictionary of bikeshare station (key) and its location (value)
be sure to convert location data into floats
End of explanation
"""
pickle.dump( station_location, open( "bike_location.p", "wb" ) )
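# Quick illustrative check (not in the original notebook): load the pickle
# back to confirm the dictionary round-trips.
with open("bike_location.p", "rb") as f:
    reloaded = pickle.load(f)
print(len(reloaded), reloaded['10th & E St NW'])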
"""
Explanation: save dictionary of bikeshare stations to pickle file
End of explanation
"""
|
Se7ge/mlhep2015_starterkit
|
MLHEP 2015 starterkit.ipynb
|
mit
|
! pwd
hits_train = pd.read_csv("mlhep2015_starterkit/data/train.csv", index_col='global_id')
hits_train.head()
hits_test = pd.read_csv("mlhep2015_starterkit/data/test.csv", index_col='global_id')
hits_test.head()
"""
Explanation: The data from Kaggle is already here in the "data" folder. Let's take a look at it.
End of explanation
"""
set(hits_train.loc[(hits_train.energy_deposit == 0)].label)
"""
Explanation: Naive manual analysis
Obviously a not-so-good algorithm, used primarily for illustrating IPython.
First, check whether a signal wire can have energy_deposit = 0
End of explanation
"""
candidates = hits_train.loc[(hits_train.energy_deposit > 0)]
"""
Explanation: It can't! So far so good.
End of explanation
"""
plot_sample_indices = np.random.choice(np.arange(len(candidates)), size=50000)
hits_to_plot = candidates.iloc[plot_sample_indices]
fig, ax = plt.subplots()
signal_hits = hits_to_plot.loc[(hits_to_plot.label == 1)]
noise_hits = hits_to_plot.loc[(hits_to_plot.label == 2)]
ax.scatter(noise_hits.energy_deposit, noise_hits.relative_time, c='b', edgecolors='none', alpha=0.3)
ax.scatter(signal_hits.energy_deposit, signal_hits.relative_time, c='r', edgecolors='none')
ax.set_xscale('log')
ax.set_xlim(1e-9, 1e-2)
ax.set_xlabel("energy_deposit")
ax.set_ylabel("relative_time")
"""
Explanation: Try plotting time vs. energy vs. label. It's too big, so we'll take a sample.
End of explanation
"""
fig, ax = plt.subplots()
ax.scatter(np.log(noise_hits.energy_deposit)**2, noise_hits.relative_time**2, c='b', edgecolors='none', alpha=0.3)
ax.scatter(np.log(signal_hits.energy_deposit)**2, signal_hits.relative_time**2, c='r', edgecolors='none')
high_relative_time = 1.35e6
low_relative_time = 256300
low_points = np.array([[160, 0], [194, low_relative_time], [229, low_relative_time], [200, 0]])
high_points = np.array([[164, 1.4e6], [195, 1.4e6], [195, high_relative_time], [164, high_relative_time],
[164, 1.4e6]])
ax.plot(low_points[:, 0], low_points[:, 1], 'g', lw=3)
ax.plot(high_points[:, 0], high_points[:, 1], 'g', lw=3)
ax.set_xlabel(r"$\log(\mathrm{energy\_deposit})^2$")
ax.set_ylabel(r"$\mathrm{relative\_time}^2$")
top_line_coeffs = np.polyfit(low_points[0:2, 0], low_points[0:2, 1], deg=1)
bottom_line_coeffs = np.polyfit(low_points[2:4, 0], low_points[2:4, 1], deg=1)
def is_signal(event):
log_energy_squared = np.log(event.energy_deposit)**2
relative_time_squared = event.relative_time**2
return (((relative_time_squared < low_relative_time) & (
relative_time_squared < np.poly1d(top_line_coeffs)(log_energy_squared)) & (
relative_time_squared > np.poly1d(bottom_line_coeffs)(log_energy_squared))) |
((relative_time_squared > high_relative_time) &
(log_energy_squared > 164) & (log_energy_squared < 195)))
"""
Explanation: Looks like we could use a selection rule.
End of explanation
"""
np.log(0)
hits_train.iloc[1]
is_signal(hits_train.iloc[1])
"""
Explanation: Also, np.log(0) is -inf. And it is correctly handled.
End of explanation
"""
from sklearn.metrics import roc_auc_score
hits_train_is_signal = (hits_train.label == 1)
roc_auc_score(hits_train_is_signal, is_signal(hits_train))
"""
Explanation: Check how well the model describes the data.
End of explanation
"""
prediction = pd.DataFrame({"prediction": is_signal(hits_test.loc[hits_test.energy_deposit > 0]).astype(np.int)})
prediction.to_csv("naive_manual_prediction.csv", index_label='global_id')
"""
Explanation: Let's make a prediction for submission. Take note of the format: only the events with positive energy.
End of explanation
"""
from IPython.display import FileLink
FileLink("naive_manual_prediction.csv")
"""
Explanation: Download your predictions from the cluster.
End of explanation
"""
from sklearn.tree import DecisionTreeClassifier
"""
Explanation: Naive machine learning
End of explanation
"""
from sklearn.cross_validation import cross_val_score
cv_gini = cross_val_score(DecisionTreeClassifier(criterion='gini'),
hits_train[['energy_deposit', 'relative_time']].values, (hits_train.label == 1).values.astype(np.int),
scoring='roc_auc')
print(cv_gini.mean(), cv_gini.std())
cv_entropy = cross_val_score(DecisionTreeClassifier(criterion='entropy'),
hits_train[['energy_deposit', 'relative_time']].values, (hits_train.label == 1).values.astype(np.int),
scoring='roc_auc')
print(cv_entropy.mean(), cv_entropy.std())
classifier = DecisionTreeClassifier(criterion='gini')
classifier.fit(hits_train[['energy_deposit', 'relative_time']], (hits_train.label == 1))
candidates = hits_test.loc[hits_test.energy_deposit > 0]
ml_prediction = pd.DataFrame({
"prediction": classifier.predict_proba(candidates[[
'energy_deposit', 'relative_time']])[:, 1]}, index=candidates.index)
ml_prediction.to_csv("naive_ml_prediction.csv", index_label='global_id')
"""
Explanation: CV might take some time
End of explanation
"""
FileLink("naive_ml_prediction.csv")
from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier
rf_classifier = RandomForestClassifier(criterion='entropy')
rf_classifier.fit(hits_train[['energy_deposit', 'relative_time']], (hits_train.label == 1))
rf_candidates = hits_test.loc[hits_test.energy_deposit > 0]
rf_ml_prediction = pd.DataFrame({
"prediction": rf_classifier.predict_proba(rf_candidates[[
'energy_deposit', 'relative_time']])[:, 1]}, index=rf_candidates.index)
rf_ml_prediction.to_csv("rf_ml_prediction.csv", index_label='global_id')
FileLink("rf_ml_prediction.csv")
"""
Explanation: Moral: sometimes you can outdo simple machine learning by thinking. Corollary: the best result is achieved by combining the approaches.
End of explanation
"""
|
karimsayadi/karimsayadi.github.io
|
teaching/python2/notebooks/Exercices_220217.ipynb
|
gpl-3.0
|
import sys, os
import re
from os import listdir
from os.path import isfile, join
import random
"""
Explanation: Exercise 1
In this exercise we will create a csv file containing two columns. The first holds the file name and the second its identifier. In a first step we will walk through all the files in a folder, and in a second step we will extract the identifier from the name of each file we visit.
The Python program contains a single function that takes as input the path of the folder containing the files and produces as output a csv file with two columns. We will use the os, csv, re modules and the open() and write() functions.
We first import the modules we need.
End of explanation
"""
def fromFileToCSV (folderpath,csvfilename) :
files = [f for f in listdir(folderpath) if isfile(join(folderpath, f))]
random.shuffle(files)
for filepath in files:
if filepath.endswith(".png"):
label = re.findall("^(\d+)_",filepath)
csvLine = filepath+","+str(label[0])
print csvLine
"""
The with open below replaces all of these lines:
myfile = open(join(folderpath,csvfilename), "a")
content = myfile.read()
content = content + "\n" + csvLine
myfile.write(content)
myfile.write("\n")
myfile.close()
"""
with open(join(folderpath,csvfilename), "a") as myfile:
myfile.write(csvLine)
myfile.write("\n")
"""
Explanation: We then create our function, which we will call fromFileToCSV. This function takes two arguments: the path to the folder and the name of the csv file. The signature of the function is as follows: fromFileToCSV(folderpath, csvfilename)
End of explanation
"""
def main ():
fromFileToCSV("./lines","fichier_auteur.csv")
if __name__ == '__main__':
"""
If I want to run a Python file from the console, taking into account the arguments given on the console.
For example: >> python filename.py folderName csvFileName.
if len(sys.argv) == 3:
fromFileToCSV(sys.argv[1],sys.argv[2])
"""
main()
"""
Explanation: The files variable is a list containing the names of all the files under the path stored in the folderpath variable.
We use random.shuffle(files) to randomly shuffle the position of each file name in the files list.
We loop over the files list with a for loop, and for each file with the .png extension we store in the label variable the first numeric characters present in the file name filepath.
We initialize the csvLine variable with the file name and the numeric characters captured by the regular expression "^(\d+)_".
We open the csv file given as an argument to the function and write the line contained in csvLine.
End of explanation
"""
import sys, os
import shutil
import re
import random
from os import listdir
from os.path import isfile, join
def file_to_files (original_file_path):
if not os.path.exists("lines_folder"):
os.makedirs("lines_folder")
file_counter = 0
my_file = open(original_file_path)
lines_liste = my_file.readlines()
for line in lines_liste:
file_counter += 1
my_new_file = open("lines_folder/"+str(file_counter)+'_processed_tweet.txt', 'a')
my_new_file.write(line)
my_new_file.close()
my_file.close()
new_folder = "lines_folder"
return new_folder
"""
Explanation: Exercise 2
In this exercise we will build a corpus, or collection of documents, from a text file. The file contains several lines, each corresponding to a tweet. First, after opening the file, we will create a new file for each of its lines. This step gives us a folder containing as many files as there are lines in the original file. Then, according to proportions given as input parameters, we will split the set of files into three folders.
The program contains two functions: the first takes the original file as input and produces a folder with as many files as there are lines in the original file. The second takes as input the relative or absolute path of the newly created folder together with three proportions; that is, it will produce three folders where, for example, 20% of the files are copied into the first folder, 30% into the second and 50% into the third.
End of explanation
"""
def from_folder_to_folders (original_folder_path, percentageFolder1, percentageFolder2, percentageFolder3):
list_fichiers_dans_dossier = listdir(original_folder_path)
files = [f for f in list_fichiers_dans_dossier if isfile(join(original_folder_path,f))]
"""
These instructions are equivalent to building the files list above:
for f in list_fichiers_dans_dossier:
    if isfile(join(original_folder_path, f)):
        files.append(f)
"""
"""
Documentation for the random module: https://docs.python.org/2/library/random.html
"""
"""
We shuffle the order of the files in the list to get more diversity in each folder.
"""
random.shuffle(files)
nbFilesFolder1 = int((float(percentageFolder1)/100)*len(files))
nbFilesFolder2 = int((float(percentageFolder2)/100)*len(files))
nbFilesFolder3 = int((float(percentageFolder3)/100)*len(files))
if not os.path.exists(join(original_folder_path,"Folder1")):
os.makedirs(join(original_folder_path,"Folder1"))
if not os.path.exists(join(original_folder_path,"Folder2")):
os.makedirs(join(original_folder_path,"Folder2"))
if not os.path.exists(join(original_folder_path,"Folder3")):
os.makedirs(join(original_folder_path,"Folder3"))
"""
enumerate returns the index together with the contents of the files list.
"""
for j,filepath in enumerate(files):
# e.g. sourceFolder = lines_folder/11314_processed_tweet.txt
# "lines_folder/Folder2/"
sourceFolder = os.path.join(original_folder_path,filepath)
if (j > nbFilesFolder1 and j < nbFilesFolder1+nbFilesFolder2):
print "copying the files to folder 2"
if filepath.endswith(".txt"):
shutil.copy2(sourceFolder,join(original_folder_path,"Folder2/"))
elif (j > nbFilesFolder1+nbFilesFolder2 and j < len(files)):
print "copying the files to folder 3"
if filepath.endswith(".txt"):
shutil.copy2(sourceFolder,join(original_folder_path,"Folder3/"))
else:
print "copytin the files to folder 1"
if filepath.endswith(".txt"):
shutil.copy2(sourceFolder, join(original_folder_path,"Folder1/"))
"""
Explanation: We first check with the os.path.exists function whether the folder in which we will put each file (one per line of the file given as an argument) already exists.
We initialize a line counter in a numeric variable we call file_counter. We use the value of this variable to give a unique name to each newly created file.
We then open the file original_file_path and go through it line by line. For each line we create a new file and write the line we have just read into it.
The function returns the name of the folder containing all the files we have just created.
End of explanation
"""
def main():
file_to_files("data/preprocessedP.txt")
#from_folder_to_folders(file_to_files("data/preprocessedP.txt"), 50, 30, 20)
if __name__ == '__main__':
main()
"""
Explanation: The goal of the from_folder_to_folders function is to go through the folder returned by file_to_files and copy its files into three folders according to three proportions, given as arguments and expressed as percentages.
First we build a list containing all the files in the folder returned by file_to_files.
Then, for each file in this list, we get the relative path of the file and check that it is indeed a text file.
We compute the number of files to put in each of the three folders with the following equation:
\begin{equation}
\mathrm{files\_per\_folder} = \frac{\mathrm{percentage}}{100} \times \mathrm{total\_number\_of\_files}
\end{equation}
Once we have the list of files, we walk through it according to the proportions provided as arguments and use the copy2 function, which takes as input the relative path of the source and the relative path of the destination to which the file is copied.
End of explanation
"""
|
nudomarinero/mltier1
|
PanSTARRS_WISE_pre_ml.ipynb
|
gpl-3.0
|
import numpy as np
from astropy.table import Table
from astropy import units as u
from astropy.coordinates import SkyCoord
import pickle
from mltier1 import get_center, get_n_m, estimate_q_m, Field
%pylab inline
field = Field(170.0, 190.0, 45.5, 56.5)
"""
Explanation: PanSTARRS - WISE crossmatch: Pre-configure the ML parameters
In this step we will prepare the auxiliary variables used for the ML
End of explanation
"""
panstarrs_full = Table.read("panstarrs_u2.fits")
wise_full = Table.read("wise_u2.fits")
panstarrs = field.filter_catalogue(
panstarrs_full,
colnames=("raMean", "decMean"))
# Free memory
del panstarrs_full
wise = field.filter_catalogue(
wise_full,
colnames=("raWise", "decWise"))
# Free memory
del wise_full
"""
Explanation: Load the data
Load the catalogues
End of explanation
"""
coords_panstarrs = SkyCoord(panstarrs['raMean'], panstarrs['decMean'], unit=(u.deg, u.deg), frame='icrs')
coords_wise = SkyCoord(wise['raWise'], wise['decWise'], unit=(u.deg, u.deg), frame='icrs')
"""
Explanation: Coordinates
As we will use the coordinates to make a cross-match, we need to load them
End of explanation
"""
bin_list = np.linspace(12., 30., 1801)
center = get_center(bin_list)
n_m = get_n_m(panstarrs["i"], bin_list, field.area)
"""
Explanation: Compute the ML parameters
Number of sources per magnitude in i-band
End of explanation
"""
plot(center, n_m);
"""
Explanation: Number of sources per magnitude per unit area in the selected region (cumulative distribution). This sets the number of background sources. The units of this are N/(square arcsec) per magnitude.
End of explanation
"""
radius = 5 # arcseconds
q_m = estimate_q_m(panstarrs["i"], bin_list, n_m, coords_wise, coords_panstarrs, radius=5)
plot(center, q_m);
"""
Explanation: Compute real(m) and q(m)
The first step is to crossmatch the catalogues to make an estimation
End of explanation
"""
q0 = 0.62
"""
Explanation: Estimated $Q_0$
End of explanation
"""
pickle.dump([bin_list, center, q0, n_m, q_m], open("pw_params.pckl", 'wb'))
"""
Explanation: Save the parameters
End of explanation
"""
|
BrownDwarf/ApJdataFrames
|
notebooks/Allers2006.ipynb
|
mit
|
%pylab inline
import seaborn as sns
import warnings
warnings.filterwarnings("ignore")
import pandas as pd
"""
Explanation: ApJdataFrames: Allers2006
Title: Young, Low-Mass Brown Dwarfs with Mid-Infrared Excesses
Authors: AKCJ
Data is from this paper:
http://iopscience.iop.org/0004-637X/644/1/364/
End of explanation
"""
names = ['Source Number','RA (J2000.0)','DEC (J2000.0)','I', 'Ierr','J', 'Jerr',
'H', 'Herr','Ks', 'Kserr','[3.6]', '[3.6]err','[4.5]', '[4.5]err',
'[5.8]', '[5.8]err','[8.0]', '[8.0]err']
tbl3 = pd.read_csv("http://iopscience.iop.org/0004-637X/644/1/364/fulltext/64106.tb3.txt",
sep=r'\t|\\pm', names = names)
tbl3
"""
Explanation: Table 3 - Photometry
End of explanation
"""
tbl3.drop(11, axis=0, inplace=True)
"""
Explanation: Drop source 12 because it was shown to be a galaxy.
End of explanation
"""
tbl3.to_csv('../data/Allers2006/tbl3.csv', index=False)
"""
Explanation: %%bash
mkdir ../data/Allers2006
End of explanation
"""
mgs2010 = pd.DataFrame([(1, 11.5), (2,6), (4,2), (5,11), (6,5), (7,6),
(8,3),(9,5.5),(10,3),(11,8), (13,6), (14,8), (15,2),
(16,4), (17,12.5),(18,7), (19,4.5)], columns=["Source Number", "SpT"])
out = pd.merge(tbl3, mgs2010, on="Source Number", how="left")
out.head(2)
out.to_csv("../data/Allers2006/mgs2010_bonus.csv", index=False)
"""
Explanation: Bonus: Get the spectral types from my Cool Stars paper.
I published the spectral types for Katelyn's 2006 sources in this cool stars proceedings.
The irony is that there is no machine readable table.
End of explanation
"""
|
LSSTC-DSFP/LSSTC-DSFP-Sessions
|
Sessions/Session09/Day4/workbook_globalsignals.ipynb
|
mit
|
n_bins = 8192 ## number of total frequency bins in a FT segment; same as number of time bins in the light curve
dt = 1./16. # time resolution of the output light curve
df = 1. / dt / n_bins
"""
Explanation: Global Signals in Time Series Data
By Abigail Stevens
Problem 1: Timmer and Koenig algorithm
The algorithm outlined in Timmer & Koenig 1995 lets you define the shape of your power spectrum (a power law with some slope, a Lorentzian, a sum of a couple Lorentzians and a power law, etc.) then generate the random phases and amplitudes of the Fourier transform to simulate light curves defined by the power spectral shape. This is a great simulation tool to have in your back pocket (or, "maybe useful someday" github repo).
Define some basic parameters for the power spectrum and resultant light curve
End of explanation
"""
def lorentzian(v, v_0, gamma):
""" Gives a Lorentzian centered on v_0 with a FWHM of gamma """
numerator = gamma / (np.pi * 2.0)
denominator = (v - v_0) ** 2 + (1.0/2.0 * gamma) ** 2
L = numerator / denominator
return L
def powerlaw(v, beta):
"""Gives a powerlaw of (1/v)^-beta """
pl = np.zeros(len(v))
pl[1:] = v[1:] ** (-beta)
pl[0] = np.inf
return pl
"""
Explanation: 1a. Make an array of Fourier frequencies
Yes you can do this with scipy, but the order of frequencies in a T&K power spectrum is different than what you'd get by default from a standard FFT of a light curve.
You want the zero frequency to be in the middle (at index n_bins/2) of the frequency array. The positive frequencies should have two more indices than the negative frequencies, because of the zero frequency and nyquist frequency. You can either do this with np.arange or with special options in fftpack.fftfreq.
1b. Define a Lorentzian function and power law function for the shape of the power spectrum
End of explanation
"""
def gaussian(v, mean, std_dev):
"""
Gives a Gaussian with a mean of mean and a standard deviation of std_dev
FWHM = 2 * np.sqrt(2 * np.log(2))*std_dev
"""
exp_numerator = -(v - mean)**2
exp_denominator = 2 * std_dev**2
G = np.exp(exp_numerator / exp_denominator)
return G
def powerlaw_expdecay(v, beta, alpha):
"""Gives a powerlaw of (1/v)^-beta with an exponential decay e^{-alpha*v} """
pl_exp = np.where(v != 0, (1.0 / v) ** beta * np.exp(-alpha * v), np.inf)
return pl_exp
def broken_powerlaw(v, v_b, beta_1, beta_2):
"""Gives two powerlaws, (1/v)^-beta_1 and (1/v)^-beta_2
that cross over at break frequency v_b."""
c = v_b ** (-beta_1 + beta_2) ## scale factor so that they're equal at the break frequency
pl_1 = v[np.where(v <= v_b)] ** (-beta_1)
pl_2 = c * v[np.where(v > v_b)] ** (-beta_2)
pl = np.append(pl_1, pl_2)
return pl
"""
Explanation: Now the T&K algorithm. I've transcribed the 'recipe' section of the T&K95 paper, which you will convert to lines of code.
1c. Choose a power spectrum $S(\nu)$.
We will use the sum of a single Lorentzian (a QPO with a centroid frequency of 0.5 Hz and a FWHM of 0.01 Hz) and a Poisson-noise power law. The QPO should be 100 times larger in amplitude than the power law. A hedged sketch of one possible implementation appears after this explanation.
1d. For each Fourier frequency $\nu_i$ draw two gaussian-distributed random numbers, multiply them by $$\sqrt{\frac{1}{2}S(\nu_i)}$$ and use the result as the real and imaginary part of the Fourier transform $F$ of the desired data.
In the case of an even number of data points, for reason of symmetry $F(\nu_{Nyquist})$ is always real. Thus only one gaussian distributed random number has to be drawn.
1e. To obtain a real valued time series, choose the Fourier components for the negative frequencies according to $F(-\nu_i)=F*(\nu_i)$ where the asterisk denotes complex conjugation.
Append to make one Fourier transform array. Check that your T&K Fourier transform has length n_bins. Again, for this algorithm, the zero Fourier frequency is in the middle of the array, the negative Fourier frequencies are in the first half, and the positive Fourier frequencies are in the second half.
1f. Obtain the time series by backward Fourier transformation of $F(\nu)$ from the frequency domain to the time domain.
Note: I usually use .real after an iFFT to get rid of any lingering 1e-10 imaginary factors.
Congratulations!
1g. Plot the power spectrum of your FT (only the positive frequencies) next to the light curve it makes.
Remember: $$P(\nu_i)=|F(\nu_i)|^2$$
You'll want to change the x scale of your light curve plot to be like 20 seconds in length, and only use the positive Fourier frequencies when plotting the power spectrum.
Yay!
1h. Play around with your new-found simulation powers (haha, it's a pun!)
Make more power spectra with different features -- try at least 5 or 6, and plot each of them next to the corresponding light curve. Try red noise, flicker noise, a few broad Lorentzians at lower frequency, multiple QPOs, a delta function, etc.
Here are some other functions you can use to define shapes of power spectra. This exercise is to help build your intuition of what a time signal looks like in the Fourier domain and vice-versa.
End of explanation
"""
fig, ax = plt.subplots(1,1, figsize=(8,5))
ax.plot(rb_freq, rb_pow, linewidth=2.0)
ax.set_xscale('log')
ax.set_yscale('log')
ax.set_xlabel(r'Frequency (Hz)', fontproperties=font_prop)
ax.tick_params(axis='x', labelsize=16, bottom=True, top=True,
labelbottom=True, labeltop=False)
ax.tick_params(axis='y', labelsize=16, left=True, right=True,
labelleft=True, labelright=False)
plt.show()
"""
Explanation: 2. More realistic simulation with T&K
Now you're able to simulate the power spectrum of a single segment of a light curve. However, as you learned this morning, we usually use multiple (~50+) segments of a light curve, take the power spectrum of each segment, and average them together.
2a. Turn the code from 1d to 1e into a function make_TK_seg
Make it so that you can give a different random seed to each segment.
2b. Make the Fourier transform for a given power shape (as in Problem 1)
Use a Lorentzian QPO + Poisson noise power shape at a centroid frequency of 0.5 Hz and a full width at half maximum (FWHM) of 0.01 Hz. Make the QPO 100 times stronger than the Poisson noise power-law (see the sketch after this explanation).
2c. Put make_TK_seg in a loop to do for 50 segments.
Make an array of integers that can be your random gaussian seed for the TK algorithm (otherwise, you run the risk of creating the exact same Fourier transform every time, and that will be boring).
Keep a running average of the power spectrum of each segment (like we did this morning in problem 2).
2d. Compute the error on the average power
The error on the power at index $i$ is
$$ \delta P_i = \frac{P_i}{\sqrt{M}} $$
where M is the number of segments averaged together.
2e. Use the re-binning algorithm described in the morning's workbook to re-bin the power spectrum by a factor of 1.05.
Plot the average power spectrum
Remember to use log scale for the y-axis and probably the x-axis too!
End of explanation
"""
def lorentz_q(v, v_peak, q, rms):
"""
Form of the Lorentzian function defined in terms of
peak frequency v_peak and quality factor q
q = v_peak / fwhm
with the integrated rms of the QPO as the normalizing factor.
e.g. see Pottschmidt et al. 2003, A&A, 407, 1039 for more info
"""
f_res = v_peak / np.sqrt(1.0+(1.0/(4.0*q**2)))
r = rms / np.sqrt(0.5-np.arctan(-2.0*q)/np.pi)
lorentz = ((1/np.pi)*2*r**2*q*f_res) / (f_res**2+(4*q**2*(v-f_res)**2))
return lorentz
"""
Explanation: 2f. Re-do 2b through the plot above but slightly changing the power spectrum shape in each segment.
Maybe you change the centroid frequency of the QPO, or the normalizing factors between the two components, or the slope of the power-law.
Bonus problems:
1. Use a different definition of the Lorentzian (below) to make a power spectrum.
Follow the same procedure. Start off with just one segment. Use the rms as the normalizing factor.
2. Using what you learned about data visualization earlier this week, turn the plots in this notebook (and the QPO one, if you're ambitious) into clear and easy-to-digest, publication-ready plots.
End of explanation
"""
|
melissawm/oceanobiopython
|
Notebooks/Aula_5.ipynb
|
gpl-3.0
|
import numpy as np
A = np.zeros((10,10))
print(A)
"""
Explanation: NumPy
To work with matrices and vectors, and to perform mathematical operations on these objects, we use the NumPy library and a specific data format: the numpy array, or ndarray, a homogeneous multidimensional data structure, i.e. a table of elements (usually numbers), all of the same type, indexed by a tuple of positive integers. The dimensions are called axes, and the number of axes is the rank. For example, a vector has rank 1; a matrix (of any size) has rank 2.
End of explanation
"""
A = np.matrix([[10,3],[1,2],[3,1]])
print(A)
lista = [1,2,3,4]
np.asarray(lista)
dir(A)
print("Matriz: ",A)
print("Diagonal: ", A.diagonal())
"""
Explanation: Note: numpy.array is not the same as the standard Python class array.array, which only handles one-dimensional vectors and offers far fewer features.
End of explanation
"""
np.size(A)
np.shape(A)
np.ndim(A)
"""
Explanation: Some concepts are a little different in numpy (compared with MATLAB). Be careful:
End of explanation
"""
A = np.array([[1,2],[3,4]])
B = np.array([[5,6],[7,8]])
A*B
"""
Explanation: With numpy it is possible to perform vectorized operations, much as in MATLAB, but the syntax can differ. Watch out for the pitfalls, for example:
End of explanation
"""
np.dot(A,B)
np.transpose(A)
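# Illustrative aside: in Python 3.5+ the @ operator gives the same matrix
# product as np.dot for these 2-D arrays (worth checking on your version).
A @ B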
"""
Explanation: To perform matrix multiplication in the usual sense, we use
End of explanation
"""
import math
import numpy
import scipy
print(math.pi, "from the math module")
print(numpy.pi, "from the numpy module")
print(scipy.pi, "from the scipy module")
import math as matematica
matematica.pi
import math as biblioteca
biblioteca.pi
"""
Explanation: To learn more:
Quickstart: https://docs.scipy.org/doc/numpy-dev/user/quickstart.html
Differences between NumPy and MATLAB: https://docs.scipy.org/doc/numpy-dev/user/numpy-for-matlab-users.html
Note: namespaces
You may have noticed that when running commands we refer to the module in which the command is defined. For example, to use the transpose function from the numpy module, which we imported as np, we use the command
np.transpose(A)
which means that the transpose function we are using "lives" inside the numpy module.
Example:
End of explanation
"""
import matplotlib.pyplot as plt
plt.plot([1,2,3,4])
plt.ylabel('meus números')
plt.show()
"""
Explanation: matplotlib
The matplotlib library, and in particular its matplotlib.pyplot module, allows the user to make plots with a syntax similar to that of MATLAB.
Each function performs an action on a figure. For example, to make a simple plot, we use something like
End of explanation
"""
%matplotlib inline
"""
Explanation: Since we want the plots to be embedded in the notebook, we can use the line
End of explanation
"""
%matplotlib notebook
"""
Explanation: or
End of explanation
"""
%matplotlib inline
"""
Explanation: Warning: As we saw in class, on some installations the notebook option does not work correctly; whenever you are working in notebooks I recommend always using
%matplotlib inline
End of explanation
"""
import matplotlib.pyplot as plt
plt.plot([1,2,3,4])
plt.ylabel('meus números')
plt.show()
"""
Explanation: Now we can use MATLAB-style commands:
End of explanation
"""
plt.plot([1, 2, 3, 4], [1, 4, 9, 16])
plt.show()
"""
Explanation: Remark: The examples below are based on the tutorial http://matplotlib.org/users/pyplot_tutorial.html
To draw a plot, we need to pass matplotlib a list of values for the horizontal axis and another list of values (with the same length as the first) for the vertical axis; connecting these points, matplotlib draws the plot. If we pass only one list to matplotlib, it concludes that this list contains the values to be placed on the vertical axis.
If we want to choose the values for the horizontal axis, we use matplotlib.pyplot.plot(x, y):
End of explanation
"""
fig = plt.figure()
plt.plot([1, 2, 3, 4], [1, 4, 9, 16])
"""
Explanation: Note that, for the plot to end up in the correct window, we need to use a slightly different syntax than we would in MATLAB:
End of explanation
"""
fig = plt.figure()
plt.plot([1,2,3,4], [1,4,9,16], 'ro')
plt.plot([1,2,3,4],[1,4,9,16], 'g', linewidth=2)
plt.plot([1,3,5,7,9], linestyle="dashed", linewidth=4)
import numpy as np
import matplotlib.pyplot as plt
# t goes from 0 to 5, in steps of 0.2 units
t = np.arange(0., 5., 0.2)
fig = plt.figure()
# red dashes for the plot of t;
# blue squares for the plot of t**2;
# green triangles for the plot of t**3
plt.plot(t, t, 'r--', t, t**2, 'bs', t, t**3, 'g^')
plt.show()
fig = plt.figure()
plt.plot([1,3,2,7,5,9,4], 'y*', markersize=20)
"""
Explanation: We can also select the line style and the color, using exactly the same syntax as MATLAB:
End of explanation
"""
fig = plt.figure()
plt.plot([1,3,2,7,5,9,4], 'y*', markersize=20)
plt.axis([-1,8,0,10])
"""
Explanation: The axis command lets us adjust the horizontal and vertical limits of the plot:
End of explanation
"""
fig, axes = plt.subplots()
pontos = [1,3,2,7,5,9,4]
axes.plot(pontos, 'y*', markersize=20)
axes.plot(pontos, 'b', linestyle="dotted")
axes.axis([-1,8,0,10])
"""
Explanation: Multiple plots in the same window
In pyplot, just as in MATLAB, we have the concepts of figure and axes. All plotting commands apply to the current axes. The function gca() returns a reference to the current axes and gcf() returns the current figure.
For example, we can use
End of explanation
"""
import numpy as np
import matplotlib.pyplot as plt
def f(t):
return np.exp(-t) * np.cos(2*np.pi*t)
t1 = np.arange(0.0, 5.0, 0.1)
t2 = np.arange(0.0, 5.0, 0.02)
plt.figure(1)
plt.subplot(211)
plt.plot(t1, f(t1), 'bo', t2, f(t2), 'k')
plt.subplot(212)
plt.plot(t2, np.cos(2*np.pi*t2), 'r--')
plt.show()
"""
Explanation: Now let's create a window with two subplots:
End of explanation
"""
import matplotlib.pyplot as plt
plt.figure(1) # first figure
plt.subplot(211) # first subplot in the first figure
plt.plot([1, 2, 3])
plt.subplot(212) # second subplot in the first figure
plt.plot([4, 5, 6])
plt.figure(2) # second figure
plt.plot([4, 5, 6]) # creates a subplot(111) automatically
plt.figure(1) # in figure 1, subplot(212) is still the current one
plt.subplot(211) # make subplot(211) the current subplot
plt.title('Easy as 1, 2, 3') # update the title of subplot(211)
"""
Explanation: The subplot() command specifies (numrows, numcols, fignum), where fignum runs from 1 to numrows*numcols. (The commas are optional if numrows*numcols < 10, so subplot(211) is the same as subplot(2, 1, 1).)
End of explanation
"""
import numpy as np
import matplotlib.pyplot as plt
mu, sigma = 100, 15
x = mu + sigma * np.random.randn(10000)
fig = plt.figure(1)
n, bins, patches = plt.hist(x, 50, normed=1, facecolor='g', alpha=0.75)
plt.xlabel('Domínio')
plt.ylabel('Probabilidade')
plt.title('Histograma')
plt.text(60, .025, r'$\mu=100,\ \sigma=15$')
plt.axis([40, 160, 0, 0.03])
plt.grid(True)
plt.show()
import numpy as np
import matplotlib.pyplot as plt
mu, sigma = 100, 15
x = mu + sigma * np.random.randn(10000)
fig = plt.figure(1)
n, bins, patches = plt.hist(x, 50, normed=1, facecolor='g', alpha=0.75)
plt.xlabel('Domínio')
plt.ylabel('Probabilidade')
plt.title('Histograma')
plt.text(60, .025, r'$\mu=100,\ \sigma=15$')
plt.axis([40, 160, 0, 0.03])
plt.grid(True)
t = plt.xlabel('Dados', fontsize=14, color='red')
plt.show()
"""
Explanation: Adding annotations
The text() command can be used to add text at arbitrary positions, while xlabel(), ylabel() and title() are used to add text at predefined positions:
End of explanation
"""
import numpy as np
import matplotlib.pyplot as plt
fig, ax = plt.subplots()
t = np.arange(0.0, 5.0, 0.01)
s = np.cos(2*np.pi*t)
line, = ax.plot(t, s, lw=2)
ax.annotate('máximo local', xy=(2, 1), xytext=(3, 1.5),
arrowprops=dict(facecolor='black', shrink=0.05),
)
plt.ylim(-2,2)
"""
Explanation: (The r before the title string is important because it tells Python not to try to interpret what follows)
End of explanation
"""
import numpy as np
import matplotlib.pyplot as plt
# make up data in the open interval (0, 1)
y = np.random.normal(loc=0.5, scale=0.4, size=1000)
y = y[(y > 0) & (y < 1)]
y.sort()
x = np.arange(len(y))
# create plots with several different scales
plt.figure()
# linear
plt.subplot(221)
plt.plot(x, y)
plt.yscale('linear')
plt.title('linear')
plt.grid(True)
# log
plt.subplot(222)
plt.plot(x, y)
plt.yscale('log')
plt.title('log')
plt.grid(True)
# symmetric log
plt.subplot(223)
plt.plot(x, y - y.mean())
plt.yscale('symlog', linthreshy=0.05)
plt.title('symlog')
plt.grid(True)
# logit
plt.subplot(224)
plt.plot(x, y)
plt.yscale('logit')
plt.title('logit')
plt.grid(True)
"""
Explanation: We can select the scale of a plot using
plt.xscale('log')
for example.
End of explanation
"""
|
omimo/xRBM
|
examples/01-RBM-MNIST.ipynb
|
mit
|
import numpy as np
import tensorflow as tf
%matplotlib inline
import matplotlib.pyplot as plt
from IPython import display
#Uncomment the below lines if you didn't install xRBM using pip and want to use the local code instead
#import sys
#sys.path.append('../')
"""
Explanation: Tutorial 1: Training an RBM on MNIST Dataset
In this tutorial, we train a simple RBM on the MNIST dataset and visualize its learned filters.
The Imports
First, we import Tensorflow and numpy packages, as well as the packages we need to visualize the learned filters
End of explanation
"""
import xrbm.models
import xrbm.train
import xrbm.losses
from xrbm.utils.vizutils import *
"""
Explanation: We import the xrbm.models module, which contains the RBM model class, as well as the xrbm.train module, which contains the CD-k approximation algorithm that we use for training our RBM.
End of explanation
"""
from tensorflow.examples.tutorials.mnist import input_data
data_sets = input_data.read_data_sets('MNIST_data', False)
training_data = data_sets.train.images
"""
Explanation: Get the Training Data
We use the MNIST dataset that is provided by the Tensorflow package:
End of explanation
"""
num_vis = training_data[0].shape[0] #=784
num_hid = 200
learning_rate = 0.1
batch_size = 100
training_epochs = 15
"""
Explanation: Set Up the Parameters
The number of visible units of the RBM equals the number of dimensions of the training data. As each image in MNIST is 28x28, the total number of pixels is 784.
We use 200 hidden units, choose a learning rate of 0.1, and set to train the model for 15 epochs.
End of explanation
"""
# Let's reset the tensorflow graph in case we want to rerun the code
tf.reset_default_graph()
rbm = xrbm.models.RBM(num_vis=num_vis, num_hid=num_hid, name='rbm_mnist')
"""
Explanation: Create an RBM model with the parameters
We create an RBM model, and set the number of visible and hidden units. We can also give it a name.
End of explanation
"""
batch_idxs = np.random.permutation(range(len(training_data)))
n_batches = len(batch_idxs) // batch_size
"""
Explanation: We create the mini-batches:
End of explanation
"""
batch_data = tf.placeholder(tf.float32, shape=(None, num_vis))
cdapproximator = xrbm.train.CDApproximator(learning_rate=learning_rate)
train_op = cdapproximator.train(rbm, vis_data=batch_data)
reconstructed_data,_,_,_ = rbm.gibbs_sample_vhv(batch_data)
xentropy_rec_cost = xrbm.losses.cross_entropy(batch_data, reconstructed_data)
"""
Explanation: We create a placeholder for the mini-batch data during training.
We use the CD-k algorithm for training the RBM. For this, we create an instance of the CDApproximator from the xrbm.train module and pass the learning rate to it.
We then define our training op using the CDApproximator's train method, passing the RBM model and the placeholder for the data.
In order to monitor the training process, we calculate the reconstruction cost of the model at each epoch, using the xentropy_rec_cost op defined above.
End of explanation
"""
# Create figure first so that we use the same one to draw the filters on during the training
fig = plt.figure(figsize=(12,8))
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
for epoch in range(training_epochs):
for batch_i in range(n_batches):
# Get just minibatch amount of data
idxs_i = batch_idxs[batch_i * batch_size:(batch_i + 1) * batch_size]
# Run the training step
sess.run(train_op, feed_dict={batch_data: training_data[idxs_i]})
reconstruction_cost = sess.run(xentropy_rec_cost, feed_dict={batch_data: training_data})
W = rbm.W.eval().transpose()
filters_grid = create_2d_filters_grid(W, filter_shape=(28,28), grid_size=(10, 20), grid_gap=(1,1))
title = ('Epoch %i / %i | Reconstruction Cost = %f'%
(epoch+1, training_epochs, reconstruction_cost))
plt.title(title)
plt.imshow(filters_grid, cmap='gray')
display.clear_output(wait=True)
display.display(fig)
"""
Explanation: Finally, we are ready to run everything and see the results:
End of explanation
"""
|
nilmtk/nilmtk
|
docs/manual/user_guide/siteonlyapi_tutorial.ipynb
|
apache-2.0
|
from nilmtk.dataset_converters.caxe import convert_caxe
convert_caxe('ac_seconds4.csv')
"""
Explanation: Disaggregate your Home/Building Mains Meter Data
This notebook demonstrates the use of siteonlyapi - a new NILMTK interface which is a modification of NILMTK's ExperimentAPI. It allows NILMTK users to estimate their home's or building's energy demand for different potential appliances.
Let us start with a very simple experiment to demonstrate the use of this API. It shows how the user can convert their meter data into the proper REDD format and call the API to disaggregate the energy into appliance demands based on the training set.
Convert meter data into proper format
End of explanation
"""
from nilmtk.disaggregate import Hart85
from nilmtk.disaggregate import Mean
from nilmtk.disaggregate import CO
from nilmtk.disaggregate import FHMMExact
"""
Explanation: Import the different disaggregation algorithms
End of explanation
"""
experiment1 = {
'power': {'mains': ['active'],'appliance': ['active']},
'sample_rate': 60,
'appliances': ['air conditioner','fridge','washing machine','clothes iron','television'],
'methods': {"CO":CO({}),"FHMM":FHMMExact({'num_of_states':2}),'Mean':Mean({}),'Hart':Hart85({})},
'site_only' : True,
'train': {
'datasets': {
'iAWE': {
'path': './iAWE.h5',
'buildings': {
1: {
'start_time': '2013-07-13',
'end_time': '2013-08-04'
}
}
}
}
},
'test': {
'datasets': {
'CAXE': {
'path': './test.h5',
'buildings': {
1: {
'start_time': '2020-08-12',
'end_time': '2020-08-14'
}
}
}
},
'metrics':['rmse']
}
}
from nilmtk.api import API
import warnings
warnings.filterwarnings("ignore")
api_results_experiment_1 = API(experiment1)
df = (api_results_experiment_1.pred_overall['CO'])
"""
Explanation: Here you may change the start and end dates of your test data set, and we enter the values for the different parameters in the dictionary. Since we need multiple appliances, we list the names of all the required appliances in the 'appliances' parameter. We also set site_only to True because we want to disaggregate the site meter data only, without any comparison against submeter data.
End of explanation
"""
df
"""
Explanation: Getting Predictions Dataframe
End of explanation
"""
|
MasterRobotica-UVic/Control-and-Actuators
|
proportional_sum_delta.ipynb
|
gpl-3.0
|
lmbda = np.array([0.25 + 1j*0.433, 0.25 - 1j*0.433])
circlePlot(lmbda)
"""
Explanation: Oscillations and complex roots
System:
$8y[n] = -2y[n-2] + 4y[n-1] + 5x[n-1]$
The roots of the characteristic equation of the homogeneous part, $8\lambda^2 - 4\lambda + 2 = 0$, are:
$\lambda = 0.25 \pm j0.433$
End of explanation
"""
def complexSystem(n):
if n == 0: # first initial condition
return 0
elif n == 1: # second initial condition
return 1
else:
return -(2/8)*complexSystem(n-2) + (4/8)*complexSystem(n-1)
N = 30
z = np.linspace(0,N,N)
for i in range(N):
z[i] = complexSystem(i)
plt.figure()
plotSequence(z)
"""
Explanation: Consider initial conditions as:
$\begin{matrix}
y[0] &=& 0 \\
y[1] &=& 1
\end{matrix}$
Let's see what the plot gives.
End of explanation
"""
def ProportionalControlSecondOrderCar(kp):
# start with the robot at 3m from beacon
d0 = 1
d1 = 1
# This initializes the sequence
n = 1000
d = np.zeros(n)
for i in range(n):
if i == 0: # first initial condition
d[i] = d0
if i == 1: # second initial condition
d[i] = d1
if i > 1:
d[i] = 2*d[i-1] - (1+0.0001*kp)*d[i-2]
# Plot the sequence
plt.figure()
plotSequence(d)
return
interact(ProportionalControlSecondOrderCar, kp=(0,100,0.1))
def PDControlThirdOrderCar(kp,kd):
# start with the robot at 3m from beacon
d0 = 1
d1 = 1
d2 = 1
# This initializes the sequence
n = 2000
d = np.zeros(n)
for i in range(n):
if i == 0: # first initial condition
d[i] = d0
if i == 1: # second initial condition
d[i] = d1
if i == 2: # third initial condition
d[i] = d2
if i > 2:
d[i] = 2*d[i-1] - (1+0.0001*(kp + kd/0.01))*d[i-2] + (0.0001*kd/0.01)*d[i-3]
# Plot the sequence
plt.figure()
plotSequence(d)
return
interact(PDControlThirdOrderCar, kp=(0,100,0.1), kd=(0,10,0.1))
lmbdacar = np.array([0.99749 + 1j*0.0316, 0.99749 - 1j*0.0316])
circlePlot(lmbdacar)
"""
Explanation: The car with distance measures
End of explanation
"""
|
davidbrough1/pymks
|
notebooks/localization_elasticity_polycrystal_hex_3D.ipynb
|
mit
|
import pymks
%matplotlib inline
%load_ext autoreload
%autoreload 2
import numpy as np
import matplotlib.pyplot as plt
"""
Explanation: Linear Elasticity in 3D for Polycrystalline Microstructures
Authors: Noah Paulson, Andrew Medford, David Brough
Introduction
This example demonstrates the use of MKS to predict strain fields in a polycrystalline sample. The Generalized Spherical Harmonic (GSH) basis is introduced and used for a material with hexagonal crystal symmetry. The effect of different levels of truncation of the GSH basis functions is examined, as well as the effect of selecting an incorrect crystal symmetry.
Modeling with MKS
Obtaining Data for MKS Calibration and Validation
End of explanation
"""
from pymks_share import DataManager
manager = DataManager('pymks.me.gatech.edu')
X, y = manager.fetch_data('random hexagonal orientations')
print(X.shape)
print(y.shape)
"""
Explanation: To start off we need to obtain data from somewhere. In order to make things easy the pymks_share package is used to import data.
End of explanation
"""
from pymks.tools import draw_microstructure_strain
n = X.shape[1]
center = (n-1) / 2
draw_microstructure_strain(X[0, center, :, :, 0], y[0, center])
"""
Explanation: The X variable contains a set of 3 Bunge Euler angles at each spatial point, corresponding to the crystal orientation. The y variable is the '11' component of the strain tensor as obtained by a finite element simulation ($\epsilon_{xx}$). We can visualize this by plotting a slice of a 3-D microstructure ($\phi_1$ angle only) and its corresponding strain response.
End of explanation
"""
N_cal = 40
X_cal = X[0:N_cal, ...]
X_val = X[N_cal:, ...]
y_cal = y[0:N_cal, ...]
y_val = y[N_cal:, ...]
print(X_cal.shape)
print(X_val.shape)
"""
Explanation: This may not mean much, but trust us that the $\epsilon_{xx}$ field is rather expensive to calculate. In principle we could visualize this in 3 dimensions using a package like mayavi, but for this tutorial we will just look at a single slice down through the center.
In order to ensure that our models are valid, we need to split the data into "calibration" and "validation" sets. The idea here is that we train the model on a subset of N_cal datasets, then test the model on the rest. This is a crude form of "cross validation", and will give us confidence that we have not over-fit the model.
End of explanation
"""
from pymks import MKSLocalizationModel
from pymks.bases import GSHBasis
gsh_hex_basis = GSHBasis(n_states=np.arange(6), domain="hexagonal")
"""
Explanation: We can see that we have 40 calibration sets, and 10 validation sets. Ideally we would have a lot more data to validate the model, but at least the 10 should give us an idea of how transferable the model is.
Next we need to set up the MKS "localization model" which will be used to compute all the parameters we need for the machine to "learn" how the input microstructure field is related to the output strain field. In order to capture the orientation dependence we are going to use a basis set of "generalized spherical harmonics". A quick Google search of "generalized spherical harmonics" will tell you that these are pretty trippy functions (nearly all the results are from technical journals!).
In the GSH basis n_states refers to the set of basis functions we want to work with. In this example we want to use the first 6 basis functions, so we assign the indices 0-5 (np.arange(6)) to n_states (we could alternately pass the integer 6 to n_states and PyMKS would automatically know to use the first 6 basis functions!). If we only wanted the 5th basis function we would simply pass n_states a list with only one entry: n_states=[5].
We also need to specify the symmetry we want (and the symmetric domain) of our basis function. PyMKS makes this very easy; we can simply give domain a string specifying the desired crystal symmetry. For example, passing 'hexagonal' specifies a hexagonal crystal symmetry, while passing 'cubic' specifies cubic symmetry. If we pass "triclinic", or don't define the domain at all the non-symmetrized version of the GSH basis is used.
Calibrating First Order Influence Coefficients
End of explanation
"""
print(gsh_hex_basis.basis_indices)
"""
Explanation: Now we have selected the basis functions, perhaps we want to know more about what we've selected. Let's ask for the l, m and n indices of the GSH basis functions we've selected (Note that this is an advanced feature and may only be useful for the most seasoned materials scientists!).
End of explanation
"""
model = MKSLocalizationModel(basis=gsh_hex_basis)
model.fit(X_cal, y_cal)
"""
Explanation: Now all of the complexity of the GSH basis set will be taken care of by pyMKS from here on out. We just need to fit the model:
End of explanation
"""
from pymks.tools import draw_coeff
coef_ = model.coef_
draw_coeff(np.real(coef_[:,center, :, :]), figsize=(2, 3))
"""
Explanation: and then we can look at the "influence coefficients" which capture the connection between the input and output fields. Notice that there are 6 (complex valued) influence coefficients, because we had 6 "states" in the basis set. In other words, there are 2 fields of influence coefficients for each basis set. Again, we are only looking at a slice through the center in order to avoid the complexities of visualizing 3D data.
End of explanation
"""
draw_coeff(np.imag(coef_[:,center, :, :]), figsize=(2, 3))
"""
Explanation: We also want to plot the imaginary components of the influence coefficients; you can't forget about these when using the GSH basis!
End of explanation
"""
y_predict = model.predict(X_val)
"""
Explanation: We can see that the coefficients for some basis sets have significant values, while others are mostly zero. This means that in principle we could probably describe the system with fewer basis states. We also notice that when there are non-zero components, they are typically centered near zero. This is intuitive, since it tells us that the elastic response of the material is local, as we would expect (and as can be seen in the other elasticity tutorials).
Prediction of Strain Fields for Validation Microstructures
Now we want to use these coefficients to predict the response of the validation set, and ensure that the results are in line with the outputs of the full simulation.
End of explanation
"""
from pymks.tools import draw_strains_compare
draw_strains_compare(y_val[0, center], y_predict[0, center])
"""
Explanation: First let's simply compare slices of the $\epsilon_{xx}$ strain fields for one of our validation microstructures
End of explanation
"""
gsh_hex_basis = GSHBasis(n_states=np.arange(20), domain='hexagonal')
model = MKSLocalizationModel(basis=gsh_hex_basis)
model.fit(X_cal, y_cal)
y_predict = model.predict(X_val)
draw_strains_compare(y_val[0, center], y_predict[0, center])
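# A quick quantitative check (not part of the original tutorial): the mean squared
# error between the simulated and the MKS-predicted strain fields over the whole
# validation set.
print('validation MSE:', np.mean((y_val - y_predict) ** 2))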
"""
Explanation: So it looks like the MKS is working pretty well in this case, but it is worth seeing if we can do even better.
Improving the MKS Results
The next thing to try is to include even more basis functions:
End of explanation
"""
from pymks.tools import draw_coeff
coeff = model.coef_
draw_coeff(np.real(coeff[:,center, :, :]), figsize=(4, 5))
draw_coeff(np.imag(coeff[:,center, :, :]), figsize=(4, 5))
"""
Explanation: Clearly the results are now very good. You might ask whether we have too few or too many basis functions. First, let's look at the influence coefficients and see what is going on.
End of explanation
"""
gsh_hex_basis = GSHBasis(n_states=np.arange(15), domain='hexagonal')
model = MKSLocalizationModel(basis=gsh_hex_basis)
model.fit(X_cal, y_cal)
y_predict = model.predict(X_val)
draw_strains_compare(y_val[0, center], y_predict[0, center])
"""
Explanation: If we look carefully at the influence coefficients we notice that they appear to be identically zero for the 15th basis function and beyond. If we wanted to be thorough we would want to check the influence coefficients for even more basis functions, but for the purposes of this example we can be satisfied that we only need the first 15.
Let's redo the study once more with only the first 15 basis functions and hexagonal symmetry.
End of explanation
"""
gsh_cube_basis = GSHBasis(n_states=np.arange(15), domain='cubic')
model = MKSLocalizationModel(basis=gsh_cube_basis)
model.fit(X_cal, y_cal)
y_predict = model.predict(X_val)
draw_strains_compare(y_val[0, center], y_predict[0, center])
"""
Explanation: As we expected the results look great, probably even better than the previous iteration.
Selection of the Wrong Crystal Symmetry!
Finally let's take a look at what happens when we choose the wrong crystal symmetry.
End of explanation
"""
|
RobinCPC/algorithm-practice
|
Basic/LinkedList.ipynb
|
mit
|
class ListNode:
def __init__(self, val):
self.val = val
self.next = None
# in python next is a reversed word
def reverse(self, head):
prev = None
head = self
while head:
temp = head.next
head.next = prev
prev = head
head = temp
return prev
lk1 = ListNode(5)
lk1.next = ListNode(4)
lk1.next.next = ListNode(3)
print(lk1.val)
print(lk1.next.val)
print(lk1.next.next.val)
rev_lk = lk1.reverse(lk1)
print('reversed linked list')
print(rev_lk.val)
print(rev_lk.next.val)
print(rev_lk.next.next.val)
"""
Explanation: Singly Linked List
Python
End of explanation
"""
class DListNode:
def __init__(self, val):
self.val = val
self.prev = self.next = None
def reverse(self, head):
curt = None
while head:
curt = head
head = curt.next
curt.next = curt.prev
curt.prev = head
return curt
"""
Explanation: C++
``` c++
#include <iostream>
struct ListNode
{
int val;
ListNode *next;
ListNode(int val, ListNode *next=NULL):val(val), next(next){};
};
ListNode *ReverseList(ListNode *head)
{
ListNode *pre = NULL, *tmp;
while(head)
{
tmp = head->next;
head->next = pre;
pre = head;
head = tmp;
}
return pre;
}
```
Dual Linked List
Python
End of explanation
"""
|
jasontlam/snorkel
|
tutorials/intro/Intro_Tutorial_1.ipynb
|
apache-2.0
|
%load_ext autoreload
%autoreload 2
%matplotlib inline
import os
# TO USE A DATABASE OTHER THAN SQLITE, USE THIS LINE
# Note that this is necessary for parallel execution amongst other things...
# os.environ['SNORKELDB'] = 'postgres:///snorkel-intro'
from snorkel import SnorkelSession
session = SnorkelSession()
# Here, we just set how many documents we'll process for automatic testing- you can safely ignore this!
n_docs = 500 if 'CI' in os.environ else 2591
"""
Explanation: Intro. to Snorkel: Extracting Spouse Relations from the News
In this tutorial, we will walk through the process of using Snorkel to identify mentions of spouses in a corpus of news articles. The tutorial is broken up into 3 notebooks, each covering a step in the pipeline:
1. Preprocessing
2. Training
3. Evaluation
Part I: Preprocessing
In this notebook, we preprocess several documents using Snorkel utilities, parsing them into a simple hierarchy of component parts of our input data, which we refer to as contexts. We'll also create candidates out of these contexts, which are the objects we want to classify, in this case, possible mentions of spouses. Finally, we'll load some gold labels for evaluation.
All of this preprocessed input data is saved to a database. (Connection strings can be specified by setting the SNORKELDB environment variable. In Snorkel, if no database is specified, then a SQLite database at ./snorkel.db is created by default--so no setup is needed here!)
Initializing a SnorkelSession
First, we initialize a SnorkelSession, which manages a connection to a database automatically for us, and will enable us to save intermediate results. If we don't specify any particular database (see commented-out code below), then it will automatically create a SQLite database in the background for us:
End of explanation
"""
from snorkel.parser import TSVDocPreprocessor
doc_preprocessor = TSVDocPreprocessor('data/articles.tsv', max_docs=n_docs)
"""
Explanation: Loading the Corpus
Next, we load and pre-process the corpus of documents.
Configuring a DocPreprocessor
We'll start by creating a TSVDocPreprocessor to read in the documents, which are stored in a tab-separated value format as pairs of document names and text.
End of explanation
"""
from snorkel.parser.spacy_parser import Spacy
from snorkel.parser import CorpusParser
corpus_parser = CorpusParser(parser=Spacy())
%time corpus_parser.apply(doc_preprocessor, count=n_docs)
"""
Explanation: Running a CorpusParser
We'll use Spacy, an NLP preprocessing tool, to split our documents into sentences and tokens, and provide named entity annotations.
End of explanation
"""
from snorkel.models import Document, Sentence
print("Documents:", session.query(Document).count())
print("Sentences:", session.query(Sentence).count())
"""
Explanation: We can then use simple database queries (written in the syntax of SQLAlchemy, which Snorkel uses) to check how many documents and sentences were parsed:
End of explanation
"""
from snorkel.models import candidate_subclass
Spouse = candidate_subclass('Spouse', ['person1', 'person2'])
"""
Explanation: Generating Candidates
The next step is to extract candidates from our corpus. A Candidate in Snorkel is an object for which we want to make a prediction. In this case, the candidates are pairs of people mentioned in sentences, and our task is to predict which pairs are described as married in the associated text.
Defining a Candidate schema
We now define the schema of the relation mention we want to extract (which is also the schema of the candidates). This must be a subclass of Candidate, and we define it using a helper function. Here we'll define a binary spouse relation mention which connects two Span objects of text. Note that this function will create the table in the database backend if it does not exist:
End of explanation
"""
from snorkel.candidates import Ngrams, CandidateExtractor
from snorkel.matchers import PersonMatcher
ngrams = Ngrams(n_max=7)
person_matcher = PersonMatcher(longest_match_only=True)
cand_extractor = CandidateExtractor(Spouse, [ngrams, ngrams], [person_matcher, person_matcher])
"""
Explanation: Writing a basic CandidateExtractor
Next, we'll write a basic function to extract candidate spouse relation mentions from the corpus. The Spacy parser we used performs named entity recognition for us.
We will extract Candidate objects of the Spouse type by identifying, for each Sentence, all pairs of n-grams (up to 7-grams) that were tagged as people. (An n-gram is a span of text made up of n tokens.) We do this with three objects:
A ContextSpace defines the "space" of all candidates we even potentially consider; in this case we use the Ngrams subclass, and look for all n-grams up to 7 words long
A Matcher heuristically filters the candidates we use. In this case, we just use a pre-defined matcher which looks for all n-grams tagged by Spacy as "PERSON". The keyword argument longest_match_only means that we'll skip n-grams contained in other n-grams.
A CandidateExtractor combines this all together!
End of explanation
"""
from snorkel.models import Document
from util import number_of_people
docs = session.query(Document).order_by(Document.name).all()
train_sents = set()
dev_sents = set()
test_sents = set()
for i, doc in enumerate(docs):
for s in doc.sentences:
if number_of_people(s) <= 5:
if i % 10 == 8:
dev_sents.add(s)
elif i % 10 == 9:
test_sents.add(s)
else:
train_sents.add(s)
"""
Explanation: Next, we'll split up the documents into train, development, and test splits; and collect the associated sentences.
Note that we'll filter out a few sentences that mention more than five people. Such long lists of names are unlikely to contain spouses.
End of explanation
"""
%%time
for i, sents in enumerate([train_sents, dev_sents, test_sents]):
cand_extractor.apply(sents, split=i)
print("Number of candidates:", session.query(Spouse).filter(Spouse.split == i).count())
"""
Explanation: Finally, we'll apply the candidate extractor to the three sets of sentences. The results will be persisted in the database backend.
End of explanation
"""
from util import load_external_labels
%time missed = load_external_labels(session, Spouse, annotator_name='gold')
"""
Explanation: Loading Gold Labels
Finally, we'll load gold labels for development and evaluation. Even though Snorkel is designed to create labels for data, we still use gold labels to evaluate the quality of our models. Fortunately, we need far less labeled data to evaluate a model than to train it.
End of explanation
"""
|
deepmind/dm-haiku
|
examples/haiku_lstms.ipynb
|
apache-2.0
|
#@title Full license text
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Explanation: Copyright 2020 DeepMind Technologies Limited. All Rights Reserved.
Licensed under the Apache License, Version 2.0 (the "License");
End of explanation
"""
!pip install git+https://github.com/deepmind/dm-haiku
import functools
import math
from typing import Tuple, TypeVar
import warnings
import haiku as hk
import jax
import jax.numpy as jnp
import optax
import numpy as np
import pandas as pd
import plotnine as gg
T = TypeVar('T')
Pair = Tuple[T, T]
gg.theme_set(gg.theme_bw())
warnings.filterwarnings('ignore')
"""
Explanation: LSTMs in Haiku
Haiku is a simple neural network library for JAX.
This notebook walks through a simple LSTM in JAX with Haiku.
For first-time Haiku users, we recommend that you first check out our Quickstart and MNIST examples.
Setup
End of explanation
"""
def sine_seq(
phase: float,
seq_len: int,
samples_per_cycle: int,
) -> Pair[np.ndarray]:
"""Returns x, y in [T, B] tensor."""
t = np.arange(seq_len + 1) * (2 * math.pi / samples_per_cycle)
t = t.reshape([-1, 1]) + phase
sine_t = np.sin(t)
return sine_t[:-1, :], sine_t[1:, :]
def generate_data(
seq_len: int,
train_size: int,
valid_size: int,
) -> Pair[Pair[np.ndarray]]:
phases = np.random.uniform(0., 2 * math.pi, [train_size + valid_size])
all_x, all_y = sine_seq(phases, seq_len, 3 * seq_len / 4)
all_x = np.expand_dims(all_x, -1)
all_y = np.expand_dims(all_y, -1)
train_x = all_x[:, :train_size]
train_y = all_y[:, :train_size]
valid_x = all_x[:, train_size:]
valid_y = all_y[:, train_size:]
return (train_x, train_y), (valid_x, valid_y)
class Dataset:
"""An iterator over a numpy array, revealing batch_size elements at a time."""
def __init__(self, xy: Pair[np.ndarray], batch_size: int):
self._x, self._y = xy
self._batch_size = batch_size
self._length = self._x.shape[1]
self._idx = 0
if self._length % batch_size != 0:
msg = 'dataset size {} must be divisible by batch_size {}.'
raise ValueError(msg.format(self._length, batch_size))
def __next__(self) -> Pair[np.ndarray]:
start = self._idx
end = start + self._batch_size
x, y = self._x[:, start:end], self._y[:, start:end]
if end >= self._length:
end = end % self._length
assert end == 0 # Guaranteed by ctor assertion.
self._idx = end
return x, y
TRAIN_SIZE = 2 ** 14
VALID_SIZE = 128
BATCH_SIZE = 8
SEQ_LEN = 64
train, valid = generate_data(SEQ_LEN, TRAIN_SIZE, VALID_SIZE)
# Plot an observation/target pair.
df = pd.DataFrame({'x': train[0][:, 0, 0], 'y': train[1][:, 0, 0]}).reset_index()
df = pd.melt(df, id_vars=['index'], value_vars=['x', 'y'])
plot = gg.ggplot(df) + gg.aes(x='index', y='value', color='variable') + gg.geom_line()
plot.draw()
train_ds = Dataset(train, BATCH_SIZE)
valid_ds = Dataset(valid, BATCH_SIZE)
del train, valid # Don't leak temporaries.
"""
Explanation: Generating Data
In this notebook, we generate many sine waves (of the same period), and try to predict the next value in the wave based on its previous values.
For simplicity, we generate static-sized datasets and wrap them with an iterator-based API.
End of explanation
"""
def unroll_net(seqs: jnp.ndarray):
"""Unrolls an LSTM over seqs, mapping each output to a scalar."""
# seqs is [T, B, F].
core = hk.LSTM(32)
batch_size = seqs.shape[1]
outs, state = hk.dynamic_unroll(core, seqs, core.initial_state(batch_size))
# We could include this Linear as part of the recurrent core!
# However, it's more efficient on modern accelerators to run the linear once
# over the entire sequence than once per sequence element.
return hk.BatchApply(hk.Linear(1))(outs), state
model = hk.transform(unroll_net)
def train_model(train_ds: Dataset, valid_ds: Dataset) -> hk.Params:
"""Initializes and trains a model on train_ds, returning the final params."""
rng = jax.random.PRNGKey(428)
opt = optax.adam(1e-3)
@jax.jit
def loss(params, x, y):
pred, _ = model.apply(params, None, x)
return jnp.mean(jnp.square(pred - y))
@jax.jit
def update(step, params, opt_state, x, y):
l, grads = jax.value_and_grad(loss)(params, x, y)
grads, opt_state = opt.update(grads, opt_state)
params = optax.apply_updates(params, grads)
return l, params, opt_state
# Initialize state.
sample_x, _ = next(train_ds)
params = model.init(rng, sample_x)
opt_state = opt.init(params)
for step in range(2001):
if step % 100 == 0:
x, y = next(valid_ds)
print("Step {}: valid loss {}".format(step, loss(params, x, y)))
x, y = next(train_ds)
train_loss, params, opt_state = update(step, params, opt_state, x, y)
if step % 100 == 0:
print("Step {}: train loss {}".format(step, train_loss))
return params
trained_params = train_model(train_ds, valid_ds)
"""
Explanation: Training an LSTM
To train the LSTM, we define a Haiku function which unrolls the LSTM over the input sequence, generating predictions for all output values. The LSTM always starts with its initial state at the start of the sequence.
The Haiku function is then transformed into a pure function through hk.transform, and is trained with Adam on an L2 prediction loss.
End of explanation
"""
def plot_samples(truth: np.ndarray, prediction: np.ndarray) -> gg.ggplot:
assert truth.shape == prediction.shape
df = pd.DataFrame({'truth': truth.squeeze(), 'predicted': prediction.squeeze()}).reset_index()
df = pd.melt(df, id_vars=['index'], value_vars=['truth', 'predicted'])
plot = (
gg.ggplot(df)
+ gg.aes(x='index', y='value', color='variable')
+ gg.geom_line()
)
return plot
# Grab a sample from the validation set.
sample_x, _ = next(valid_ds)
sample_x = sample_x[:, :1] # Shrink to batch-size 1.
# Generate a prediction, feeding in ground truth at each point as input.
predicted, _ = model.apply(trained_params, None, sample_x)
plot = plot_samples(sample_x[1:], predicted[:-1])
plot.draw()
del sample_x, predicted
# Typically: the beginning of the predictions are a bit wonky, but the curve
# quickly smoothes out.
"""
Explanation: Sampling
The point of training models is so that they can make predictions! How can we generate predictions with the trained model?
If we're allowed to feed in the ground truth, we can just run the original model's apply function.
End of explanation
"""
def autoregressive_predict(
trained_params: hk.Params,
context: jnp.ndarray,
seq_len: int,
):
"""Given a context, autoregressively generate the rest of a sine wave."""
ar_outs = []
context = jax.device_put(context)
for _ in range(seq_len - context.shape[0]):
full_context = jnp.concatenate([context] + ar_outs)
outs, _ = jax.jit(model.apply)(trained_params, None, full_context)
# Append the newest prediction to ar_outs.
ar_outs.append(outs[-1:])
# Return the final full prediction.
return outs
sample_x, _ = next(valid_ds)
context_length = SEQ_LEN // 8
# Cut the batch-size 1 context from the start of the sequence.
context = sample_x[:context_length, :1]
# We can reuse params we got from training for inference - as long as the
# declaration order is the same.
predicted = autoregressive_predict(trained_params, context, SEQ_LEN)
plot = plot_samples(sample_x[1:, :1], predicted)
plot += gg.geom_vline(xintercept=len(context), linetype='dashed')
plot.draw()
del predicted
"""
Explanation: If we can't feed in the ground truth (because we don't have it), we can also run the model autoregressively.
End of explanation
"""
def fast_autoregressive_predict_fn(context, seq_len):
"""Given a context, autoregressively generate the rest of a sine wave."""
core = hk.LSTM(32)
dense = hk.Linear(1)
state = core.initial_state(context.shape[1])
# Unroll over the context using `hk.dynamic_unroll`.
# As before, we `hk.BatchApply` the Linear for efficiency.
context_outs, state = hk.dynamic_unroll(core, context, state)
context_outs = hk.BatchApply(dense)(context_outs)
# Now, unroll one step at a time using the running recurrent state.
ar_outs = []
x = context_outs[-1]
for _ in range(seq_len - context.shape[0]):
x, state = core(x, state)
x = dense(x)
ar_outs.append(x)
return jnp.concatenate([context_outs, jnp.stack(ar_outs)])
fast_ar_predict = hk.transform(fast_autoregressive_predict_fn)
fast_ar_predict = jax.jit(fast_ar_predict.apply, static_argnums=3)
# Reuse the same context from the previous cell.
predicted = fast_ar_predict(trained_params, None, context, SEQ_LEN)
# The plots should be equivalent!
plot = plot_samples(sample_x[1:, :1], predicted[:-1])
plot += gg.geom_vline(xintercept=len(context), linetype='dashed')
plot
%timeit autoregressive_predict(trained_params, context, SEQ_LEN)
%timeit fast_ar_predict(trained_params, None, context, SEQ_LEN)
"""
Explanation: Sharing parameters with a different function.
Unfortunately, this is a bit slow - we're doing O(N^2) computation for a sequence of length N.
It'd be better if we could do the autoregressive sampling all at once - but we need to write a new Haiku function for that.
We're in luck - if the Haiku module names match, the same parameters can be used for multiple Haiku functions.
This can be achieved through a combination of two techniques:
If we manually give a unique name to a module, we can ensure that the parameters are directed to the right places.
If modules are instantiated in the same order, they'll have the same names in different functions.
Here, we rely on method #2 to create a fast autoregressive prediction.
End of explanation
"""
|
rhenanbartels/hrv
|
notebooks/Heart Rate Variability analyses using RRi series.ipynb
|
bsd-3-clause
|
import matplotlib.pyplot as plt
plt.rcParams['figure.figsize'] = (10, 6)
"""
Explanation: Analysis of an RRi series registered during REST condition and RECOVERY from maximal effort exercise
End of explanation
"""
from hrv.io import read_from_text
rri = read_from_text("data/08012805.txt")
"""
Explanation: Reading the RRi file
End of explanation
"""
rri.info()
"""
Explanation: Getting some information about the file
End of explanation
"""
fig, ax = rri.plot()
"""
Explanation: The RRi series has 2380 values and spans approximately 20 minutes.
Visual Inspection
End of explanation
"""
from hrv.filters import quotient, moving_median
"""
Explanation: This is an RRi series recorded during a maximal effort exercise test. During the first 180 seconds (3 minutes) the subject is at rest; after this period the exercise test starts and the workload is incremented each minute until the subject reaches fatigue. Following the exercise, there is a recovery period of approximately 600 s (10 minutes).
Filtering
It looks like there is some noise in the RRi signal. Let's try to filter the time series:
End of explanation
"""
fq_rri = quotient(rri)
fig, ax = fq_rri.plot()
"""
Explanation: Quotient Filter
End of explanation
"""
fmm_rri = moving_median(rri, order=5)
fig, ax = fmm_rri.plot()
"""
Explanation: Moving Median
End of explanation
"""
rest_rri = fq_rri.time_range(start=0, end=180)
fig, ax = rest_rri.plot()
"""
Explanation: Both filters removed the spikes, but it seems that the quotient filter preserved the signal, removing only the noise, while the moving_median filter smoothed the whole tachogram. Let's keep the quotient filter results.
Calculate HRV indices during rest
To extract information about the RRi fluctuations during rest, first we need to slice the time series on the first 180 seconds.
End of explanation
"""
from hrv.classical import frequency_domain, time_domain
rest_time_domain = time_domain(rest_rri)
rest_time_domain
"""
Explanation: Time Domain and Frequency Domain during Rest
End of explanation
"""
from hrv.detrend import polynomial_detrend
detrended_rest_rri = polynomial_detrend(rest_rri, degree=3)
fig, ax = detrended_rest_rri.plot()
"""
Explanation: Before extracting Frequency Domain features lets first remove the slow trend from the RRi signal:
End of explanation
"""
detrended_rest_rri.info()
"""
Explanation: Note how the Y-axis is now centered on zero.
End of explanation
"""
rest_freq_domain = frequency_domain(
detrended_rest_rri,
method="welch",
nperseg=64,
noverlap=32,
interp_method="cubic",
window="hanning",
fs=4.0
)
rest_freq_domain
"""
Explanation: Since our rest signal has only 167 points, let's reduce the segment size and the overlap of Welch's method to 64 and 32, respectively.
End of explanation
"""
recovery_rri = rri.time_range(start=rri.time[-1] - 180, end=rri.time[-1]).reset_time()
fig, ax = recovery_rri.plot()
recovery_rri.info()
"""
Explanation: Comparing the HRV during Rest and at the last three minutes of Recovery
End of explanation
"""
recovery_time_domain = time_domain(recovery_rri)
recovery_time_domain
detrended_recovery_rri = polynomial_detrend(recovery_rri, degree=3)
fig, ax = detrended_recovery_rri.plot()
recovery_freq_domain = frequency_domain(
detrended_recovery_rri,
method="welch",
nperseg=64,
noverlap=32,
interp_method="cubic",
window="hanning",
fs=4.0
)
recovery_freq_domain
def compare_indices(ax, cond_1, cond_2, index_name, title, y_label):
ax.bar([0, 1], [cond_1[index_name], cond_2[index_name]], color=["b", "r"])
ax.set_xticks([0, 1])
ax.set_xticklabels(["Rest", "Recovery"])
ax.set(ylabel=y_label)
ax.set(title=title)
fig, ax = plt.subplots(2, 2)
fig.set_size_inches(15, 12)
compare_indices(
ax[0][0],
rest_time_domain,
recovery_time_domain,
"rmssd",
title="Time Domain",
y_label="RMSSD (ms)"
)
compare_indices(
ax[0][1],
rest_time_domain,
recovery_time_domain,
"pnn50",
title="Time Domain",
y_label="pNN50 (%)"
)
compare_indices(
ax[1][0],
rest_freq_domain,
recovery_freq_domain,
"hf",
title="Frequency Domain",
y_label="HF (ms²)"
)
compare_indices(
ax[1][1],
rest_freq_domain,
recovery_freq_domain,
"lf",
title="Frequency Domain",
y_label="LF (ms²)"
)
"""
Explanation: Time Domain and Frequency Domain during Recovery
End of explanation
"""
peak_exercise_rri = rri.time_range(start=400, end=600)
fig, ax = peak_exercise_rri.plot()
"""
Explanation: The figure above depicts the comparison between RMSSD, pNN50, HF, and LF extracted on the Rest (blue) and Recovery (red) periods. The reduced values of these indices in the recovery period might indicate that the vagal activity is, at least, partially suppressed after the maximal effort exercise.
The reduced LF (ms²) measure indicates that the RRi series at the recovery period has fewer overall fluctuations compared to the Rest period.
Methods of assessment of the post-exercise cardiac autonomic recovery: A methodological review
Absence of parasympathetic reactivation after maximal exercise
Analysis of the dynamics of non-stationary RRi series
One of the reasons for selecting the Rest and the Recovery periods is due to its stationary behavior. Classical HRV indices expect that the statistical properties of the RRi signal are stable as a function of time. Therefore, extracting classical indices (Time and Frequency domain) in non-stationary segments might bring misleading results.
Let's take a look at the RRi series at the peak of the maximal effort exercise:
End of explanation
"""
from hrv.nonstationary import time_varying
tv_results = time_varying(fq_rri, seg_size=30, overlap=0)
fig, ax = tv_results.plot(index="rmssd", marker="o", color="k")
"""
Explanation: As shown in the above picture, the RRi series during exercise is non-stationary and for this reason, classical analyses are not recommended.
To overcome the non-stationary behavior and still extract information about the dynamics of HRV in experiments involving physical exercise or tilt maneuvers, it is possible to use the time-varying method, which consists of splitting the RRi signal into smaller segments (e.g. 30 s) and calculating the time domain indices of each adjacent segment.
There are also frequency domain analyses of adjacent smaller segments of the RRi signal, such as the Short Time Fourier Transform, but these are still a work in progress in the hrv module.
End of explanation
"""
|
darioizzo/d-CGP
|
doc/sphinx/notebooks/real_world2.ipynb
|
gpl-3.0
|
# Some necessary imports.
import dcgpy
import pygmo as pg
import numpy as np
# Sympy is nice to have for basic symbolic manipulation.
from sympy import init_printing
from sympy.parsing.sympy_parser import *
init_printing()
# Fundamental for plotting.
from matplotlib import pyplot as plt
%matplotlib inline
"""
Explanation: A real-world case (Physics: dynamics)
In this tutorial we will be using data from a real-world case. The data comes from a piecewise continuous function representing the gravitational interaction between two swarms of particles. It is of interest to represent such an interaction with a single continuous function, albeit introducing some error. If successful, this would allow us to gain some analytical insight into the qualitative stability of the resulting orbits, as well as to make use of methods requiring high-order continuity to study the resulting dynamical system.
The equation is (derived from a work by Francesco Biscani):
$$
a(x) = \left\{
\begin{array}{ll}
\frac{x^3 - 18x+32}{32} & x < 2 \\
\frac{1}{x^2} & x \ge 2
\end{array}
\right.
$$
It is important, on this problem, to respect the asymptotic behaviour of the acceleration so that $\lim_{x\rightarrow \infty}a(x) = \frac 1{x^2}$.
End of explanation
"""
X = np.linspace(0,15, 100)
Y = X * ((X**3) - 18 * X + 32) / 32
Y[X>2] = 1. / X[X>2]**2
X = np.reshape(X, (100,1))
Y = np.reshape(Y, (100,1))
# And we plot them as to visualize the problem.
_ = plt.plot(X, Y, '.')
_ = plt.title('Acceleration')
_ = plt.xlabel('x')
_ = plt.ylabel('a')
"""
Explanation: 1 - The raw data
Since the asymptotic behaviour is important, we place the majority of points in the $x>2$ region. Note that the definition of the grid (i.e. how many points and where) is fundamental and has a great impact on the search performance.
End of explanation
"""
# We define our kernel set, that is the mathematical operators we will
# want our final model to possibly contain. What to choose in here is left
# to the competence and knowledge of the user. For this particular application we want to mainly look into rational
#functions. Note we do not include the difference as that can be obtained via negative constants
ss = dcgpy.kernel_set_double(["sum", "mul","pdiv"])
# We instantiate the symbolic regression optimization problem (note: many important options are here not
# specified and thus set to their default values).
# Note that we allow for three constants in the final expression
udp = dcgpy.symbolic_regression(points = X, labels = Y, kernels=ss(), n_eph=3, rows =1, cols=20, levels_back=21, multi_objective=True)
print(udp)
"""
Explanation: 2 - The symbolic regression problem
End of explanation
"""
# We instantiate here the evolutionary strategy we want to use to search for models.
# In this case we use a multiple objective memetic algorithm.
uda = dcgpy.momes4cgp(gen = 3000, max_mut = 4)
"""
Explanation: 4 - The search algorithm
End of explanation
"""
prob = pg.problem(udp)
algo = pg.algorithm(uda)
# Note that the screen output will happen on the terminal, not on your Jupyter notebook.
# It can be recovered afterwards from the log.
algo.set_verbosity(10)
pop = pg.population(prob, 20)
pop = algo.evolve(pop)
# This extract the population individual with lowest loss
idx = np.argmin(pop.get_f(), axis=0)[0]
print("Best loss (MSE) found is: ", pop.get_f()[idx][0])
"""
Explanation: 5 - The search
End of explanation
"""
pop.get_f()
# Let's have a look at the symbolic representation of our model (using sympy)
parse_expr(udp.prettier(pop.get_x()[idx]))
# And let's see what our model actually predicts on the inputs
Y_pred = udp.predict(X, pop.get_x()[idx])
# Let's compare the prediction to the data
_ = plt.plot(X, Y_pred, 'r.')
_ = plt.plot(X, Y, '.', alpha=0.2)
_ = plt.title('Acceleration')
_ = plt.xlabel('x')
_ = plt.ylabel('a')
print("Values for the constants: ", pop.get_x()[idx][:3])
"""
Explanation: 6 - Inspecting the solution
End of explanation
"""
|
nproctor/phys202-2015-work
|
assignments/midterm/InteractEx06.ipynb
|
mit
|
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
from IPython.display import Image
from IPython.html.widgets import interact, interactive, fixed
"""
Explanation: Interact Exercise 6
Imports
Put the standard imports for Matplotlib, Numpy and the IPython widgets in the following cell.
End of explanation
"""
Image('fermidist.png')
"""
Explanation: Exploring the Fermi distribution
In quantum statistics, the Fermi-Dirac distribution is related to the probability that a particle will be in a quantum state with energy $\epsilon$. The equation for the distribution $F(\epsilon)$ is:
End of explanation
"""
def fermidist(energy, mu, kT):
"""Compute the Fermi distribution at energy, mu and kT."""
a = np.exp((energy - mu)/kT) +1
answer = 1/a
return answer
assert np.allclose(fermidist(0.5, 1.0, 10.0), 0.51249739648421033)
assert np.allclose(fermidist(np.linspace(0.0,1.0,10), 1.0, 10.0),
np.array([ 0.52497919, 0.5222076 , 0.51943465, 0.5166605 , 0.51388532,
0.51110928, 0.50833256, 0.50555533, 0.50277775, 0.5 ]))
"""
Explanation: In this equation:
$\epsilon$ is the single particle energy.
$\mu$ is the chemical potential, which is related to the total number of particles.
$k$ is the Boltzmann constant.
$T$ is the temperature in Kelvin.
In the cell below, typeset this equation using LaTeX:
\begin{equation}
F(\epsilon) = \frac{1}{e^{(\epsilon-\mu)/kT} + 1}
\end{equation}
Define a function fermidist(energy, mu, kT) that computes the distribution function for a given value of energy, chemical potential mu and temperature kT. Note here, kT is a single variable with units of energy. Make sure your function works with an array and don't use any for or while loops in your code.
End of explanation
"""
def plot_fermidist(mu, kT):
energy = np.linspace(0,10.0, 40)
plt.plot(energy, fermidist(energy, mu, kT))
plt.xlabel("Energy")
plt.ylabel("Fermi")
plt.title("Fermi Distribution")
plot_fermidist(4.0, 1.0)
assert True # leave this for grading the plot_fermidist function
"""
Explanation: Write a function plot_fermidist(mu, kT) that plots the Fermi distribution $F(\epsilon)$ as a function of $\epsilon$ as a line plot for the parameters mu and kT.
Use energies over the range $[0,10.0]$ and a suitable number of points.
Choose an appropriate x and y limit for your visualization.
Label your x and y axis and the overall visualization.
Customize your plot in 3 other ways to make it effective and beautiful.
End of explanation
"""
interact(plot_fermidist, mu=(0.0,5.0), kT=(0.1,10.0));
"""
Explanation: Use interact with plot_fermidist to explore the distribution:
For mu use a floating point slider over the range $[0.0,5.0]$.
for kT use a floating point slider over the range $[0.1,10.0]$.
End of explanation
"""
|
Danghor/Formal-Languages
|
Ply/Look-Ahead.ipynb
|
gpl-2.0
|
import ply.lex as lex
tokens = [ 'USELESS' ]
literals = ['U', 'V', 'W', 'X']
def t_USELESS(t):
r'This will never be used.'
__file__ = 'main'
lexer = lex.lex()
"""
Explanation: Dealing with Lookahead Conflicts
This notebook discusses conflicts that have their origin in insufficient lookahead.
We will discuss the following grammar:
```
a : b 'U' 'V'
| c 'U' 'W'
b : 'X'
c : 'X'
```
Specification of the Scanner
We implement a minimal scanner. Below we need to declare a dummy token. The reason is that Ply only works when the list
tokens is defined and contains at least one token.
End of explanation
"""
import ply.yacc as yacc
"""
Explanation: Specification of the Parser
End of explanation
"""
start = 'a'
"""
Explanation: The start variable of our grammar is a, but we don't have to specify that. The default
start variable is the first variable that is defined.
End of explanation
"""
def p_a(p):
"""
a : b 'U' 'V'
| c 'U' 'W'
b : 'X'
c : 'X'
"""
pass
def p_error(p):
if p:
print(f'Syntax error at {p.value}.')
else:
print('Syntax error at end of input.')
"""
Explanation: We can specify multiple grammar rules in a single function. In this case, we have used the pass statement
as we just want to generate some conflicts.
End of explanation
"""
parser = yacc.yacc(write_tables=False, debug=True)
"""
Explanation: Setting the optional argument write_tables to False <B style="color:red">is required</B> to prevent an obscure bug where the parser generator tries to read an empty parse table.
End of explanation
"""
!cat parser.out
"""
Explanation: Let's look at the action table that is generated. Shift/reduce conflicts are always resolved in favour of shifting, while reduce/reduce conflicts (like the one produced by this grammar) are resolved in favour of the rule that appears first.
End of explanation
"""
|
ES-DOC/esdoc-jupyterhub
|
notebooks/ec-earth-consortium/cmip6/models/sandbox-3/ocean.ipynb
|
gpl-3.0
|
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'ec-earth-consortium', 'sandbox-3', 'ocean')
"""
Explanation: ES-DOC CMIP6 Model Properties - Ocean
MIP Era: CMIP6
Institute: EC-EARTH-CONSORTIUM
Source ID: SANDBOX-3
Topic: Ocean
Sub-Topics: Timestepping Framework, Advection, Lateral Physics, Vertical Physics, Uplow Boundaries, Boundary Forcing.
Properties: 133 (101 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:00
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
"""
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Authors
Set document authors
End of explanation
"""
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Contributors
Specify document contributors
End of explanation
"""
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
"""
Explanation: Document Publication
Specify document publication status
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Seawater Properties
3. Key Properties --> Bathymetry
4. Key Properties --> Nonoceanic Waters
5. Key Properties --> Software Properties
6. Key Properties --> Resolution
7. Key Properties --> Tuning Applied
8. Key Properties --> Conservation
9. Grid
10. Grid --> Discretisation --> Vertical
11. Grid --> Discretisation --> Horizontal
12. Timestepping Framework
13. Timestepping Framework --> Tracers
14. Timestepping Framework --> Baroclinic Dynamics
15. Timestepping Framework --> Barotropic
16. Timestepping Framework --> Vertical Physics
17. Advection
18. Advection --> Momentum
19. Advection --> Lateral Tracers
20. Advection --> Vertical Tracers
21. Lateral Physics
22. Lateral Physics --> Momentum --> Operator
23. Lateral Physics --> Momentum --> Eddy Viscosity Coeff
24. Lateral Physics --> Tracers
25. Lateral Physics --> Tracers --> Operator
26. Lateral Physics --> Tracers --> Eddy Diffusity Coeff
27. Lateral Physics --> Tracers --> Eddy Induced Velocity
28. Vertical Physics
29. Vertical Physics --> Boundary Layer Mixing --> Details
30. Vertical Physics --> Boundary Layer Mixing --> Tracers
31. Vertical Physics --> Boundary Layer Mixing --> Momentum
32. Vertical Physics --> Interior Mixing --> Details
33. Vertical Physics --> Interior Mixing --> Tracers
34. Vertical Physics --> Interior Mixing --> Momentum
35. Uplow Boundaries --> Free Surface
36. Uplow Boundaries --> Bottom Boundary Layer
37. Boundary Forcing
38. Boundary Forcing --> Momentum --> Bottom Friction
39. Boundary Forcing --> Momentum --> Lateral Friction
40. Boundary Forcing --> Tracers --> Sunlight Penetration
41. Boundary Forcing --> Tracers --> Fresh Water Forcing
1. Key Properties
Ocean key properties
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of ocean model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of ocean model code (NEMO 3.6, MOM 5.0,...)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_family')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OGCM"
# "slab ocean"
# "mixed layer ocean"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.3. Model Family
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of ocean model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.basic_approximations')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Primitive equations"
# "Non-hydrostatic"
# "Boussinesq"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.4. Basic Approximations
Is Required: TRUE Type: ENUM Cardinality: 1.N
Basic approximations made in the ocean.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Potential temperature"
# "Conservative temperature"
# "Salinity"
# "U-velocity"
# "V-velocity"
# "W-velocity"
# "SSH"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.5. Prognostic Variables
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of prognostic variables in the ocean component.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Linear"
# "Wright, 1997"
# "Mc Dougall et al."
# "Jackett et al. 2006"
# "TEOS 2010"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 2. Key Properties --> Seawater Properties
Physical properties of seawater in ocean
2.1. Eos Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of EOS for sea water
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_temp')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Potential temperature"
# "Conservative temperature"
# TODO - please enter value(s)
"""
Explanation: 2.2. Eos Functional Temp
Is Required: TRUE Type: ENUM Cardinality: 1.1
Temperature used in EOS for sea water
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_salt')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Practical salinity Sp"
# "Absolute salinity Sa"
# TODO - please enter value(s)
"""
Explanation: 2.3. Eos Functional Salt
Is Required: TRUE Type: ENUM Cardinality: 1.1
Salinity used in EOS for sea water
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pressure (dbars)"
# "Depth (meters)"
# TODO - please enter value(s)
"""
Explanation: 2.4. Eos Functional Depth
Is Required: TRUE Type: ENUM Cardinality: 1.1
Depth or pressure used in EOS for sea water ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_freezing_point')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TEOS 2010"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 2.5. Ocean Freezing Point
Is Required: TRUE Type: ENUM Cardinality: 1.1
Equation used to compute the freezing point (in deg C) of seawater, as a function of salinity and pressure
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_specific_heat')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 2.6. Ocean Specific Heat
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Specific heat in ocean (cpocean) in J/(kg K)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_reference_density')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 2.7. Ocean Reference Density
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Boussinesq reference density (rhozero) in kg / m3
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.reference_dates')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Present day"
# "21000 years BP"
# "6000 years BP"
# "LGM"
# "Pliocene"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 3. Key Properties --> Bathymetry
Properties of bathymetry in ocean
3.1. Reference Dates
Is Required: TRUE Type: ENUM Cardinality: 1.1
Reference date of bathymetry
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 3.2. Type
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the bathymetry fixed in time in the ocean ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.ocean_smoothing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 3.3. Ocean Smoothing
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe any smoothing or hand editing of bathymetry in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.source')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 3.4. Source
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe source of bathymetry in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.nonoceanic_waters.isolated_seas')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4. Key Properties --> Nonoceanic Waters
Non oceanic waters treatment in ocean
4.1. Isolated Seas
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how the treatment of isolated seas is performed
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.nonoceanic_waters.river_mouth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4.2. River Mouth
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how river mouth mixing or estuaries specific treatment is performed
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5. Key Properties --> Software Properties
Software properties of ocean code
5.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6. Key Properties --> Resolution
Resolution in the ocean grid
6.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.2. Canonical Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Expression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.range_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.3. Range Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Range of horizontal resolution with spatial details, eg. 50(Equator)-100km or 0.1-0.5 degrees etc.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 6.4. Number Of Horizontal Gridpoints
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Total number of horizontal (XY) points (or degrees of freedom) on computational grid.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 6.5. Number Of Vertical Levels
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of vertical levels resolved on computational grid.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 6.6. Is Adaptive Grid
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Default is False. Set true if grid resolution changes during execution.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.thickness_level_1')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 6.7. Thickness Level 1
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Thickness of first surface ocean level (in meters)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7. Key Properties --> Tuning Applied
Tuning methodology for ocean component
7.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process oriented metrics, and the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7.2. Global Mean Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List set of metrics of the global mean state used in tuning model/component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7.3. Regional Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List of regional metrics of mean state (e.g THC, AABW, regional means etc) used in tuning model/component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7.4. Trend Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List observed trend metrics used in tuning model/component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8. Key Properties --> Conservation
Conservation in the ocean component
8.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Brief description of conservation methodology
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.scheme')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Energy"
# "Enstrophy"
# "Salt"
# "Volume of ocean"
# "Momentum"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 8.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.N
Properties conserved in the ocean by the numerical schemes
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.consistency_properties')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.3. Consistency Properties
Is Required: FALSE Type: STRING Cardinality: 0.1
Any additional consistency properties (energy conversion, pressure gradient discretisation, ...)?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.corrected_conserved_prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.4. Corrected Conserved Prognostic Variables
Is Required: FALSE Type: STRING Cardinality: 0.1
Set of variables which are conserved by more than the numerical scheme alone.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.was_flux_correction_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 8.5. Was Flux Correction Used
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Does conservation involve flux correction ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9. Grid
Ocean grid
9.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of grid in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.vertical.coordinates')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Z-coordinate"
# "Z*-coordinate"
# "S-coordinate"
# "Isopycnic - sigma 0"
# "Isopycnic - sigma 2"
# "Isopycnic - sigma 4"
# "Isopycnic - other"
# "Hybrid / Z+S"
# "Hybrid / Z+isopycnic"
# "Hybrid / other"
# "Pressure referenced (P)"
# "P*"
# "Z**"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 10. Grid --> Discretisation --> Vertical
Properties of vertical discretisation in ocean
10.1. Coordinates
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of vertical coordinates in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.vertical.partial_steps')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 10.2. Partial Steps
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Using partial steps with Z or Z* vertical coordinate in ocean ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Lat-lon"
# "Rotated north pole"
# "Two north poles (ORCA-style)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 11. Grid --> Discretisation --> Horizontal
Type of horizontal discretisation scheme in ocean
11.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal grid type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.staggering')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Arakawa B-grid"
# "Arakawa C-grid"
# "Arakawa E-grid"
# "N/a"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 11.2. Staggering
Is Required: FALSE Type: ENUM Cardinality: 0.1
Horizontal grid staggering type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Finite difference"
# "Finite volumes"
# "Finite elements"
# "Unstructured grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 11.3. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal discretisation scheme in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 12. Timestepping Framework
Ocean Timestepping Framework
12.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of time stepping in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.diurnal_cycle')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Via coupling"
# "Specific treatment"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 12.2. Diurnal Cycle
Is Required: TRUE Type: ENUM Cardinality: 1.1
Diurnal cycle type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.tracers.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Leap-frog + Asselin filter"
# "Leap-frog + Periodic Euler"
# "Predictor-corrector"
# "Runge-Kutta 2"
# "AM3-LF"
# "Forward-backward"
# "Forward operator"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13. Timestepping Framework --> Tracers
Properties of tracers time stepping in ocean
13.1. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Tracers time stepping scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.tracers.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 13.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Tracers time step (in seconds)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Preconditioned conjugate gradient"
# "Sub cyling"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 14. Timestepping Framework --> Baroclinic Dynamics
Baroclinic dynamics in ocean
14.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Baroclinic dynamics type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Leap-frog + Asselin filter"
# "Leap-frog + Periodic Euler"
# "Predictor-corrector"
# "Runge-Kutta 2"
# "AM3-LF"
# "Forward-backward"
# "Forward operator"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 14.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Baroclinic dynamics scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 14.3. Time Step
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Baroclinic time step (in seconds)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.barotropic.splitting')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "split explicit"
# "implicit"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15. Timestepping Framework --> Barotropic
Barotropic time stepping in ocean
15.1. Splitting
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time splitting method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.barotropic.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 15.2. Time Step
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Barotropic time step (in seconds)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.vertical_physics.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 16. Timestepping Framework --> Vertical Physics
Vertical physics time stepping in ocean
16.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Details of vertical time stepping in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17. Advection
Ocean advection
17.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of advection in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.momentum.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Flux form"
# "Vector form"
# TODO - please enter value(s)
"""
Explanation: 18. Advection --> Momentum
Properties of lateral momentum advection scheme in ocean
18.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of lateral momentum advection scheme in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.momentum.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 18.2. Scheme Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of ocean momentum advection scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.momentum.ALE')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 18.3. ALE
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Using ALE for vertical advection ? (if vertical coordinates are sigma)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 19. Advection --> Lateral Tracers
Properties of lateral tracer advection scheme in ocean
19.1. Order
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Order of lateral tracer advection scheme in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.flux_limiter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 19.2. Flux Limiter
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Monotonic flux limiter for lateral tracer advection scheme in ocean ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.effective_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 19.3. Effective Order
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Effective order of limited lateral tracer advection scheme in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 19.4. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Descriptive text for lateral tracer advection scheme in ocean (e.g. MUSCL, PPM-H5, PRATHER,...)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.passive_tracers')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ideal age"
# "CFC 11"
# "CFC 12"
# "SF6"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 19.5. Passive Tracers
Is Required: FALSE Type: ENUM Cardinality: 0.N
Passive tracers advected
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.passive_tracers_advection')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 19.6. Passive Tracers Advection
Is Required: FALSE Type: STRING Cardinality: 0.1
Is advection of passive tracers different than active ? if so, describe.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.vertical_tracers.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 20. Advection --> Vertical Tracers
Properties of vertical tracer advection scheme in ocean
20.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Descriptive text for vertical tracer advection scheme in ocean (e.g. MUSCL, PPM-H5, PRATHER,...)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.vertical_tracers.flux_limiter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 20.2. Flux Limiter
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Monotonic flux limiter for vertical tracer advection scheme in ocean ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 21. Lateral Physics
Ocean lateral physics
21.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of lateral physics in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Eddy active"
# "Eddy admitting"
# TODO - please enter value(s)
"""
Explanation: 21.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of transient eddy representation in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Horizontal"
# "Isopycnal"
# "Isoneutral"
# "Geopotential"
# "Iso-level"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 22. Lateral Physics --> Momentum --> Operator
Properties of lateral physics operator for momentum in ocean
22.1. Direction
Is Required: TRUE Type: ENUM Cardinality: 1.1
Direction of lateral physics momentum scheme in the ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Harmonic"
# "Bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 22.2. Order
Is Required: TRUE Type: ENUM Cardinality: 1.1
Order of lateral physics momentum scheme in the ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Second order"
# "Higher order"
# "Flux limiter"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 22.3. Discretisation
Is Required: TRUE Type: ENUM Cardinality: 1.1
Discretisation of lateral physics momentum scheme in the ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Space varying"
# "Time + space varying (Smagorinsky)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 23. Lateral Physics --> Momentum --> Eddy Viscosity Coeff
Properties of eddy viscosity coeff in lateral physics momentum scheme in the ocean
23.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Lateral physics momentum eddy viscosity coeff type in the ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.constant_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 23.2. Constant Coefficient
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant, value of eddy viscosity coeff in lateral physics momentum scheme (in m2/s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.variable_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 23.3. Variable Coefficient
Is Required: FALSE Type: STRING Cardinality: 0.1
If space-varying, describe variations of eddy viscosity coeff in lateral physics momentum scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.coeff_background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 23.4. Coeff Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe background eddy viscosity coeff in lateral physics momentum scheme (give values in m2/s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.coeff_backscatter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 23.5. Coeff Backscatter
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there backscatter in eddy viscosity coeff in lateral physics momentum scheme ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.mesoscale_closure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 24. Lateral Physics --> Tracers
Properties of lateral physics for tracers in ocean
24.1. Mesoscale Closure
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there a mesoscale closure in the lateral physics tracers scheme ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.submesoscale_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 24.2. Submesoscale Mixing
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there a submesoscale mixing parameterisation (i.e Fox-Kemper) in the lateral physics tracers scheme ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Horizontal"
# "Isopycnal"
# "Isoneutral"
# "Geopotential"
# "Iso-level"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 25. Lateral Physics --> Tracers --> Operator
Properties of lateral physics operator for tracers in ocean
25.1. Direction
Is Required: TRUE Type: ENUM Cardinality: 1.1
Direction of lateral physics tracers scheme in the ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Harmonic"
# "Bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 25.2. Order
Is Required: TRUE Type: ENUM Cardinality: 1.1
Order of lateral physics tracers scheme in the ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Second order"
# "Higher order"
# "Flux limiter"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 25.3. Discretisation
Is Required: TRUE Type: ENUM Cardinality: 1.1
Discretisation of lateral physics tracers scheme in the ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Space varying"
# "Time + space varying (Smagorinsky)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 26. Lateral Physics --> Tracers --> Eddy Diffusity Coeff
Properties of eddy diffusity coeff in lateral physics tracers scheme in the ocean
26.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Lateral physics tracers eddy diffusity coeff type in the ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.constant_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 26.2. Constant Coefficient
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant, value of eddy diffusity coeff in lateral physics tracers scheme (in m2/s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.variable_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 26.3. Variable Coefficient
Is Required: FALSE Type: STRING Cardinality: 0.1
If space-varying, describe variations of eddy diffusity coeff in lateral physics tracers scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.coeff_background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 26.4. Coeff Background
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Describe background eddy diffusity coeff in lateral physics tracers scheme (give values in m2/s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.coeff_backscatter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 26.5. Coeff Backscatter
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there backscatter in eddy diffusity coeff in lateral physics tracers scheme ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "GM"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 27. Lateral Physics --> Tracers --> Eddy Induced Velocity
Properties of eddy induced velocity (EIV) in lateral physics tracers scheme in the ocean
27.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of EIV in lateral physics tracers in the ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.constant_val')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 27.2. Constant Val
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If EIV scheme for tracers is constant, specify coefficient value (M2/s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.flux_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 27.3. Flux Type
Is Required: TRUE Type: STRING Cardinality: 1.1
Type of EIV flux (advective or skew)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.added_diffusivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 27.4. Added Diffusivity
Is Required: TRUE Type: STRING Cardinality: 1.1
Type of EIV added diffusivity (constant, flow dependent or none)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 28. Vertical Physics
Ocean Vertical Physics
28.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of vertical physics in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.details.langmuir_cells_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 29. Vertical Physics --> Boundary Layer Mixing --> Details
Properties of vertical physics in ocean
29.1. Langmuir Cells Mixing
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there Langmuir cells mixing in upper ocean ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure - TKE"
# "Turbulent closure - KPP"
# "Turbulent closure - Mellor-Yamada"
# "Turbulent closure - Bulk Mixed Layer"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 30. Vertical Physics --> Boundary Layer Mixing --> Tracers
*Properties of boundary layer (BL) mixing on tracers in the ocean *
30.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of boundary layer mixing for tracers in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 30.2. Closure Order
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If turbulent BL mixing of tracers, specific order of closure (0, 1, 2.5, 3)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 30.3. Constant
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant BL mixing of tracers, specific coefficient (m2/s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 30.4. Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Background BL mixing of tracers coefficient, (schema and value in m2/s - may be none)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure - TKE"
# "Turbulent closure - KPP"
# "Turbulent closure - Mellor-Yamada"
# "Turbulent closure - Bulk Mixed Layer"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 31. Vertical Physics --> Boundary Layer Mixing --> Momentum
*Properties of boundary layer (BL) mixing on momentum in the ocean *
31.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of boundary layer mixing for momentum in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 31.2. Closure Order
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If turbulent BL mixing of momentum, specific order of closure (0, 1, 2.5, 3)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 31.3. Constant
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant BL mixing of momentum, specific coefficient (m2/s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 31.4. Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Background BL mixing of momentum coefficient, (schema and value in m2/s - may be none)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.convection_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Non-penetrative convective adjustment"
# "Enhanced vertical diffusion"
# "Included in turbulence closure"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 32. Vertical Physics --> Interior Mixing --> Details
*Properties of interior mixing in the ocean *
32.1. Convection Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of vertical convection in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.tide_induced_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 32.2. Tide Induced Mixing
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how tide induced mixing is modelled (barotropic, baroclinic, none)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.double_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 32.3. Double Diffusion
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there double diffusion
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.shear_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 32.4. Shear Mixing
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there interior shear mixing
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure / TKE"
# "Turbulent closure - Mellor-Yamada"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 33. Vertical Physics --> Interior Mixing --> Tracers
*Properties of interior mixing on tracers in the ocean *
33.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of interior mixing for tracers in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 33.2. Constant
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant interior mixing of tracers, specific coefficient (m2/s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.profile')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 33.3. Profile
Is Required: TRUE Type: STRING Cardinality: 1.1
Is the background interior mixing using a vertical profile for tracers (i.e is NOT constant) ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 33.4. Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Background interior mixing of tracers coefficient, (schema and value in m2/s - may be none)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure / TKE"
# "Turbulent closure - Mellor-Yamada"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 34. Vertical Physics --> Interior Mixing --> Momentum
*Properties of interior mixing on momentum in the ocean *
34.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of interior mixing for momentum in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 34.2. Constant
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant interior mixing of momentum, specific coefficient (m2/s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.profile')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 34.3. Profile
Is Required: TRUE Type: STRING Cardinality: 1.1
Is the background interior mixing using a vertical profile for momentum (i.e is NOT constant) ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 34.4. Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Background interior mixing of momentum coefficient, (schema and value in m2/s - may be none)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 35. Uplow Boundaries --> Free Surface
Properties of free surface in ocean
35.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of free surface in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Linear implicit"
# "Linear filtered"
# "Linear semi-explicit"
# "Non-linear implicit"
# "Non-linear filtered"
# "Non-linear semi-explicit"
# "Fully explicit"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 35.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Free surface scheme in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.embeded_seaice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 35.3. Embeded Seaice
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the sea-ice embedded in the ocean model (instead of levitating) ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 36. Uplow Boundaries --> Bottom Boundary Layer
Properties of bottom boundary layer in ocean
36.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of bottom boundary layer in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.type_of_bbl')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Diffusive"
# "Acvective"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 36.2. Type Of Bbl
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of bottom boundary layer in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.lateral_mixing_coef')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 36.3. Lateral Mixing Coef
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If bottom BL is diffusive, specify value of lateral mixing coefficient (in m2/s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.sill_overflow')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 36.4. Sill Overflow
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe any specific treatment of sill overflows
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 37. Boundary Forcing
Ocean boundary forcing
37.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of boundary forcing in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.surface_pressure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 37.2. Surface Pressure
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how surface pressure is transmitted to ocean (via sea-ice, nothing specific,...)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.momentum_flux_correction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 37.3. Momentum Flux Correction
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe any type of ocean surface momentum flux correction and, if applicable, how it is applied and where.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers_flux_correction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 37.4. Tracers Flux Correction
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe any type of ocean surface tracers flux correction and, if applicable, how it is applied and where.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.wave_effects')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 37.5. Wave Effects
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how wave effects are modelled at ocean surface.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.river_runoff_budget')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 37.6. River Runoff Budget
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how river runoff from land surface is routed to ocean and any global adjustment done.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.geothermal_heating')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 37.7. Geothermal Heating
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how geothermal heating is present at ocean bottom.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.momentum.bottom_friction.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Linear"
# "Non-linear"
# "Non-linear (drag function of speed of tides)"
# "Constant drag coefficient"
# "None"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 38. Boundary Forcing --> Momentum --> Bottom Friction
Properties of momentum bottom friction in ocean
38.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of momentum bottom friction in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.momentum.lateral_friction.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Free-slip"
# "No-slip"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 39. Boundary Forcing --> Momentum --> Lateral Friction
Properties of momentum lateral friction in ocean
39.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of momentum lateral friction in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "1 extinction depth"
# "2 extinction depth"
# "3 extinction depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 40. Boundary Forcing --> Tracers --> Sunlight Penetration
Properties of sunlight penetration scheme in ocean
40.1. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of sunlight penetration scheme in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.ocean_colour')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 40.2. Ocean Colour
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the ocean sunlight penetration scheme ocean colour dependent ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.extinction_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 40.3. Extinction Depth
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe and list extinctions depths for sunlight penetration scheme (if applicable).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.from_atmopshere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Freshwater flux"
# "Virtual salt flux"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 41. Boundary Forcing --> Tracers --> Fresh Water Forcing
Properties of surface fresh water forcing in ocean
41.1. From Atmopshere
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of surface fresh water forcing from atmos in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.from_sea_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Freshwater flux"
# "Virtual salt flux"
# "Real salt flux"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 41.2. From Sea Ice
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of surface fresh water forcing from sea-ice in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.forced_mode_restoring')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 41.3. Forced Mode Restoring
Is Required: TRUE Type: STRING Cardinality: 1.1
Type of surface salinity restoring in forced mode (OMIP)
End of explanation
"""
|
petrs/ECTester
|
util/plot_dh.ipynb
|
mit
|
%matplotlib notebook
import numpy as np
from scipy.stats import describe
from scipy.stats import norm as norm_dist
from scipy.stats.mstats import mquantiles
from math import log, sqrt
import matplotlib.pyplot as plt
from matplotlib import ticker, colors, gridspec
from copy import deepcopy
from utils import plot_hist, moving_average, hw, time_scale, hist_size_func
from binascii import unhexlify
from IPython.display import display, HTML
from ipywidgets import interact, interactive, fixed, interact_manual
import ipywidgets as widgets
import tabulate
"""
Explanation: Analysis of key agreement data
End of explanation
"""
# File name with output from ECTesterReader or ECTesterStandalone ECDH.
fname = "filename.csv"
# The time unit used in displaying the plots. One of "milli", "micro", "nano".
# WARNING: Using nano might lead to very large plots/histograms, cause the
# notebook to freeze or run out of memory, and give poor visualization
# quality due to noise and low density.
time_unit = "milli"
# A number which will be used to divide the time into sub-units, e.g. for 5, time will be in fifths of units
scaling_factor = 1
# The amount of entries skipped from the beginning of the file, as they are usually outliers.
skip_first = 10
# Whether to plot things in logarithmic scale or not.
log_scale = False
# Whether to trim the time data outside the 1 - 99 percentile range (adjust below). Quite useful.
trim = True
# How much to trim? Either a number in [0,1] signifying a quantile, or an absolute value signifying a threshold
trim_low = 0.01
trim_high = 0.99
# Graphical (matplotlib) style name
style = "ggplot"
# Color map to use, and what color to assign to "bad" values (necessary for log_scale)
color_map = plt.cm.viridis
color_map_bad = "black"
# What function to use to calculate number of histogram bins of time
# one of "sqrt", "sturges", "rice", "scott" and "fd" or a number specifying the number of bins
hist_size = "sturges"
"""
Explanation: Settings
Enter your input below.
End of explanation
"""
# Setup plot style
plt.style.use(style)
cmap = deepcopy(color_map)
cmap.set_bad(color_map_bad)
# Normalization, linear or log.
if log_scale:
norm = colors.LogNorm()
else:
norm = colors.Normalize()
# Read the header line.
with open(fname, "r") as f:
header = f.readline()
header_names = header.split(";")
if len(header_names) != 5:
print("Bad data?")
exit(1)
# Load the data
hx = lambda x: int(x, 16)
data = np.genfromtxt(fname, delimiter=";", skip_header=1, converters={2: unhexlify, 3: hx, 4: hx},
dtype=np.dtype([("index", "u4"), ("time", "u4"), ("pub", "O"), ("priv", "O"), ("secret", "O")]))
# Skip first (outliers?)
data = data[skip_first:]
# Setup the data
orig_time_unit = header_names[1].split("[")[1][:-1]
time_disp_unit = time_scale(data["time"], orig_time_unit, time_unit, scaling_factor)
# Trim times
quant_low_bound = trim_low if 0 <= trim_low <= 1 else 0.01
quant_high_bound = trim_high if 0 <= trim_high <= 1 else 0.95
quantiles = mquantiles(data["time"], prob=(quant_low_bound, 0.25, 0.5, 0.75, quant_high_bound))
if trim:
low_bound = quantiles[0] if 0 <= trim_low <= 1 else trim_low
high_bound = quantiles[4] if 0 <= trim_high <= 1 else trim_high
data_trimmed = data[np.logical_and(data["time"] >= low_bound,
data["time"] <= high_bound)]
quantiles_trim = mquantiles(data_trimmed["time"], prob=(quant_low_bound, 0.25, 0.5, 0.75, quant_high_bound))
else:
low_bound = None
high_bound = None
data_trimmed = data
    quantiles_trim = quantiles
description = describe(data["time"])
description_trim = describe(data_trimmed["time"])
max_time = description.minmax[1]
min_time = description.minmax[0]
bit_size = len(bin(max(data["priv"]))) - 2
byte_size = (bit_size + 7) // 8
bit_size = byte_size * 8
hist_size_time = hist_size_func(hist_size)(description.nobs, min_time, max_time, description.variance, quantiles[1], quantiles[3])
hist_size_time_trim = hist_size_func(hist_size)(description_trim.nobs, description_trim.minmax[0], description_trim.minmax[1], description_trim.variance, quantiles_trim[1], quantiles_trim[3])
if hist_size_time < 30:
hist_size_time = max_time - min_time
if hist_size_time_trim < 30:
hist_size_time_trim = description_trim.minmax[1] - description_trim.minmax[0]
"""
Explanation: Data processing
End of explanation
"""
display("Raw")
desc = [("N", "min, max", "mean", "variance", "skewness", "kurtosis"),
description]
display(HTML(tabulate.tabulate(desc, tablefmt="html")))
display("Trimmed")
desc = [("N", "min, max", "mean", "variance", "skewness", "kurtosis"),
description_trim]
display(HTML(tabulate.tabulate(desc, tablefmt="html")))
"""
Explanation: Analysis
Summary
End of explanation
"""
tbl = [(quant_low_bound, "0.25", "0.5", "0.75", quant_high_bound),
list(map(lambda x: "{} {}".format(x, time_disp_unit), quantiles))]
display(HTML(tabulate.tabulate(tbl, tablefmt="html")))
"""
Explanation: Selected quantiles
End of explanation
"""
display("Bitsize: {}".format(bit_size))
display("Histogram time bins: {}".format(hist_size_time))
display("Histogram time bins(trimmed): {}".format(hist_size_time_trim))
"""
Explanation: Info
End of explanation
"""
fig_private = plt.figure(figsize=(10.5, 8), dpi=90)
axe_private = fig_private.add_subplot(1, 1, 1, title="Private key MSB vs key agreement time")
priv_msb = np.array(list(map(lambda x: x >> (bit_size - 8), data_trimmed["priv"])), dtype=np.dtype("u1"))
max_msb = max(priv_msb)
min_msb = min(priv_msb)
heatmap, xedges, yedges = np.histogram2d(priv_msb, data_trimmed["time"],
bins=[max_msb - min_msb + 1, hist_size_time_trim])
extent = [min_msb, max_msb, yedges[0], yedges[-1]]
im = axe_private.imshow(heatmap.T, extent=extent, aspect="auto", cmap=cmap, origin="lower",
interpolation="nearest", norm=norm)
axe_private.set_xlabel("private key MSB value")
axe_private.set_ylabel("key agreement time ({})".format(time_disp_unit))
fig_private.colorbar(im, ax=axe_private)
fig_private.tight_layout()
del priv_msb
"""
Explanation: Plots
Private key MSB vs time heatmap
The heatmap should show uncorrelated variables.
End of explanation
"""
fig_priv_hist = plt.figure(figsize=(10.5, 12), dpi=90)
gs = gridspec.GridSpec(2, 1, height_ratios=[2.5, 1])
axe_priv_hist = fig_priv_hist.add_subplot(gs[0], title="Private key Hamming weight vs key agreement time")
axe_priv_hist_hw = fig_priv_hist.add_subplot(gs[1], sharex=axe_priv_hist, title="Private key Hamming weight")
priv_hw = np.array(list(map(hw, data_trimmed["priv"])), dtype=np.dtype("u2"))
h, xe, ye = np.histogram2d(priv_hw, data_trimmed["time"], bins=[max(priv_hw) - min(priv_hw), hist_size_time_trim])
im = axe_priv_hist.imshow(h.T, origin="lower", cmap=cmap, aspect="auto", extent=[xe[0], xe[-1], ye[0], ye[-1]], norm=norm)
axe_priv_hist.axvline(x=bit_size//2, alpha=0.7, linestyle="dotted", color="white", label=str(bit_size//2) + " bits")
axe_priv_hist.set_xlabel("private key Hamming weight")
axe_priv_hist.set_ylabel("key agreement time ({})".format(time_disp_unit))
axe_priv_hist.legend(loc="best")
plot_hist(axe_priv_hist_hw, priv_hw, "private key Hamming weight", log_scale, None)
param = norm_dist.fit(priv_hw)
pdf_range = np.arange(min(priv_hw), max(priv_hw))
norm_pdf = norm_dist.pdf(pdf_range, *param[:-2], loc=param[-2], scale=param[-1]) * description_trim.nobs
axe_priv_hist_hw.plot(pdf_range, norm_pdf, label="fitted normal distribution")
axe_priv_hist_hw.legend(loc="best")
fig_priv_hist.tight_layout()
fig_priv_hist.colorbar(im, ax=[axe_priv_hist, axe_priv_hist_hw])
display(HTML("<b>Private key Hamming weight fitted with normal distribution:</b>"))
display(HTML(tabulate.tabulate([("Mean", "Variance"), param], tablefmt="html")))
del priv_hw
"""
Explanation: Private key Hamming Weight vs time heatmap
The heatmap should show uncorrelated variables.
Also contains a private key Hamming Weight histogram, which should be binomially distributed.
End of explanation
"""
fig_ka_hist = plt.figure(figsize=(10.5, 8), dpi=90)
axe_hist_full = fig_ka_hist.add_subplot(2, 1, 1)
axe_hist_trim = fig_ka_hist.add_subplot(2, 1, 2)
plot_hist(axe_hist_full, data["time"], "key agreement time ({})".format(time_disp_unit), log_scale, hist_size_time);
plot_hist(axe_hist_trim, data_trimmed["time"], "key agreement time ({})".format(time_disp_unit), log_scale, hist_size_time_trim);
fig_ka_hist.tight_layout()
"""
Explanation: Key agreement time histogram
End of explanation
"""
fig_avg = plt.figure(figsize=(10.5, 7), dpi=90)
axe_avg = fig_avg.add_subplot(1, 1, 1, title="Moving average of key agreement time")
avg_100 = moving_average(data["time"], 100)
avg_1000 = moving_average(data["time"], 1000)
axe_avg.plot(avg_100, label="window = 100")
axe_avg.plot(avg_1000, label="window = 1000")
if low_bound is not None:
axe_avg.axhline(y=low_bound, alpha=0.7, linestyle="dotted", color="green", label="Low trim bound = {}".format(low_bound))
if high_bound is not None:
    axe_avg.axhline(y=high_bound, alpha=0.7, linestyle="dotted", color="orange", label="High trim bound = {}".format(high_bound))
axe_avg.set_ylabel("key agreement time ({})".format(time_disp_unit))
axe_avg.set_xlabel("index")
axe_avg.legend(loc="best")
fig_avg.tight_layout()
del avg_100, avg_1000
"""
Explanation: Moving averages of key agreement time
End of explanation
"""
fig_priv_hists = plt.figure(figsize=(10.5, 8), dpi=90)
priv_msb = np.array(list(map(lambda x: x >> (bit_size - 8), data["priv"])), dtype=np.dtype("u1"))
priv_lsb = np.array(list(map(lambda x: x & 0xff, data["priv"])), dtype=np.dtype("u1"))
axe_msb_s_hist = fig_priv_hists.add_subplot(2, 1, 1, title="Private key MSB")
axe_lsb_s_hist = fig_priv_hists.add_subplot(2, 1, 2, title="Private key LSB")
msb_h = plot_hist(axe_msb_s_hist, priv_msb, "private key MSB", log_scale, False, False)
lsb_h = plot_hist(axe_lsb_s_hist, priv_lsb, "private key LSB", log_scale, False, False)
fig_priv_hists.tight_layout()
del priv_msb, priv_lsb
"""
Explanation: Private key MSB and LSB histograms
Expected to be uniform over [0, 255].
End of explanation
"""
fig_bl = plt.figure(figsize=(10.5, 12), dpi=90)
gs = gridspec.GridSpec(2, 1, height_ratios=[2.5, 1])
axe_bl_heat = fig_bl.add_subplot(gs[0], title="Private key bit length vs keygen time")
axe_bl_hist = fig_bl.add_subplot(gs[1], sharex=axe_bl_heat, title="Private key bit length")
bl_data = np.array(list(map(lambda x: x.bit_length(), data_trimmed["priv"])), dtype=np.dtype("u2"))
h, xe, ye = np.histogram2d(bl_data, data_trimmed["time"], bins=[max(bl_data) - min(bl_data), hist_size_time_trim])
im = axe_bl_heat.imshow(h.T, origin="lower", cmap=cmap, aspect="auto", extent=[xe[0], xe[-1], ye[0], ye[-1]], norm=norm)
axe_bl_heat.set_xlabel("private key bit length")
axe_bl_heat.set_ylabel("key agreement time ({})".format(time_disp_unit))
plot_hist(axe_bl_hist, bl_data, "Private key bit length", log_scale, align="right")
fig_bl.tight_layout()
fig_bl.colorbar(im, ax=[axe_bl_heat, axe_bl_hist])
del bl_data
"""
Explanation: Private key bit length vs time heatmap
Also contains private key bit length histogram, which is expected to be axis flipped geometric distribution with $p = \frac{1}{2}$ peaking at the bit size of the order of the curve.
End of explanation
"""
fig_bl_time = plt.figure(figsize=(10.5, 5), dpi=90)
axe_bl_time = fig_bl_time.add_subplot(111)
axe_bl_time.set_autoscalex_on(False)
def f(center, width):
lower_bnd = center - width/2
upper_bnd = center + width/2
values = data_trimmed[np.logical_and(data_trimmed["time"] <= upper_bnd,
data_trimmed["time"] >= lower_bnd)]
axe_bl_time.clear()
axe_bl_time.set_title("Private key bit length, given key agreement time $\in ({}, {})$ {}".format(int(lower_bnd), int(upper_bnd), time_disp_unit))
bl_data = np.array(list(map(lambda x: x.bit_length(), values["priv"])), dtype=np.dtype("u2"))
plot_hist(axe_bl_time, bl_data, "private key bit length", bins=11, range=(bit_size-10, bit_size+1), align="left")
axe_bl_time.set_xlim((bit_size-10, bit_size))
fig_bl_time.tight_layout()
center_w = widgets.IntSlider(min=min(data_trimmed["time"]),
max=max(data_trimmed["time"]),
step=1,
value=description_trim.mean,
continuous_update=False,
description="center {}".format(time_disp_unit))
width_w = widgets.IntSlider(min=1, max=100, continuous_update=False,
description="width {}".format(time_disp_unit))
w = interactive(f, center=center_w,
width=width_w)
display(w)
"""
Explanation: Private key bit length histogram given time
Interactively shows the histogram of private key bit length, given a selected time range centered on center and of width width. Ideally, the means of these conditional distributions are equal, while the variances can vary.
End of explanation
"""
p_str = input("The prime specifying the finite field:")
p = int(p_str, 16) if p_str.startswith("0x") else int(p_str)
r_str = input("The order of the curve:")
r = int(r_str, 16) if r_str.startswith("0x") else int(r_str)
"""
Explanation: Validation
Perform some tests on the produced data and compare to expected results.
This requires some information about the used curve, enter it below.
End of explanation
"""
max_priv = max(data["priv"])
un = len(np.unique(data["priv"])) != 1
if un:
print("Private keys are smaller than order:\t\t\t" + str(max_priv < r))
print("Private keys are larger than prime(if order > prime):\t" + str(r <= p or max_priv > p))
print("Private keys reach full bit length of order:\t\t" + str(max_priv.bit_length() == r.bit_length()))
if un:
print("Private key bit length (min, max):" + str(min(data["priv"]).bit_length()) + ", " + str(max(data["priv"]).bit_length()))
"""
Explanation: All of the following tests should pass (i.e. be true), given a large enough sample.
End of explanation
"""
|
scoyote/RHealthDataImport
|
AllValues.ipynb
|
mit
|
import xml.etree.ElementTree as et
import pandas as pd
import numpy as np
from datetime import *
import matplotlib.pyplot as plt
import re
import os.path
import zipfile
import pytz
%matplotlib inline
plt.rcParams['figure.figsize'] = 16, 8
"""
Explanation: Download, Parse and Interrogate Apple Health Export Data
The first part of this program is all about getting the Apple Health export and putting it into an analyzable format. At that point it can be analysed anywhere. The second part of this program is concerned with using SAS Scripting Wrapper for Analytics Transfer (SWAT) Python library to transfer the data to SAS Viya, and analyze it there. The SWAT package provides native python language access to the SAS Viya codebase.
https://github.com/sassoftware/python-swat
This file was created from a desire to get my hands on data collected by Apple Health, notably heart rate information collected by Apple Watch. For this to work, this file needs to be in a location accessible to Python code. A little bit of searching told me that iCloud file access is problematic and that there were already a number of ways of doing this with the Google API if the file was saved to Google Drive. I chose PyDrive. So for the end to end program to work with little user intervention, you will need to sign up for Google Drive, set up an application in the Google API and install Google Drive app to your iPhone.
This may sound involved, and it is not necessary if you simply email the export file to yourself and copy it to a filesystem that Python can see. If you choose to do that, all of the Google Drive portion can be removed. I like the Google Drive process though as it enables a minimal manual work scenario.
This version requires the user to grant Google access, requiring some additional clicks, but it is not too much. I think it is possible to automate this to run without user intervention as well using security files.
The first step to enabling this process is exporting the data from Apple Health. As of this writing, open Apple Health and click on your user icon or photo. Near the bottom of the next page in the app will be a button or link called Export Health Data. Clicking on this will generate an XML file, zipped up. The next dialog will ask you where you want to save it. Options are to email, save to iCloud, message, etc. Select Google Drive. Google Drive allows multiple files with the same name and this is accounted for by this program.
End of explanation
"""
# Authenticate into Google Drive
from pydrive.auth import GoogleAuth
gauth = GoogleAuth()
gauth.LocalWebserverAuth()
"""
Explanation: Authenticate with Google
This will open a browser to let you begin the process of authentication with an existing Google Drive account. This process will be separate from Python. For this to work, you will need to set up an Other Authentication OAuth credential at https://console.developers.google.com/apis/credentials, save the secret file in your root directory and a few other things that are detailed at https://pythonhosted.org/PyDrive/. The PyDrive instructions also show you how to set up your Google application. There are other methods for accessing the Google API from python, but this one seems pretty nice.
The first time through the process, regular sign in and two factor authentication is required (if you require two factor auth) but after that it is just a process of telling Google that it is ok for your Google application to access Drive.
End of explanation
"""
from pydrive.drive import GoogleDrive
drive = GoogleDrive(gauth)
file_list = drive.ListFile({'q': "'root' in parents and trashed=false"}).GetList()
# Step through the file list and find the most current export.zip file id, then use
# that later to download the file to the local machine.
# This may look a little old school, but these file lists will never be massive, and
# it is a readable, single-pass way to get the most current file using a
# minimal amount of resources.
selection_dt = datetime.strptime("2000-01-01T01:01:01.001Z","%Y-%m-%dT%H:%M:%S.%fZ")
print("Matching Files")
for file1 in file_list:
if re.search("^export-*\d*.zip",file1['title']):
dt = datetime.strptime(file1['createdDate'],"%Y-%m-%dT%H:%M:%S.%fZ")
if dt > selection_dt:
selection_id = file1['id']
selection_dt = dt
print(' title: %s, id: %s createDate: %s' % (file1['title'], file1['id'], file1['createdDate']))
if not os.path.exists('healthextract'):
os.mkdir('healthextract')
"""
Explanation: Download the most recent Apple Health export file
Now that we are authenticated into Google Drive, use PyDrive to access the API and get to files stored.
Google Drive allows multiple files with the same name, but it indexes them with the ID to keep them separate.
In this block, we make one pass of the file list where the file name is called export.zip, and save the row that corresponds with the most recent date. We will use that file id later to download the correct file that corresponds with the most recent date. Apple Health export names the file export.zip, and at the time this was written, there is no other option.
End of explanation
"""
for file1 in file_list:
if file1['id'] == selection_id:
print('Downloading this file: %s, id: %s createDate: %s' % (file1['title'], file1['id'], file1['createdDate']))
file1.GetContentFile("healthextract/export.zip")
"""
Explanation: Download the file from Google Drive
Ensure that the file downloaded is the latest file generated
End of explanation
"""
zip_ref = zipfile.ZipFile('healthextract/export.zip', 'r')
zip_ref.extractall('healthextract')
zip_ref.close()
"""
Explanation: Unzip the most current file to a holding directory
End of explanation
"""
path = "/Users/samuelcroker/Documents/repositories/RHealthDataImport/healthextract/apple_health_export/export.xml"
e = et.parse(path)
#this was from an older iPhone, to demonstrate how to join files
#legacy = et.parse("/Users/samuelcroker/Documents/repositories/RHealthDataImport/healthextract/apple_health_export/export.xml")
"""
Explanation: Parse Apple Health Export document
End of explanation
"""
pd.Series([el.tag for el in e.iter()]).value_counts()
"""
Explanation: List XML headers by element count
End of explanation
"""
pd.Series([atype.get('type') for atype in e.findall('Record')]).value_counts()
"""
Explanation: List types for "Record" Header
End of explanation
"""
import pytz
#Extract the heartrate values, and get a timestamp from the xml
# there is likely a more efficient way, though this is very fast
def txloc(xdate,fmt):
eastern = pytz.timezone('US/Eastern')
dte = xdate.astimezone(eastern)
return datetime.strftime(dte,fmt)
def xmltodf(eltree, element,outvaluename):
dt = []
v = []
for atype in eltree.findall('Record'):
if atype.get('type') == element:
dt.append(datetime.strptime(atype.get("startDate"),"%Y-%m-%d %H:%M:%S %z"))
v.append(float(atype.get("value")))
myd = pd.DataFrame({"Create":dt,outvaluename:v})
colDict = {"Year":"%Y","Month":"%Y-%m", "Week":"%Y-%U","Day":"%d","Hour":"%H","Days":"%Y-%m-%d","Month-Day":"%m-%d"}
for col, fmt in colDict.items():
myd[col] = myd['Create'].dt.tz_convert('US/Eastern').dt.strftime(fmt)
myd[outvaluename] = myd[outvaluename].astype(float) #.astype(int)
print('Extracting ' + outvaluename + ', type: ' + element)
return(myd)
HR_df = xmltodf(e,"HKQuantityTypeIdentifierHeartRate","HeartRate")
HA_df = xmltodf(e,"HKQuantityTypeIdentifierEnvironmentalAudioExposure","EnvAudio")
EX_df = xmltodf(e,"HKQuantityTypeIdentifierAppleExerciseTime","Extime")
SPO2_df = xmltodf(e,"HKQuantityTypeIdentifierOxygenSaturation","SPO2")
HR_df
#reset plot - just for tinkering
plt.rcParams['figure.figsize'] = 60, 8
HR_df.boxplot(by='Month',column="HeartRate", return_type='axes')
plt.grid(axis='x')
plt.title('All Months')
plt.ylabel('Heart Rate')
plt.ylim(40,140)
dx = HR_df[HR_df['Year']=='2019'].boxplot(by='Week',column="HeartRate", return_type='axes')
plt.title('All Weeks')
plt.ylabel('Heart Rate')
plt.xticks(rotation=90)
plt.grid(axis='x')
[plt.axvline(_x, linewidth=1, color='blue') for _x in [10,12]]
plt.ylim(40,140)
monthval = '2019-10'
#monthval1 = '2017-09'
#monthval2 = '2017-10'
#HR_df[(HR_df['Month']==monthval1) | (HR_df['Month']== monthval2)].boxplot(by='Month-Day',column="HeartRate", return_type='axes')
HR_df[HR_df['Month']==monthval].boxplot(by='Month-Day',column="HeartRate", return_type='axes')
plt.grid(axis='x')
plt.rcParams['figure.figsize'] = 16, 8
plt.title('Daily for Month: '+ monthval)
plt.ylabel('Heart Rate')
plt.xticks(rotation=90)
plt.ylim(40,140)
HR_df[HR_df['Month']==monthval].boxplot(by='Hour',column="HeartRate")
plt.title('Hourly for Month: '+ monthval)
plt.ylabel('Heart Rate')
plt.grid(axis='x')
plt.ylim(40,140)
"""
Explanation: Extract Values to Data Frame
TODO: Abstraction of the next code block
End of explanation
"""
# This isn't efficient yet, just a first swipe. It functions as intended.
def getDelta(res,ttp,cyclelength):
mz = [x if (x >= 0) & (x < cyclelength) else 999 for x in res]
if ttp == 0:
return(mz.index(min(mz))+1)
else:
return(mz[mz.index(min(mz))])
#chemodays = np.array([date(2017,4,24),date(2017,5,16),date(2017,6,6),date(2017,8,14)])
chemodays = np.array([date(2018,1,26),date(2018,2,2),date(2018,2,9),date(2018,2,16),date(2018,2,26),date(2018,3,2),date(2018,3,19),date(2018,4,9),date(2018,5,1),date(2018,5,14),date(2018,6,18),date(2018,7,10),date(2018,8,6)])
HR_df = xmltodf(e,"HKQuantityTypeIdentifierHeartRate","HeartRate")
# I don't think this is efficient yet...
a = HR_df['Create'].apply(lambda x: [x.days for x in x.date()-chemodays])
HR_df['ChemoCycle'] = a.apply(lambda x: getDelta(x,0,21))
HR_df['ChemoDays'] = a.apply(lambda x: getDelta(x,1,21))
import seaborn as sns
plotx = HR_df[HR_df['ChemoDays']<=21]
plt.rcParams['figure.figsize'] = 24, 8
ax = sns.boxplot(x="ChemoDays", y="HeartRate", hue="ChemoCycle", data=plotx, palette="Set2",notch=1,whis=0,width=0.75,showfliers=False)
plt.ylim(65,130)
#the next statement puts the chemodays variable as a rowname, we need to fix that
plotx_med = plotx.groupby('ChemoDays').median()
#this puts chemodays back as a column in the frame. I need to see if there is a way to prevent the effect
plotx_med.index.name = 'ChemoDays'
plotx_med.reset_index(inplace=True)
snsplot = sns.pointplot(x='ChemoDays', y="HeartRate", data=plotx_med,color='Gray')
"""
Explanation: import calmap
ts = pd.Series(HR_df['HeartRate'].values, index=HR_df['Days'])
ts.index = pd.to_datetime(ts.index)
tstot = ts.groupby(ts.index).median()
plt.rcParams['figure.figsize'] = 16, 8
import warnings
warnings.simplefilter(action='ignore', category=FutureWarning)
calmap.yearplot(data=tstot,year=2017)
Flag Chemotherapy Days for specific analysis
The next two cells provide the ability to introduce cycles that start on specific days and include this data in the datasets so that they can be overlaid in graphics. In the example below, the cycles are 21 days long. The getDelta function returns the cycle number when ttp == 0 and the days since the start of that cycle when ttp is nonzero (1 is passed below). This allows the cycles to be overlaid, aligned on the days since day 0.
End of explanation
"""
import seaborn as sns
sns.set(style="ticks", palette="muted", color_codes=True)
sns.boxplot(x="Month", y="HeartRate", data=HR_df,whis=np.inf, color="c")
# Add in points to show each observation
snsplot = sns.stripplot(x="Month", y="HeartRate", data=HR_df,jitter=True, size=1, alpha=.15, color=".3", linewidth=0)
hr_only = HR_df[['Create','HeartRate']]
hr_only.tail()
hr_only.to_csv('~/Downloads/stc_hr.csv')
"""
Explanation: Boxplots Using Seaborn
End of explanation
"""
|
cmshobe/landlab
|
notebooks/tutorials/flow_direction_and_accumulation/compare_FlowDirectors.ipynb
|
mit
|
%matplotlib inline
# import plotting tools
from mpl_toolkits.mplot3d import Axes3D
import matplotlib.pyplot as plt
from matplotlib import cm
from matplotlib.ticker import LinearLocator, FormatStrFormatter
import matplotlib as mpl
# import numpy
import numpy as np
# import necessary landlab components
from landlab import RasterModelGrid, HexModelGrid
from landlab.components import FlowAccumulator
from landlab.components import(FlowDirectorD8,
FlowDirectorDINF,
FlowDirectorMFD,
FlowDirectorSteepest)
# import landlab plotting functionality
from landlab.plot.drainage_plot import drainage_plot
# create a plotting routine to make a 3d plot of our surface.
def surf_plot(mg, surface='topographic__elevation', title='Surface plot of topography'):
fig = plt.figure()
    ax = fig.add_subplot(111, projection='3d')
# Plot the surface.
Z = mg.at_node[surface].reshape(mg.shape)
color = cm.gray((Z-Z.min())/(Z.max()-Z.min()))
surf = ax.plot_surface(mg.x_of_node.reshape(mg.shape), mg.y_of_node.reshape(mg.shape),
Z,
rstride=1, cstride=1,
facecolors=color,
linewidth=0.,
antialiased=False)
ax.view_init(elev=35, azim=-120)
ax.set_xlabel('X axis')
ax.set_ylabel('Y axis')
ax.set_zlabel('Elevation')
plt.title(title)
plt.show()
"""
Explanation: <a href="http://landlab.github.io"><img style="float: left" src="../../landlab_header.png"></a>
Comparison of FlowDirectors
Introduction
Landlab's topographic flow-routing capability directs flow and accumulates it using two types of components:
FlowDirectors use the topography to determine how flow moves between adjacent nodes. For every node in the grid it determines the node(s) to receive flow, and the proportion of flow to send from one node to its receiver(s).
The FlowAccumulator uses the direction and proportion of flow moving between each node and (optionally) water runoff to calculate drainage area and discharge.
The FlowDirectors are method-specific. Presently landlab supports four different methods for determining flow direction:
FlowDirectorSteepest Flow is routed to only one node. The algorithm considers the link slopes leaving from each node and chooses the steepest downhill link to route flow along. In the case of a raster grid, only the links are considered (Landlab differentiates between links, which never cross and are located at North, South, East, and West on a raster grid, and diagonals which cross and are located at North East, North West, South East, and South West). For raster grids, this method is also known as D4 flow routing. In the case of irregular grids, all links originating from a node are considered.
FlowDirectorD8 (raster only) Flow is only routed to one node but diagonals are also considered.
FlowDirectorMFD Flow is directed to all nodes that are located downhill of the source node. In the case of a raster grid, diagonals can be included using the keyword diagonals=True. Flow is partitioned between receiver nodes based on the relative slope along the links leading to the receiver nodes. The default method for partitioning is based on the sum of receiver slopes (partition_method='slope'). Partitioning can also be done on the basis of the square root of slope, which gives the result of a steady kinematic wave (partition_method='square_root_of_slope').
FlowDirectorDINF (raster only) Flow is directed to two cells based on the slope of the triangular facets that can be defined between a node and its neighbors. The steepest downhill facet is chosen and then flow is partitioned between the receiver nodes at the bottom of that facet based on the relative slopes along the facet-bounding links. (The method, known as "D-infinity", is described by Tarboton (1997, Water Resources Research, 33(2), 309-319)).
In this tutorial we will go over more detailed examples that contrast the differences between each flow-direction algorithm. For information about how to initialize and run a FlowDirector or the FlowAccumulator, refer to the other tutorials in this section.
First, we import the necessary python modules and make a small plotting routine.
End of explanation
"""
mg1 = RasterModelGrid((10,10))
_ = mg1.add_field('topographic__elevation',
mg1.y_of_node,
at='node')
surf_plot(mg1, title='Grid 1: A basic ramp')
mg2 = RasterModelGrid((10,10))
_ = mg2.add_field('topographic__elevation',
mg2.x_of_node + 2.*mg2.y_of_node,
at = 'node')
surf_plot(mg2, title='Grid 2: A ramp inclined in X and in Y')
mg3 = RasterModelGrid((10,10))
_ = mg3.add_field('topographic__elevation',
mg3.x_of_node**2 + mg3.y_of_node**2 + mg3.y_of_node,
at='node')
surf_plot(mg3, title='Grid 3: A more complicated surface')
"""
Explanation: 3 topographic grids
For this tutorial we will consider three different topographic surfaces that highlight the difference between each of the flow direction algorithms.
End of explanation
"""
mg1a = RasterModelGrid((10,10))
_ = mg1a.add_field('topographic__elevation',
mg1a.y_of_node,
at='node')
fd1a = FlowDirectorSteepest(mg1a, 'topographic__elevation')
fd1a.run_one_step()
plt.figure()
drainage_plot(mg1a, title='Basic Ramp using FlowDirectorSteepest')
"""
Explanation: Comparing the different methods for each grid
We can illustrate the receiver node FlowDirectionSteepest has assigned to each donor node using a plotting function in Landlab called drainage_plot. We will see many of these plots in this tutorial so let's take a moment to walk through the plot and what it contains.
The background image (white to black) shows the values of topographic elevation of the underlying surface.
The color of the dots inside of each pixel show the locations of the nodes and the type of node.
The arrows show the direction of flow, and the color shows the proportion of flow that travels along that link.
An X on top of a node indicates that node is a local sink and flows to itself.
Note that in Landlab, boundary nodes (nodes that are on the edge of a grid) do not have area and do not contribute flow to other nodes. These nodes can either be Fixed Gradient Nodes, Fixed Value Nodes, or Closed Nodes. With the exception of Closed Nodes, the boundary nodes can receive flow.
An important step in all flow direction and accumulation is setting the proper boundary condition. Refer to the boundary condition tutorial for more information.
Grid 1: Basic Ramp
As with the Introduction to Flow Director tutorial, let's start with the basic ramp.
End of explanation
"""
mg1b = RasterModelGrid((10,10))
_ = mg1b.add_field('topographic__elevation',
mg1b.y_of_node,
at='node')
fd1b = FlowDirectorD8(mg1b)
fd1b.run_one_step()
plt.figure()
drainage_plot(mg1b, title='Basic Ramp using FlowDirectorD8')
"""
Explanation: Reassuringly we can see that the flow is being sent from high elevations at the top of the grid to low elevations at the bottom of the grid. We can also see that all of the arrows are yellow, and thus all of the flow is traveling on these links.
Now let's see how the other FlowDirectors direct the flow on this simple grid. We don't need to specify the surface so long as it is the field 'topographic__elevation'.
End of explanation
"""
mg1c = RasterModelGrid((10,10))
_ = mg1c.add_field('topographic__elevation',
mg1c.y_of_node,
at='node')
fd1c = FlowDirectorMFD(mg1c, diagonals=False) # diagonals=False is the default option
fd1c.run_one_step()
plt.figure()
drainage_plot(mg1c, title='Basic Ramp using FlowDirectorMFD without diagonals')
"""
Explanation: For this ramp, the steepest slope is down a link, and not a diagonal, so FlowDirectorD8 gives the same result as FlowDirectorSteepest.
End of explanation
"""
mg1d = RasterModelGrid((10,10))
_ = mg1d.add_field('topographic__elevation',
mg1d.y_of_node,
at='node')
fd1d = FlowDirectorMFD(mg1d, diagonals=True)
fd1d.run_one_step()
plt.figure()
drainage_plot(mg1d, title='Basic Ramp using FlowDirectorMFD with diagonals')
"""
Explanation: Similarly, while there is more than one node below each core node, there is only one node that is connected by a link and not a diagonal. Thus FlowDirectorMFD with the keyword diagonals set to False provides the same results as FlowDirectorSteepest and FlowDirectorD8
End of explanation
"""
mg1e = RasterModelGrid((10,10))
_ = mg1e.add_field('topographic__elevation',
mg1e.y_of_node,
at='node')
fd1e = FlowDirectorDINF(mg1e)
fd1e.run_one_step()
plt.figure()
drainage_plot(mg1e, title='Basic Ramp using FlowDirectorDINF')
"""
Explanation: When we permit flow along diagonal connections between nodes and flow to all downhill nodes, we see a difference in the directing pattern on this simple ramp. The flow is partitioned between the three downhill nodes, and there is more flow being sent to along the link as compared with the diagonals (the links are a lighter color blue than the diagonals).
One issue we might have with the results from FlowDirectorMFD in this case is that the flow on the diagonals crosses. This is one of the problems with using diagonal connections between nodes.
End of explanation
"""
mg2a = RasterModelGrid((10,10))
_ = mg2a.add_field('topographic__elevation',
mg2a.x_of_node + 2.*mg2a.y_of_node,
at = 'node')
fd2a = FlowDirectorSteepest(mg2a, 'topographic__elevation')
fd2a.run_one_step()
plt.figure()
drainage_plot(mg2a, title='Grid 2 using FlowDirectorSteepest')
"""
Explanation: In FlowDirectorDINF flow is partitioned to two nodes based on steepness of the eight triangular facets surrounding each node. The partitioning is based on the relation between the link and diagonal slope that form the edge of the facet and the slope of the facet itself. When one of the facet edges has the same slope as the facet, as is the case in this ramp example, all of the flow is partitioned along that edge.
Grid 2: Inclined plane in two dimentions
Next let's look at all the flow directors but with the inclined plane. Recall that this plane is tilted in both X and Y axes, and that is tilted more steeply in the Y direction.
End of explanation
"""
mg2b = RasterModelGrid((10,10))
_ = mg2b.add_field('topographic__elevation',
mg2b.x_of_node + 2.*mg2b.y_of_node,
at='node')
fd2b = FlowDirectorD8(mg2b)
fd2b.run_one_step()
plt.figure()
drainage_plot(mg2b, title='Grid 2 using FlowDirectorD8')
"""
Explanation: Flow is directed down parallel to the Y-axis of the plane. This makes sense in the context of the FlowDirectorSteepest algorithm; it only sends flow to one node, so in an idealized geometry such as the plane in this example, it provides a flow direction pattern that is not realistic.
As we will discuss throughout this tutorial, there are benefits and drawbacks to each FlowDirector algorithm.
End of explanation
"""
mg2c = RasterModelGrid((10,10))
_ = mg2c.add_field('topographic__elevation',
mg2c.x_of_node + 2.*mg2c.y_of_node,
at = 'node')
fd2c = FlowDirectorMFD(mg2c, diagonals=False) # diagonals=False is the default option
fd2c.run_one_step()
plt.figure()
drainage_plot(mg2c, title='Grid 2 using FlowDirectorMFD without diagonals')
"""
Explanation: FlowDirectorD8 considers the diagonal connections between nodes. As the plane is inclined to the southwest, the flow direction looks better here, though as we will see later, FlowDirectorD8 can also produce unrealistic flow directions.
End of explanation
"""
mg2d = RasterModelGrid((10,10))
_ = mg2d.add_field('topographic__elevation',
mg2d.x_of_node + 2.*mg2d.y_of_node,
at='node')
fd2d = FlowDirectorMFD(mg2d, diagonals=True)
fd2d.run_one_step()
plt.figure()
drainage_plot(mg2d, title='Grid 2 using FlowDirectorMFD with diagonals')
"""
Explanation: As FlowDirectorMFD can send flow to all of the downhill nodes, it doesn't have the same problem that FlowDirectorSteepest had. Because the plane is tilted down more steeply to the south than to the east, it sends more flow on the steeper link.
End of explanation
"""
mg2e = RasterModelGrid((10,10))
_ = mg2e.add_field('topographic__elevation',
mg2e.x_of_node + 2.*mg2e.y_of_node,
at='node')
fd2e = FlowDirectorDINF(mg2e)
fd2e.run_one_step()
plt.figure()
drainage_plot(mg2e, title='Grid 2 using FlowDirectorDINF')
"""
Explanation: When FlowDirectorMFD considers diagonals in addition to links, we see that it sends the flow to four nodes instead of three. While all of the receiver nodes are downhill from their donor nodes, we see again that using diagonals permits flow to cross itself. We also see that most of the flow is routed to the south and the southwest, which makes sense based on how the plane is tilted.
End of explanation
"""
mg3a = RasterModelGrid((10,10))
_ = mg3a.add_field('topographic__elevation',
mg3a.x_of_node**2 + mg3a.y_of_node**2 + mg3a.y_of_node,
at = 'node')
fd3a = FlowDirectorSteepest(mg3a, 'topographic__elevation')
fd3a.run_one_step()
plt.figure()
drainage_plot(mg3a, title='Grid 3 using FlowDirectorSteepest')
"""
Explanation: Here FlowDirectorDINF routes flow in two directions, to the south and southwest. The plane is steeper from north to south than from east to west, and the direction of steepest descent falls between the south link and the southwest diagonal, so flow is split between them, with the larger share going along the diagonal.
Grid 3: Curved surface
Finally, let's consider our curved surface.
End of explanation
"""
mg3b = RasterModelGrid((10,10))
_ = mg3b.add_field('topographic__elevation',
mg3b.x_of_node**2 + mg3b.y_of_node**2 + mg3b.y_of_node,
at = 'node')
fd3b = FlowDirectorD8(mg3b)
fd3b.run_one_step()
plt.figure()
drainage_plot(mg3b, title='Grid 3 using FlowDirectorD8')
"""
Explanation: Flow on this surface using FlowDirectorSteepest looks realistic, as flow is routed down into the bottom of the curved surface.
End of explanation
"""
mg3c = RasterModelGrid((10,10))
_ = mg3c.add_field('topographic__elevation',
mg3c.x_of_node**2 + mg3c.y_of_node**2 + mg3c.y_of_node,
at='node')
fd3c = FlowDirectorMFD(mg3c, diagonals=False) # diagonals=False is the default option
fd3c.run_one_step()
plt.figure()
drainage_plot(mg3c, title='Grid 3 using FlowDirectorMFD without diagonals')
"""
Explanation: Near the bottom left of the grid, the steepest descent is on a diagonal, so using FlowDirectorD8 gives a different drainage pattern.
End of explanation
"""
mg3d = RasterModelGrid((10,10))
_ = mg3d.add_field('topographic__elevation',
mg3d.x_of_node**2 + mg3d.y_of_node**2 + mg3d.y_of_node,
at='node')
fd3d = FlowDirectorMFD(mg3d, diagonals=True)
fd3d.run_one_step()
plt.figure()
drainage_plot(mg3d, title='Grid 3 using FlowDirectorMFD with diagonals')
"""
Explanation: Permitting multiple receivers, with and without diagonals, gives two additional, different drainage patterns.
End of explanation
"""
mg3e = RasterModelGrid((10,10))
_ = mg3e.add_field('topographic__elevation',
mg3e.x_of_node**2 + mg3e.y_of_node**2 + mg3e.y_of_node,
at='node')
fd3e = FlowDirectorDINF(mg3e)
fd3e.run_one_step()
plt.figure()
drainage_plot(mg3e, title='Grid 3 using FlowDirectorDINF')
"""
Explanation: Again we see flow paths crossing when we permit consideration of flow along the diagonals.
End of explanation
"""
from landlab.components import FlowAccumulator
mg3 = RasterModelGrid((10,10))
_ = mg3.add_field('topographic__elevation',
mg3.x_of_node**2 + mg3.y_of_node**2 + mg3.y_of_node,
at='node')
fa = FlowAccumulator(mg3, 'topographic__elevation', flow_director='Steepest')
fa.run_one_step()
plt.figure()
drainage_plot(mg3, 'drainage_area', title='Flow Accumulation using FlowDirectorSteepest')
"""
Explanation: Finally we see yet a different drainage pattern when we use FlowDirectorDINF and flow is routed along an adjacent diagonal-link pair.
Comparison of Accumulated Area
Before concluding, let's examine the accumulated drainage area using each of the FlowDirector methods and the third grid. For an introduction to creating and running a FlowAccumulator see the tutorial "Introduction to Flow Accumulators".
Often we do flow routing and accumulation because we want to use the accumulated area as a proxy for the water discharge. So the details of how the flow is routed are important because they influence how the drainage area pattern evolves.
Let's begin with FlowDirectorSteepest.
End of explanation
"""
fa = FlowAccumulator(mg3, 'topographic__elevation', flow_director='D8')
fa.run_one_step()
plt.figure()
drainage_plot(mg3, 'drainage_area', title='Flow Accumulation using FlowDirectorD8')
"""
Explanation: Here we see that flow has accumulated into one channel in the bottom of the curved surface.
End of explanation
"""
mg3 = RasterModelGrid((10,10))
_ = mg3.add_field('topographic__elevation',
mg3.x_of_node**2 + mg3.y_of_node**2 + mg3.y_of_node,
at='node')
fa = FlowAccumulator(mg3, 'topographic__elevation', flow_director='MFD')
fa.run_one_step()
plt.figure()
drainage_plot(mg3, 'drainage_area', title='Flow Accumulation using FlowDirectorMFD without diagonals')
"""
Explanation: When diagonals are considered, as in FlowDirectorD8, the drainage pattern looks very different. Instead of one channel we have two smaller channels.
End of explanation
"""
mg3 = RasterModelGrid((10,10))
_ = mg3.add_field('topographic__elevation',
mg3.x_of_node**2 + mg3.y_of_node**2 + mg3.y_of_node,
at='node')
fa = FlowAccumulator(mg3, 'topographic__elevation', flow_director='MFD', diagonals=True)
fa.run_one_step()
plt.figure()
drainage_plot(mg3, 'drainage_area', title='Flow Accumulation using FlowDirectorMFD with diagonals')
"""
Explanation: Flow is distributed much more when we use FlowDirectorMFD.
End of explanation
"""
mg3 = RasterModelGrid((10,10))
_ = mg3.add_field('topographic__elevation',
mg3.x_of_node**2 + mg3.y_of_node**2 + mg3.y_of_node,
at='node')
fa = FlowAccumulator(mg3, 'topographic__elevation', flow_director='DINF')
fa.run_one_step()
plt.figure()
drainage_plot(mg3, 'drainage_area', title='Flow Accumulation using FlowDirectorDINF')
"""
Explanation: Adding diagonals to FlowDirectorMFD gives a channel somewhat similar to the one created by FlowDirectorSteepest but much more distributed.
End of explanation
"""
|
tritemio/pybroom
|
doc/notebooks/pybroom-example.ipynb
|
mit
|
import numpy as np
from numpy import sqrt, pi, exp, linspace
from lmfit import Model
import matplotlib.pyplot as plt
%matplotlib inline
%config InlineBackend.figure_format='retina' # for hi-dpi displays
import lmfit
print('lmfit: %s' % lmfit.__version__)
import pybroom as br
"""
Explanation: PyBroom Example - Simple
This notebook is part of pybroom.
This notebook shows the simplest usage of pybroom when performing
a curve fit of a single dataset. Possible applications are only hinted at.
For a more complex (and interesting!) example using multiple datasets see
pybroom-example-multi-datasets.
End of explanation
"""
x = np.linspace(-10, 10, 101)
peak1 = lmfit.models.GaussianModel(prefix='p1_')
peak2 = lmfit.models.GaussianModel(prefix='p2_')
model = peak1 + peak2
params = model.make_params(p1_amplitude=1, p2_amplitude=1,
p1_sigma=1, p2_sigma=1)
y_data = model.eval(x=x, p1_center=-1, p2_center=2, p1_sigma=0.5, p2_sigma=1, p1_amplitude=1, p2_amplitude=2)
y_data.shape
y_data += np.random.randn(*y_data.shape)/10
plt.plot(x, y_data)
"""
Explanation: Create Noisy Data
End of explanation
"""
params = model.make_params(p1_center=0, p2_center=3,
p1_sigma=0.5, p2_sigma=1,
p1_amplitude=1, p2_amplitude=2)
result = model.fit(y_data, x=x, params=params)
"""
Explanation: Model Fitting
End of explanation
"""
print(result.fit_report())
result.params.pretty_print()
"""
Explanation: Fit result from an lmfit Model can be inspected with
fit_report or params.pretty_print():
End of explanation
"""
dg = br.glance(result)
dg.drop('model', 1).drop('message', 1)
"""
Explanation: These methods are convenient, but extracting the data
from the lmfit object requires some work and knowledge
of the lmfit object structure.
pybroom comes to help, extracting data from fit results and
returning pandas DataFrame in tidy format that can be
much more easily manipulated, filtered and plotted.
Glance
Glancing at the fit results (dropping some verbose columns):
End of explanation
"""
dt = br.tidy(result)
dt
"""
Explanation: The glance
function returns a DataFrame with one row per fit-result object.
Application Idea
If you fit N models to the same dataset
you can compare statistics such as reduced-$\chi^2$.
Or, fitting with several methods (and datasets), you
can study the convergence properties using reduced-$\chi^2$,
number of function evaluations and success rate.
Tidy
Tidy fit results for all the parameters:
End of explanation
"""
dt.loc[dt.name == 'p1_center']
"""
Explanation: The tidy function returns one row for each parameter.
End of explanation
"""
da = br.augment(result)
da.head()
"""
Explanation: Augment
Tidy DataFrame with the data as a function of the independent variable ('x'). Columns include
the data being fitted, the best fit, the best-fit components, residuals, etc.
End of explanation
"""
d = br.augment(result)
fig, ax = plt.subplots(2, 1, figsize=(7, 8))
ax[1].plot('x', 'data', data=d, marker='o', ls='None')
ax[1].plot('x', "Model(gaussian, prefix='p1_')", data=d, lw=2, ls='--')
ax[1].plot('x', "Model(gaussian, prefix='p2_')", data=d, lw=2, ls='--')
ax[1].plot('x', 'best_fit', data=d, lw=2)
ax[0].plot('x', 'residual', data=d);
"""
Explanation: The augment function returns one row for each data point.
End of explanation
"""
|
RNAer/Calour
|
doc/source/notebooks/microbiome_databases.ipynb
|
bsd-3-clause
|
import calour as ca
ca.set_log_level(11)
%matplotlib notebook
"""
Explanation: Calour microbiome databases interface tutorial
Setup
End of explanation
"""
cfs=ca.read_amplicon('data/chronic-fatigue-syndrome.biom',
'data/chronic-fatigue-syndrome.sample.txt',
normalize=10000,min_reads=1000)
"""
Explanation: Load the data
We will use the chronic fatigue syndrome data from:
Giloteaux, L., Goodrich, J.K., Walters, W.A., Levine, S.M., Ley, R.E. and Hanson, M.R., 2016.
Reduced diversity and altered composition of the gut microbiome in individuals with myalgic encephalomyelitis/chronic fatigue syndrome.
Microbiome, 4(1), p.30.
End of explanation
"""
cfs=cfs.filter_abundance(10)
cfs=cfs.cluster_features()
cfs=cfs.sort_samples('Subject')
"""
Explanation: preprocess
remove non-interesting bacteria, cluster bacteria and sort samples by disease status
End of explanation
"""
cfs.plot(sample_field='Subject',gui='jupyter')
"""
Explanation: Viewing database annotations
in the interactive heatmap, when clicking on a bacteria, we get a list of all database results about the selected bacteria.
We can choose which databases to use by the databases=['dbbact',...] parameter. The possible databases depend on which database modules were installed.
Currently, supported microbiome database interfaces include:
dbBact - a community database for manual annotations about bacteria (interface installation instruction at dbbact-calour).
SpongeEMP - an automatic database for sea sponge samples (interface installation instruction at spongeworld-calour).
phenoDB - phenotypic information about selected bacteria (interface installation instruction at pheno-calour).
By default, calour uses the dbBact database for microbiome data
End of explanation
"""
dd=cfs.diff_abundance(field='Subject',val1='Control',val2='Patient', random_seed=2018)
"""
Explanation: dbBact enrichment of selected bacteria
By selecting a set of bacteria (using the shift+click or ctrl+click) and choosing the "Enrichment" button, we can get a list of terms that are significantly enriched in the selected bacteria compared to the rest of the bacteria in the plot
Adding dbBact annotations
(Only possible using the gui='qt5' GUI)
To add a new annotation to the selected set of bacteria, choose the "Annotate" button.
Detailed instructions are available at the dbBact.org website.
Differential abundance
To find the bacteria significantly different between samples with 'Control' (healthy) and 'Patient' (sick) in the 'Subject' field.
End of explanation
"""
dd.plot(sample_field='Subject', gui='jupyter', databases=['dbbact','sponge'],bary_fields=['_calour_direction'])
"""
Explanation: Plot the significant bacteria
When clicking on a bacteria, we'll get dbBact, SpongeEMP, and phenoDB information
End of explanation
"""
ax, enriched=dd.plot_diff_abundance_enrichment()
"""
Explanation: dbBact term enrichment (diff_abundance_enrichment)
We can ask what is special about the bacteria significantly higher in the Control vs. the Patient group and vice versa.
Note since we need to get the per-feature annotations from dbBact, we need a live internet connection to run this command.
Default parameters
End of explanation
"""
enriched.feature_metadata
"""
Explanation: The enriched terms are in a calour experiment class (terms are features, bacteria are samples), so we can see the
list of enriched terms with the p-value (pval) and effect size (odif)
End of explanation
"""
enriched.plot(gui='jupyter', databases=[], feature_field='term',sample_field='group',
yticklabel_kwargs={'rotation': 0, 'size': 7})
"""
Explanation: We can plot the enriched terms heatmap to see the term scores for each bacteria.
Note now rows are the bacteria and columns are the terms
End of explanation
"""
dbbact=ca.database._get_database_class('dbbact')
term_info_exp = dbbact.show_term_details_diff('small village',dd,gui='jupyter')
"""
Explanation: Look at the behavior of a single term
We want to see all the annotations where a given term appears, and see what bacteria from either group (CFS or healthy) appear in that annotations.
To do this, we use dbbact.show_term_details_diff(). The output of this function is an experiment where each COLUMN is a bacteria, and each row is an annotation. We see whether each bacteria appears in the annotation. Color indicates the annotation type.
End of explanation
"""
ax, enriched=dd.plot_diff_abundance_enrichment(term_type='annotation')
enriched.feature_metadata
"""
Explanation: getting enriched annotations instead of terms
Each annotation is coming from a single experiment (as opposed to terms that can come from annotations in multiple experiment)
End of explanation
"""
ax, enriched=dd.plot_diff_abundance_enrichment(term_type='combined')
enriched.feature_metadata
"""
Explanation: Getting both enriched terms and annotations
End of explanation
"""
ax, enriched=dd.plot_diff_abundance_enrichment(term_type='combined', ignore_exp=[12])
"""
Explanation: Ignoring selected experiments already in dbBact
If our experiment is already in dbBact, or if there are other experiments in dbBact we do not want to include in the enrichment analysis, we can specify them using the ignore_exp=[expID,...] parameter.
In our case, the cfs experiment has already been added to dbBact, so let's ignore its annotations when doing the analysis. By looking at dbBact.org we know its experimentID is 12. Alternatively we can use ignore_exp=True to automatically detect the current experimentID if it exists in dbBact (using the data and mapping file md5 hash).
End of explanation
"""
cfs=cfs.add_terms_to_features(dbname='dbbact',use_term_list=['feces','saliva','skin','mus musculus'])
tt=cfs.sort_by_metadata('common_term',axis='feature')
tt.plot(sample_field='Subject', bary_fields=['common_term'], gui='jupyter')
"""
Explanation: Adding common dbBact terms to features (add_terms_to_features)
We can attach to each bacteria the most common dbBact term associated with it.
The terms are selected from all of the dbBact terms, or can be selected from a supplied list.
End of explanation
"""
dbbact=ca.database._get_database_class('dbbact')
enriched=dbbact.sample_enrichment(cfs,'Subject','Control','Patient',
term_type='combined',ignore_exp=[12])
enriched.feature_metadata
"""
Explanation: Get enriched terms using all bacteria
Instead of just comparing the bacteria enriched in the two groups (and then comparing terms between them), we can do a weighted term average for each group using all bacteria (weighing the terms of each bacteria by its frequency in the sample). This can work if we don't have a strong set of bacteria separating the two groups.
End of explanation
"""
|
kpn-advanced-analytics/modelFactoryPy
|
Template/Template_Aster.ipynb
|
mit
|
registry.register('aster', 'sqlalchemy_mf_aster.jdbc', 'AsterDialect_jdbc')
main.getConnection('aster')
# this will also create main.engine variable
model_id = 'titanic_training'
#main.addModelId('titanic_training','Training on titanic data','passengerid')
main.getSessionId(model_id)
# this will also create main.session_id variable
"""
Explanation: 1. Get connection; define model_id, get session id
End of explanation
"""
df = pd.read_csv('../data/titanic.csv')
df.head()
"""
Explanation: 2. Load the data
The data could come from a PostgreSQL database as well -> you would use pd.read_sql (a short sketch follows below)
End of explanation
"""
summary = get.getSummary(df)
summary.head()
store.storeSummary(summary)
## as we can see, the data is actually stored
pull.pullSummary(main.session_id)
"""
Explanation: 3. Get and store the summary
End of explanation
"""
y = df['survived_int']
X = df[['sex','pclass','embarked','title','age','family']]
X.index = df["passengerid"].tolist()
def preprocess_features(X):
# Initialize new output DataFrame
output = pd.DataFrame(index = X.index)
# Investigate each feature column for the data
for col, col_data in X.iteritems():
# If data type is categorical, convert to dummy variables
if col_data.dtype == object:
col_data = pd.get_dummies(col_data, prefix = col)
# Collect the revised columns
output = output.join(col_data)
return output
X = preprocess_features(X)
random.seed(0)
X_train, X_test, y_train, y_test = train_test_split(X, y, train_size=0.8, random_state=0)
clf = RandomForestClassifier(random_state=0) # just a basic random forest model
clf.fit(X_train, y_train)
## predict on the test set:
probs = clf.predict_proba(X_test)
score=[probs[x][1] for x in range(len(probs)) ]
"""
Explanation: 4. Create features matrix, train and test set, build a model on the training set -> predict
End of explanation
"""
test_results = get.getTestResults(score, y_test)
test_results.head(10)
store.storeTestResults(test_results)
## as we can see, the data is actually stored
pull.pullTestResults(main.session_id).head()
roc = pull.pullROC(main.session_id).sort('population', ascending = 1)
liftchart = pull.pullLiftChart(main.session_id).sort('population', ascending = 1)
fg = plt.figure(figsize=(10,5))
adj = plt.subplots_adjust(hspace=0.4,wspace=0.2)
sp = plt.subplot(1,2,1)
l1 = plt.plot(roc.false_positive_rate, roc.true_positive_rate)
tl = plt.title("ROC curve")
sp = plt.subplot(1,2,2)
l1 = plt.plot(liftchart.population, liftchart.target_population)
tl = plt.title("Liftchart")
plt.show()
pull.pullAccuracy(main.session_id, 0.5, 'population')
pull.pullAccuracy(main.session_id, 0.5, 'probability')
pull.pullConfMatrix(main.session_id, 0.5, 'probability')
"""
Explanation: 5. getTestResults; storeTestResults; pull some statistics
End of explanation
"""
store.storeModelScores(X_test.index, score)
## as we can see, the data is actually stored
pull.pullModelScores(main.session_id).head()
"""
Explanation: 6. Store the scores
End of explanation
"""
main.closeSession()
## as we can see, the end time is filled in
connection = main.engine.connect()
a = connection.execute("select * from model_factory.run_history where session_id='"+main.session_id+"'")
b = a.fetchall()
pd.DataFrame.from_records(b, columns=a.keys())
"""
Explanation: 7. Close the session
End of explanation
"""
|
piskvorky/gensim
|
docs/notebooks/pivoted_document_length_normalisation.ipynb
|
lgpl-2.1
|
#
# Download our dataset
#
import gensim.downloader as api
nws = api.load("20-newsgroups")
#
# Pick texts from relevant newsgroups, split into training and test set.
#
cat1, cat2 = ('sci.electronics', 'sci.space')
#
# X_* contain the actual texts as strings.
# y_* contain labels, 0 for cat1 (sci.electronics) and 1 for cat2 (sci.space)
#
X_train = []
X_test = []
y_train = []
y_test = []
for i in nws:
if i["set"] == "train" and i["topic"] == cat1:
X_train.append(i["data"])
y_train.append(0)
elif i["set"] == "train" and i["topic"] == cat2:
X_train.append(i["data"])
y_train.append(1)
elif i["set"] == "test" and i["topic"] == cat1:
X_test.append(i["data"])
y_test.append(0)
elif i["set"] == "test" and i["topic"] == cat2:
X_test.append(i["data"])
y_test.append(1)
from gensim.parsing.preprocessing import preprocess_string
from gensim.corpora import Dictionary
id2word = Dictionary([preprocess_string(doc) for doc in X_train])
train_corpus = [id2word.doc2bow(preprocess_string(doc)) for doc in X_train]
test_corpus = [id2word.doc2bow(preprocess_string(doc)) for doc in X_test]
print(len(X_train), len(X_test))
# We perform our analysis on the top k documents, which is roughly the top 10% highest-scoring documents
k = len(X_test) // 10
from gensim.sklearn_api.tfidf import TfIdfTransformer
from sklearn.linear_model import LogisticRegression
from gensim.matutils import corpus2csc
# This function returns the model accuracy and individual document scores using
# gensim's TfIdfTransformer and sklearn's LogisticRegression
def get_tfidf_scores(kwargs):
tfidf_transformer = TfIdfTransformer(**kwargs).fit(train_corpus)
X_train_tfidf = corpus2csc(tfidf_transformer.transform(train_corpus), num_terms=len(id2word)).T
X_test_tfidf = corpus2csc(tfidf_transformer.transform(test_corpus), num_terms=len(id2word)).T
clf = LogisticRegression().fit(X_train_tfidf, y_train)
model_accuracy = clf.score(X_test_tfidf, y_test)
doc_scores = clf.decision_function(X_test_tfidf)
return model_accuracy, doc_scores
"""
Explanation: Pivoted Document Length Normalization
Background
In many cases, normalizing the tfidf weights of each document favors terms from documents with shorter length. The pivoted document length normalization scheme counters this bias towards short documents by making tfidf less dependent on the document length.
This is achieved by tilting the normalization curve along the pivot point defined by user with some slope.
Roughly following the equation:
pivoted_norm = (1 - slope) * pivot + slope * old_norm
This scheme is proposed in the paper Pivoted Document Length Normalization by Singhal, Buckley and Mitra.
Overall, this approach can often increase the accuracy of the model when document lengths vary widely across the corpus.
Introduction
This guide demonstrates how to perform pivoted document length normalization.
We will train a logistic regression to distinguish between text from two different newsgroups.
Our results will show that using pivoted document length normalization yields a better model (higher classification accuracy).
End of explanation
"""
params = {}
model_accuracy, doc_scores = get_tfidf_scores(params)
print(model_accuracy)
import numpy as np
# Sort the documents by score and return the sorted scores
# together with the corresponding document lengths.
def sort_length_by_score(doc_scores, X_test):
doc_scores = sorted(enumerate(doc_scores), key=lambda x: x[1])
doc_leng = np.empty(len(doc_scores))
ds = np.empty(len(doc_scores))
for i, _ in enumerate(doc_scores):
doc_leng[i] = len(X_test[_[0]])
ds[i] = _[1]
return ds, doc_leng
print(
"Normal cosine normalisation favors short documents as our top {} "
"docs have a smaller mean doc length of {:.3f} compared to the corpus mean doc length of {:.3f}"
.format(
k, sort_length_by_score(doc_scores, X_test)[1][:k].mean(),
sort_length_by_score(doc_scores, X_test)[1].mean()
)
)
"""
Explanation: Get TFIDF scores for corpus without pivoted document length normalisation
End of explanation
"""
best_model_accuracy = 0
optimum_slope = 0
for slope in np.arange(0, 1.1, 0.1):
params = {"pivot": 10, "slope": slope}
model_accuracy, doc_scores = get_tfidf_scores(params)
if model_accuracy > best_model_accuracy:
best_model_accuracy = model_accuracy
optimum_slope = slope
print("Score for slope {} is {}".format(slope, model_accuracy))
print("We get best score of {} at slope {}".format(best_model_accuracy, optimum_slope))
params = {"pivot": 10, "slope": optimum_slope}
model_accuracy, doc_scores = get_tfidf_scores(params)
print(model_accuracy)
print(
"With pivoted normalisation top {} docs have mean length of {:.3f} "
"which is much closer to the corpus mean doc length of {:.3f}"
.format(
k, sort_length_by_score(doc_scores, X_test)[1][:k].mean(),
sort_length_by_score(doc_scores, X_test)[1].mean()
)
)
"""
Explanation: Get TFIDF scores for corpus with pivoted document length normalisation, testing various values of the slope parameter.
End of explanation
"""
%matplotlib inline
import matplotlib.pyplot as py
best_model_accuracy = 0
optimum_slope = 0
w = 2
h = 2
f, axarr = py.subplots(h, w, figsize=(15, 7))
it = 0
for slope in [1, 0.2]:
params = {"pivot": 10, "slope": slope}
model_accuracy, doc_scores = get_tfidf_scores(params)
if model_accuracy > best_model_accuracy:
best_model_accuracy = model_accuracy
optimum_slope = slope
doc_scores, doc_leng = sort_length_by_score(doc_scores, X_test)
y = abs(doc_scores[:k, np.newaxis])
x = doc_leng[:k, np.newaxis]
py.subplot(1, 2, it+1).bar(x, y, width=20, linewidth=0)
py.title("slope = " + str(slope) + " Model accuracy = " + str(model_accuracy))
py.ylim([0, 4.5])
py.xlim([0, 3200])
py.xlabel("document length")
py.ylabel("confidence score")
it += 1
py.tight_layout()
py.show()
"""
Explanation: Visualizing the pivoted normalization
Since cosine normalization favors retrieval of short documents from the plot we can see that when slope was 1 (when pivoted normalisation was not applied) short documents with length of around 500 had very good score hence the bias for short documents can be seen. As we varied the value of slope from 1 to 0 we introdcued a new bias for long documents to counter the bias caused by cosine normalisation. Therefore at a certain point we got an optimum value of slope which is 0.5 where the overall accuracy of the model is increased.
End of explanation
"""
|
mne-tools/mne-tools.github.io
|
0.19/_downloads/e71fac7e5d7784759a26529dd6e63da5/plot_whitened.ipynb
|
bsd-3-clause
|
import mne
from mne.datasets import sample
"""
Explanation: Plotting whitened data
This tutorial demonstrates how to plot whitened evoked data.
Data are whitened for many processes, including dipole fitting, source
localization and some decoding algorithms. Viewing whitened data thus gives
a different perspective on the data that these algorithms operate on.
Let's start by loading some data and computing a signal (spatial) covariance
that we'll consider to be noise.
End of explanation
"""
data_path = sample.data_path()
raw_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw.fif'
raw = mne.io.read_raw_fif(raw_fname, preload=True)
events = mne.find_events(raw, stim_channel='STI 014')
event_id = {'auditory/left': 1, 'auditory/right': 2, 'visual/left': 3,
'visual/right': 4, 'smiley': 5, 'button': 32}
reject = dict(grad=4000e-13, mag=4e-12, eog=150e-6)
epochs = mne.Epochs(raw, events, event_id=event_id, reject=reject)
# baseline noise cov, not a lot of samples
noise_cov = mne.compute_covariance(epochs, tmax=0., method='shrunk', rank=None,
verbose='error')
# butterfly mode shows the differences most clearly
raw.plot(events=events, butterfly=True)
raw.plot(noise_cov=noise_cov, events=events, butterfly=True)
"""
Explanation: Raw data with whitening
<div class="alert alert-info"><h4>Note</h4><p>In the :meth:`mne.io.Raw.plot` with ``noise_cov`` supplied,
you can press the "w" key to turn whitening on and off.</p></div>
End of explanation
"""
epochs.plot()
epochs.plot(noise_cov=noise_cov)
"""
Explanation: Epochs with whitening
End of explanation
"""
evoked = epochs.average()
evoked.plot(time_unit='s')
evoked.plot(noise_cov=noise_cov, time_unit='s')
"""
Explanation: Evoked data with whitening
End of explanation
"""
evoked.plot_white(noise_cov=noise_cov, time_unit='s')
"""
Explanation: Evoked data with scaled whitening
The :meth:mne.Evoked.plot_white function takes an additional step of
scaling the whitened plots to show how well the assumption of Gaussian
noise is satisfied by the data:
End of explanation
"""
evoked.comment = 'All trials'
evoked.plot_topo(title='Evoked data')
evoked.plot_topo(noise_cov=noise_cov, title='Whitened evoked data')
"""
Explanation: Topographic plot with whitening
End of explanation
"""
|
ES-DOC/esdoc-jupyterhub
|
notebooks/nasa-giss/cmip6/models/sandbox-2/atmoschem.ipynb
|
gpl-3.0
|
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'nasa-giss', 'sandbox-2', 'atmoschem')
"""
Explanation: ES-DOC CMIP6 Model Properties - Atmoschem
MIP Era: CMIP6
Institute: NASA-GISS
Source ID: SANDBOX-2
Topic: Atmoschem
Sub-Topics: Transport, Emissions Concentrations, Gas Phase Chemistry, Stratospheric Heterogeneous Chemistry, Tropospheric Heterogeneous Chemistry, Photo Chemistry.
Properties: 84 (39 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:21
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
"""
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Authors
Set document authors
End of explanation
"""
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Contributors
Specify document contributors
End of explanation
"""
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
"""
Explanation: Document Publication
Specify document publication status
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Key Properties --> Timestep Framework
4. Key Properties --> Timestep Framework --> Split Operator Order
5. Key Properties --> Tuning Applied
6. Grid
7. Grid --> Resolution
8. Transport
9. Emissions Concentrations
10. Emissions Concentrations --> Surface Emissions
11. Emissions Concentrations --> Atmospheric Emissions
12. Emissions Concentrations --> Concentrations
13. Gas Phase Chemistry
14. Stratospheric Heterogeneous Chemistry
15. Tropospheric Heterogeneous Chemistry
16. Photo Chemistry
17. Photo Chemistry --> Photolysis
1. Key Properties
Key properties of the atmospheric chemistry
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of atmospheric chemistry model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of atmospheric chemistry model code.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.chemistry_scheme_scope')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "troposhere"
# "stratosphere"
# "mesosphere"
# "mesosphere"
# "whole atmosphere"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.3. Chemistry Scheme Scope
Is Required: TRUE Type: ENUM Cardinality: 1.N
Atmospheric domains covered by the atmospheric chemistry model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.basic_approximations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.4. Basic Approximations
Is Required: TRUE Type: STRING Cardinality: 1.1
Basic approximations made in the atmospheric chemistry model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.prognostic_variables_form')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "3D mass/mixing ratio for gas"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.5. Prognostic Variables Form
Is Required: TRUE Type: ENUM Cardinality: 1.N
Form of prognostic variables in the atmospheric chemistry component.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.number_of_tracers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 1.6. Number Of Tracers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of advected tracers in the atmospheric chemistry model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.family_approach')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 1.7. Family Approach
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Atmospheric chemistry calculations (not advection) generalized into families of species?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.coupling_with_chemical_reactivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 1.8. Coupling With Chemical Reactivity
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is atmospheric chemistry transport scheme turbulence coupled with chemical reactivity?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2. Key Properties --> Software Properties
Software properties of aerosol code
2.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Operator splitting"
# "Integrated"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 3. Key Properties --> Timestep Framework
Timestepping in the atmospheric chemistry model
3.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Mathematical method deployed to solve the evolution of a given variable
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_advection_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 3.2. Split Operator Advection Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for chemical species advection (in seconds)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_physical_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 3.3. Split Operator Physical Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for physics (in seconds).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_chemistry_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 3.4. Split Operator Chemistry Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for chemistry (in seconds).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_alternate_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 3.5. Split Operator Alternate Order
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.integrated_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 3.6. Integrated Timestep
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Timestep for the atmospheric chemistry model (in seconds)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.integrated_scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Implicit"
# "Semi-implicit"
# "Semi-analytic"
# "Impact solver"
# "Back Euler"
# "Newton Raphson"
# "Rosenbrock"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 3.7. Integrated Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify the type of timestep scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.turbulence')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 4. Key Properties --> Timestep Framework --> Split Operator Order
**
4.1. Turbulence
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for turbulence scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.convection')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 4.2. Convection
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for convection scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.precipitation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 4.3. Precipitation
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for precipitation scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.emissions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 4.4. Emissions
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for emissions scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 4.5. Deposition
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for deposition scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.gas_phase_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 4.6. Gas Phase Chemistry
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for gas phase chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.tropospheric_heterogeneous_phase_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 4.7. Tropospheric Heterogeneous Phase Chemistry
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for tropospheric heterogeneous phase chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.stratospheric_heterogeneous_phase_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 4.8. Stratospheric Heterogeneous Phase Chemistry
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for stratospheric heterogeneous phase chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.photo_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 4.9. Photo Chemistry
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for photo chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.aerosols')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 4.10. Aerosols
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for aerosols scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5. Key Properties --> Tuning Applied
Tuning methodology for atmospheric chemistry component
5.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process-oriented metrics, and the possible conflicts with parameterization-level tuning. In particular, describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5.2. Global Mean Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List set of metrics of the global mean state used in tuning model/component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5.3. Regional Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List of regional metrics of mean state used in tuning model/component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5.4. Trend Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List observed trend metrics used in tuning model/component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6. Grid
Atmospheric chemistry grid
6.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general structure of the atmospheric chemistry grid
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.matches_atmosphere_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 6.2. Matches Atmosphere Grid
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
*Does the atmospheric chemistry grid match the atmosphere grid?*
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7. Grid --> Resolution
Resolution in the atmospheric chemistry grid
7.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7.2. Canonical Horizontal Resolution
Is Required: FALSE Type: STRING Cardinality: 0.1
Expression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 7.3. Number Of Horizontal Gridpoints
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Total number of horizontal (XY) points (or degrees of freedom) on computational grid.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 7.4. Number Of Vertical Levels
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Number of vertical levels resolved on computational grid.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 7.5. Is Adaptive Grid
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Default is False. Set true if grid resolution changes during execution.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.transport.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8. Transport
Atmospheric chemistry transport
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview of transport implementation
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.transport.use_atmospheric_transport')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 8.2. Use Atmospheric Transport
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is transport handled by the atmosphere, rather than within atmospheric chemistry?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.transport.transport_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.3. Transport Details
Is Required: FALSE Type: STRING Cardinality: 0.1
If transport is handled within the atmospheric chemistry scheme, describe it.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9. Emissions Concentrations
Atmospheric chemistry emissions
9.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview atmospheric chemistry emissions
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.sources')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Vegetation"
# "Soil"
# "Sea surface"
# "Anthropogenic"
# "Biomass burning"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 10. Emissions Concentrations --> Surface Emissions
**
10.1. Sources
Is Required: FALSE Type: ENUM Cardinality: 0.N
Sources of the chemical species emitted at the surface that are taken into account in the emissions scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Climatology"
# "Spatially uniform mixing ratio"
# "Spatially uniform concentration"
# "Interactive"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 10.2. Method
Is Required: FALSE Type: ENUM Cardinality: 0.N
Methods used to define chemical species emitted directly into model layers above the surface (several methods allowed because the different species may not use the same method).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.prescribed_climatology_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 10.3. Prescribed Climatology Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted at the surface and prescribed via a climatology, and the nature of the climatology (E.g. CO (monthly), C2H6 (constant))
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.prescribed_spatially_uniform_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 10.4. Prescribed Spatially Uniform Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted at the surface and prescribed as spatially uniform
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.interactive_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 10.5. Interactive Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted at the surface and specified via an interactive method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.other_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 10.6. Other Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted at the surface and specified via any other method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.sources')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Aircraft"
# "Biomass burning"
# "Lightning"
# "Volcanos"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 11. Emissions Concentrations --> Atmospheric Emissions
TO DO
11.1. Sources
Is Required: FALSE Type: ENUM Cardinality: 0.N
Sources of chemical species emitted in the atmosphere that are taken into account in the emissions scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Climatology"
# "Spatially uniform mixing ratio"
# "Spatially uniform concentration"
# "Interactive"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 11.2. Method
Is Required: FALSE Type: ENUM Cardinality: 0.N
Methods used to define the chemical species emitted in the atmosphere (several methods allowed because the different species may not use the same method).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.prescribed_climatology_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11.3. Prescribed Climatology Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted in the atmosphere and prescribed via a climatology (E.g. CO (monthly), C2H6 (constant))
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.prescribed_spatially_uniform_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11.4. Prescribed Spatially Uniform Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted in the atmosphere and prescribed as spatially uniform
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.interactive_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11.5. Interactive Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted in the atmosphere and specified via an interactive method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.other_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11.6. Other Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted in the atmosphere and specified via an "other method"
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.concentrations.prescribed_lower_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 12. Emissions Concentrations --> Concentrations
TO DO
12.1. Prescribed Lower Boundary
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed at the lower boundary.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.concentrations.prescribed_upper_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 12.2. Prescribed Upper Boundary
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed at the upper boundary.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 13. Gas Phase Chemistry
Atmospheric chemistry gas phase chemistry
13.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview gas phase atmospheric chemistry
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "HOx"
# "NOy"
# "Ox"
# "Cly"
# "HSOx"
# "Bry"
# "VOCs"
# "isoprene"
# "H2O"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.2. Species
Is Required: FALSE Type: ENUM Cardinality: 0.N
Species included in the gas phase chemistry scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_bimolecular_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 13.3. Number Of Bimolecular Reactions
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of bi-molecular reactions in the gas phase chemistry scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_termolecular_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 13.4. Number Of Termolecular Reactions
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of ter-molecular reactions in the gas phase chemistry scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_tropospheric_heterogenous_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 13.5. Number Of Tropospheric Heterogenous Reactions
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of reactions in the tropospheric heterogeneous chemistry scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_stratospheric_heterogenous_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 13.6. Number Of Stratospheric Heterogenous Reactions
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of reactions in the stratospheric heterogeneous chemistry scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_advected_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 13.7. Number Of Advected Species
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of advected species in the gas phase chemistry scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_steady_state_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 13.8. Number Of Steady State Species
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of gas phase species for which the concentration is updated in the chemical solver assuming photochemical steady state
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.interactive_dry_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 13.9. Interactive Dry Deposition
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is dry deposition interactive (as opposed to prescribed)? Dry deposition describes the dry processes by which gaseous species deposit themselves on solid surfaces thus decreasing their concentration in the air.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.wet_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 13.10. Wet Deposition
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is wet deposition included? Wet deposition describes the moist processes by which gaseous species deposit themselves on solid surfaces thus decreasing their concentration in the air.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.wet_oxidation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 13.11. Wet Oxidation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is wet oxidation included? Oxidation describes the loss of electrons or an increase in oxidation state by a molecule
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 14. Stratospheric Heterogeneous Chemistry
Atmospheric chemistry stratospheric heterogeneous chemistry
14.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview stratospheric heterogenous atmospheric chemistry
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.gas_phase_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Cly"
# "Bry"
# "NOy"
# TODO - please enter value(s)
"""
Explanation: 14.2. Gas Phase Species
Is Required: FALSE Type: ENUM Cardinality: 0.N
Gas phase species included in the stratospheric heterogeneous chemistry scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.aerosol_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sulphate"
# "Polar stratospheric ice"
# "NAT (Nitric acid trihydrate)"
# "NAD (Nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particule))"
# TODO - please enter value(s)
"""
Explanation: 14.3. Aerosol Species
Is Required: FALSE Type: ENUM Cardinality: 0.N
Aerosol species included in the stratospheric heterogeneous chemistry scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.number_of_steady_state_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 14.4. Number Of Steady State Species
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of steady state species in the stratospheric heterogeneous chemistry scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.sedimentation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 14.5. Sedimentation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is sedimentation included in the stratospheric heterogeneous chemistry scheme or not?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.coagulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 14.6. Coagulation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is coagulation included in the stratospheric heterogeneous chemistry scheme or not?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15. Tropospheric Heterogeneous Chemistry
Atmospheric chemistry tropospheric heterogeneous chemistry
15.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview tropospheric heterogenous atmospheric chemistry
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.gas_phase_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15.2. Gas Phase Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of gas phase species included in the tropospheric heterogeneous chemistry scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.aerosol_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sulphate"
# "Nitrate"
# "Sea salt"
# "Dust"
# "Ice"
# "Organic"
# "Black carbon/soot"
# "Polar stratospheric ice"
# "Secondary organic aerosols"
# "Particulate organic matter"
# TODO - please enter value(s)
"""
Explanation: 15.3. Aerosol Species
Is Required: FALSE Type: ENUM Cardinality: 0.N
Aerosol species included in the tropospheric heterogeneous chemistry scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.number_of_steady_state_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 15.4. Number Of Steady State Species
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of steady state species in the tropospheric heterogeneous chemistry scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.interactive_dry_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 15.5. Interactive Dry Deposition
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is dry deposition interactive (as opposed to prescribed)? Dry deposition describes the dry processes by which gaseous species deposit themselves on solid surfaces thus decreasing their concentration in the air.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.coagulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 15.6. Coagulation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is coagulation included in the tropospheric heterogeneous chemistry scheme or not?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 16. Photo Chemistry
Atmospheric chemistry photo chemistry
16.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview atmospheric photo chemistry
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.number_of_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 16.2. Number Of Reactions
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of reactions in the photo-chemistry scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.photolysis.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Offline (clear sky)"
# "Offline (with clouds)"
# "Online"
# TODO - please enter value(s)
"""
Explanation: 17. Photo Chemistry --> Photolysis
Photolysis scheme
17.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Photolysis scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.photolysis.environmental_conditions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.2. Environmental Conditions
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe any environmental conditions taken into account by the photolysis scheme (e.g. whether pressure- and temperature-sensitive cross-sections and quantum yields in the photolysis calculations are modified to reflect the modelled conditions.)
End of explanation
"""
|
wei-Z/Python-Machine-Learning
|
code/ch13/ch13.ipynb
|
mit
|
%load_ext watermark
%watermark -a 'Sebastian Raschka' -u -d -v -p numpy,matplotlib,theano,keras
# to install watermark just uncomment the following line:
#%install_ext https://raw.githubusercontent.com/rasbt/watermark/master/watermark.py
"""
Explanation: Sebastian Raschka, 2015
https://github.com/rasbt/python-machine-learning-book
Python Machine Learning - Code Examples
Chapter 13 - Parallelizing Neural Network Training with Theano
Note that the optional watermark extension is a small IPython notebook plugin that I developed to make the code reproducible. You can just skip the following line(s).
End of explanation
"""
from IPython.display import Image
"""
Explanation: Overview
Building, compiling, and running expressions with Theano
What is Theano?
First steps with Theano
Configuring Theano
Working with array structures
Wrapping things up – a linear regression example
Choosing activation functions for feedforward neural networks
Logistic function recap
Estimating probabilities in multi-class classification via the softmax function
Broadening the output spectrum by using a hyperbolic tangent
Training neural networks efficiently using Keras
Summary
<br>
<br>
End of explanation
"""
Image(filename='./images/13_01.png', width=500)
"""
Explanation: Building, compiling, and running expressions with Theano
Depending on your system setup, it is typically sufficient to install Theano via
pip install Theano
For more help with the installation, please see: http://deeplearning.net/software/theano/install.html
End of explanation
"""
import theano
from theano import tensor as T
# initialize
x1 = T.scalar()
w1 = T.scalar()
w0 = T.scalar()
z1 = w1 * x1 + w0
# compile
net_input = theano.function(inputs=[w1, x1, w0], outputs=z1)
# execute
net_input(2.0, 1.0, 0.5)
"""
Explanation: <br>
<br>
What is Theano?
...
First steps with Theano
Introducing the TensorType variables. For a complete list, see http://deeplearning.net/software/theano/library/tensor/basic.html#all-fully-typed-constructors
End of explanation
"""
print(theano.config.floatX)
theano.config.floatX = 'float32'
"""
Explanation: <br>
<br>
Configuring Theano
Configuring Theano. For more options, see
- http://deeplearning.net/software/theano/library/config.html
- http://deeplearning.net/software/theano/library/floatX.html
End of explanation
"""
print(theano.config.device)
"""
Explanation: To change the float type globally, execute
export THEANO_FLAGS=floatX=float32
in your bash shell. Or execute Python script as
THEANO_FLAGS=floatX=float32 python your_script.py
Running Theano on GPU(s). For prerequisites, please see: http://deeplearning.net/software/theano/tutorial/using_gpu.html
Note that float32 is recommended for GPUs; float64 on GPUs is currently still relatively slow.
End of explanation
"""
import numpy as np
# initialize
# if you are running Theano in 64-bit mode,
# you need to use dmatrix instead of fmatrix
x = T.fmatrix(name='x')
x_sum = T.sum(x, axis=0)
# compile
calc_sum = theano.function(inputs=[x], outputs=x_sum)
# execute (Python list)
ary = [[1, 2, 3], [1, 2, 3]]
print('Column sum:', calc_sum(ary))
# execute (NumPy array)
ary = np.array([[1, 2, 3], [1, 2, 3]], dtype=theano.config.floatX)
print('Column sum:', calc_sum(ary))
"""
Explanation: You can run a Python script on CPU via:
THEANO_FLAGS=device=cpu,floatX=float64 python your_script.py
or GPU via
THEANO_FLAGS=device=gpu,floatX=float32 python your_script.py
It may also be convenient to create a .theanorc file in your home directory to make those configurations permanent. For example, to always use float32, execute
echo -e "\n[global]\nfloatX=float32\n" >> ~/.theanorc
Or, create a .theanorc file manually with the following contents
[global]
floatX = float32
device = gpu
<br>
<br>
Working with array structures
End of explanation
"""
# initialize
x = T.fmatrix(name='x')
w = theano.shared(np.asarray([[0.0, 0.0, 0.0]],
dtype=theano.config.floatX))
z = x.dot(w.T)
update = [[w, w + 1.0]]
# compile
net_input = theano.function(inputs=[x],
updates=update,
outputs=z)
# execute
data = np.array([[1, 2, 3]], dtype=theano.config.floatX)
for i in range(5):
print('z%d:' % i, net_input(data))
"""
Explanation: Updating shared arrays.
More info about memory management in Theano can be found here: http://deeplearning.net/software/theano/tutorial/aliasing.html
End of explanation
"""
# initialize
data = np.array([[1, 2, 3]],
dtype=theano.config.floatX)
x = T.fmatrix(name='x')
w = theano.shared(np.asarray([[0.0, 0.0, 0.0]],
dtype=theano.config.floatX))
z = x.dot(w.T)
update = [[w, w + 1.0]]
# compile
net_input = theano.function(inputs=[],
updates=update,
givens={x: data},
outputs=z)
# execute
for i in range(5):
print('z:', net_input())
"""
Explanation: We can use the givens variable to insert values into the graph before compiling it. Using this approach we can reduce the number of transfers from RAM (via CPUs) to GPUs and thus speed up learning with shared variables. If we use inputs, a dataset is transferred from the CPU to the GPU multiple times, for example, if we iterate over the dataset multiple times (epochs) during gradient descent. Via givens, we can keep the dataset on the GPU if it fits (e.g., a mini-batch).
End of explanation
"""
import numpy as np
X_train = np.asarray([[0.0], [1.0], [2.0], [3.0], [4.0],
[5.0], [6.0], [7.0], [8.0], [9.0]],
dtype=theano.config.floatX)
y_train = np.asarray([1.0, 1.3, 3.1, 2.0, 5.0,
6.3, 6.6, 7.4, 8.0, 9.0],
dtype=theano.config.floatX)
"""
Explanation: <br>
<br>
Wrapping things up: A linear regression example
Creating some training data.
End of explanation
"""
import theano
from theano import tensor as T
import numpy as np
def train_linreg(X_train, y_train, eta, epochs):
costs = []
# Initialize arrays
eta0 = T.fscalar('eta0')
y = T.fvector(name='y')
X = T.fmatrix(name='X')
w = theano.shared(np.zeros(
shape=(X_train.shape[1] + 1),
dtype=theano.config.floatX),
name='w')
# calculate cost
net_input = T.dot(X, w[1:]) + w[0]
errors = y - net_input
cost = T.sum(T.pow(errors, 2))
# perform gradient update
gradient = T.grad(cost, wrt=w)
update = [(w, w - eta0 * gradient)]
# compile model
train = theano.function(inputs=[eta0],
outputs=cost,
updates=update,
givens={X: X_train,
y: y_train,})
for _ in range(epochs):
costs.append(train(eta))
return costs, w
"""
Explanation: Implementing the training function.
End of explanation
"""
%matplotlib inline
import matplotlib.pyplot as plt
costs, w = train_linreg(X_train, y_train, eta=0.001, epochs=10)
plt.plot(range(1, len(costs)+1), costs)
plt.tight_layout()
plt.xlabel('Epoch')
plt.ylabel('Cost')
plt.tight_layout()
# plt.savefig('./figures/cost_convergence.png', dpi=300)
plt.show()
"""
Explanation: Plotting the sum of squared errors cost vs epochs.
End of explanation
"""
def predict_linreg(X, w):
Xt = T.matrix(name='X')
net_input = T.dot(Xt, w[1:]) + w[0]
predict = theano.function(inputs=[Xt], givens={w: w}, outputs=net_input)
return predict(X)
plt.scatter(X_train, y_train, marker='s', s=50)
plt.plot(range(X_train.shape[0]),
predict_linreg(X_train, w),
color='gray',
marker='o',
markersize=4,
linewidth=3)
plt.xlabel('x')
plt.ylabel('y')
plt.tight_layout()
# plt.savefig('./figures/linreg.png', dpi=300)
plt.show()
"""
Explanation: Making predictions.
End of explanation
"""
# note that first element (X[0] = 1) to denote bias unit
X = np.array([[1, 1.4, 1.5]])
w = np.array([0.0, 0.2, 0.4])
def net_input(X, w):
z = X.dot(w)
return z
def logistic(z):
return 1.0 / (1.0 + np.exp(-z))
def logistic_activation(X, w):
z = net_input(X, w)
return logistic(z)
print('P(y=1|x) = %.3f' % logistic_activation(X, w)[0])
"""
Explanation: <br>
<br>
Choosing activation functions for feedforward neural networks
...
Logistic function recap
The logistic function, often just called the "sigmoid function," is in fact a special case of a sigmoid function.
Net input $z$:
$$z = w_1x_{1} + \dots + w_mx_{m} = \sum_{j=1}^{m} x_{j}w_{j} = \mathbf{w}^T\mathbf{x}$$
Logistic activation function:
$$\phi_{logistic}(z) = \frac{1}{1 + e^{-z}}$$
Output range: (0, 1)
End of explanation
"""
# W : array, shape = [n_output_units, n_hidden_units+1]
# Weight matrix for hidden layer -> output layer.
# note that first column (A[:][0] = 1) are the bias units
W = np.array([[1.1, 1.2, 1.3, 0.5],
[0.1, 0.2, 0.4, 0.1],
[0.2, 0.5, 2.1, 1.9]])
# A : array, shape = [n_hidden+1, n_samples]
# Activation of hidden layer.
# note that first element (A[0][0] = 1) is for the bias units
A = np.array([[1.0],
[0.1],
[0.3],
[0.7]])
# Z : array, shape = [n_output_units, n_samples]
# Net input of output layer.
Z = W.dot(A)
y_probas = logistic(Z)
print('Probabilities:\n', y_probas)
y_class = np.argmax(Z, axis=0)
print('predicted class label: %d' % y_class[0])
"""
Explanation: Now, imagine an MLP with 3 hidden units + 1 bias unit in the hidden layer. The output layer consists of 3 output units.
End of explanation
"""
def softmax(z):
return np.exp(z) / np.sum(np.exp(z))
def softmax_activation(X, w):
z = net_input(X, w)
return softmax(z)
y_probas = softmax(Z)
print('Probabilities:\n', y_probas)
y_probas.sum()
y_class = np.argmax(Z, axis=0)
y_class
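# Aside (not part of the original text): a numerically safer softmax variant.
# Subtracting the maximum before exponentiating avoids overflow for large net
# inputs; the result is unchanged because the constant cancels in the ratio.
def softmax_stable(z):
    z_shifted = z - np.max(z)
    return np.exp(z_shifted) / np.sum(np.exp(z_shifted))

np.allclose(softmax_stable(Z), softmax(Z))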
"""
Explanation: <br>
<br>
Estimating probabilities in multi-class classification via the softmax function
The softmax function is a generalization of the logistic function and allows us to compute meaningful class probabilities in multi-class settings (multinomial logistic regression).
$$P(y=j|z) =\phi_{softmax}(z) = \frac{e^{z_j}}{\sum_{k=1}^K e^{z_k}}$$
Here, the input to the function is the result of K distinct linear functions, and the expression above gives the predicted probability for the j-th class given a sample vector x.
Output range: (0, 1)
End of explanation
"""
def tanh(z):
e_p = np.exp(z)
e_m = np.exp(-z)
return (e_p - e_m) / (e_p + e_m)
import matplotlib.pyplot as plt
%matplotlib inline
z = np.arange(-5, 5, 0.005)
log_act = logistic(z)
tanh_act = tanh(z)
# alternatives:
# from scipy.special import expit
# log_act = expit(z)
# tanh_act = np.tanh(z)
plt.ylim([-1.5, 1.5])
plt.xlabel('net input $z$')
plt.ylabel('activation $\phi(z)$')
plt.axhline(1, color='black', linestyle='--')
plt.axhline(0.5, color='black', linestyle='--')
plt.axhline(0, color='black', linestyle='--')
plt.axhline(-1, color='black', linestyle='--')
plt.plot(z, tanh_act,
linewidth=2,
color='black',
label='tanh')
plt.plot(z, log_act,
linewidth=2,
color='lightgreen',
label='logistic')
plt.legend(loc='lower right')
plt.tight_layout()
# plt.savefig('./figures/activation.png', dpi=300)
plt.show()
Image(filename='./images/13_05.png', width=700)
"""
Explanation: <br>
<br>
Broadening the output spectrum using a hyperbolic tangent
The hyperbolic tangent is another special case of a sigmoid function; it can be interpreted as a rescaled version of the logistic function.
$$\phi_{tanh}(z) = \frac{e^{z}-e^{-z}}{e^{z}+e^{-z}}$$
Output range: (-1, 1)
End of explanation
"""
import os
import struct
import numpy as np
def load_mnist(path, kind='train'):
"""Load MNIST data from `path`"""
labels_path = os.path.join(path,
'%s-labels-idx1-ubyte'
% kind)
images_path = os.path.join(path,
'%s-images-idx3-ubyte'
% kind)
with open(labels_path, 'rb') as lbpath:
magic, n = struct.unpack('>II',
lbpath.read(8))
labels = np.fromfile(lbpath,
dtype=np.uint8)
with open(images_path, 'rb') as imgpath:
magic, num, rows, cols = struct.unpack(">IIII",
imgpath.read(16))
images = np.fromfile(imgpath,
dtype=np.uint8).reshape(len(labels), 784)
return images, labels
X_train, y_train = load_mnist('mnist', kind='train')
print('Rows: %d, columns: %d' % (X_train.shape[0], X_train.shape[1]))
X_test, y_test = load_mnist('mnist', kind='t10k')
print('Rows: %d, columns: %d' % (X_test.shape[0], X_test.shape[1]))
"""
Explanation: <br>
<br>
Training neural networks efficiently using Keras
Loading MNIST
1) Download the 4 MNIST datasets from http://yann.lecun.com/exdb/mnist/
train-images-idx3-ubyte.gz: training set images (9912422 bytes)
train-labels-idx1-ubyte.gz: training set labels (28881 bytes)
t10k-images-idx3-ubyte.gz: test set images (1648877 bytes)
t10k-labels-idx1-ubyte.gz: test set labels (4542 bytes)
2) Unzip those files
3) Copy the unzipped files to a directory ./mnist
End of explanation
"""
import theano
theano.config.floatX = 'float32'
X_train = X_train.astype(theano.config.floatX)
X_test = X_test.astype(theano.config.floatX)
"""
Explanation: Multi-layer Perceptron in Keras
Once you have Theano installed, Keras can be installed via
pip install Keras
In order to run the following code via GPU, you can execute the Python script that was placed in this directory via
THEANO_FLAGS=mode=FAST_RUN,device=gpu,floatX=float32 python mnist_keras_mlp.py
End of explanation
"""
from keras.utils import np_utils
print('First 3 labels: ', y_train[:3])
y_train_ohe = np_utils.to_categorical(y_train)
print('\nFirst 3 labels (one-hot):\n', y_train_ohe[:3])
from keras.models import Sequential
from keras.layers.core import Dense
from keras.optimizers import SGD
np.random.seed(1)
model = Sequential()
model.add(Dense(input_dim=X_train.shape[1],
output_dim=50,
init='uniform',
activation='tanh'))
model.add(Dense(input_dim=50,
output_dim=50,
init='uniform',
activation='tanh'))
model.add(Dense(input_dim=50,
output_dim=y_train_ohe.shape[1],
init='uniform',
activation='softmax'))
sgd = SGD(lr=0.001, decay=1e-7, momentum=.9)
model.compile(loss='categorical_crossentropy', optimizer=sgd)
model.fit(X_train, y_train_ohe,
nb_epoch=50,
batch_size=300,
verbose=1,
validation_split=0.1,
show_accuracy=True)
y_train_pred = model.predict_classes(X_train, verbose=0)
print('First 3 predictions: ', y_train_pred[:3])
train_acc = np.sum(y_train == y_train_pred, axis=0) / X_train.shape[0]
print('Training accuracy: %.2f%%' % (train_acc * 100))
y_test_pred = model.predict_classes(X_test, verbose=0)
test_acc = np.sum(y_test == y_test_pred, axis=0) / X_test.shape[0]
print('Test accuracy: %.2f%%' % (test_acc * 100))
"""
Explanation: One-hot encoding of the class variable:
End of explanation
"""
|
chapagain/kaggle-competitions-solution
|
Sentiment Analysis on Movie Reviews/Sentiment-Analysis-on-Movie-Reviews-Logistic-Regression.ipynb
|
mit
|
import nltk
import pandas as pd
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer
from sklearn.pipeline import Pipeline
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
"""
Explanation: Sentiment Analysis on Movie Reviews
Using Logistic Regression Model
0 - negative
1 - somewhat negative
2 - neutral
3 - somewhat positive
4 - positive
Load Libraries
End of explanation
"""
train = pd.read_csv('train.tsv', delimiter='\t')
test = pd.read_csv('test.tsv', delimiter='\t')
train.shape, test.shape
train.head()
test.head()
# unique sentiment labels
train.Sentiment.unique()
train.info()
train.Sentiment.value_counts()
train.Sentiment.value_counts() / train.Sentiment.count()
"""
Explanation: Load & Read Datasets
End of explanation
"""
X_train = train['Phrase']
y_train = train['Sentiment']
text_clf = Pipeline([
('vect', CountVectorizer()),
('tfidf', TfidfTransformer()),
('clf', LogisticRegression())
])
text_clf = text_clf.fit(X_train, y_train)
# evaluate on the training data itself (this is training accuracy, not generalization)
predicted = text_clf.predict(X_train)
print(np.mean(predicted == y_train))
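# Aside (not part of the original notebook): the accuracy above is measured on
# the same data the pipeline was fit on. A held-out split gives a less
# optimistic estimate of how the model generalizes.
from sklearn.model_selection import train_test_split

X_tr, X_val, y_tr, y_val = train_test_split(X_train, y_train, test_size=0.2, random_state=0)
val_clf = Pipeline([
    ('vect', CountVectorizer()),
    ('tfidf', TfidfTransformer()),
    ('clf', LogisticRegression())
])
val_clf = val_clf.fit(X_tr, y_tr)
print('Validation accuracy:', np.mean(val_clf.predict(X_val) == y_val))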
test.info()
"""
Explanation: Train Classifier
End of explanation
"""
X_test = test['Phrase']
phraseIds = test['PhraseId']
predicted = text_clf.predict(X_test)
output = pd.DataFrame( data={"PhraseId":phraseIds, "Sentiment":predicted} )
#output.to_csv( "submission_logistic_regression.csv", index=False, quoting=3 )
"""
Explanation: Create Submission
End of explanation
"""
|
mne-tools/mne-tools.github.io
|
0.15/_downloads/plot_read_and_write_raw_data.ipynb
|
bsd-3-clause
|
# Author: Alexandre Gramfort <alexandre.gramfort@telecom-paristech.fr>
#
# License: BSD (3-clause)
import mne
from mne.datasets import sample
print(__doc__)
data_path = sample.data_path()
fname = data_path + '/MEG/sample/sample_audvis_raw.fif'
raw = mne.io.read_raw_fif(fname)
# Set up pick list: MEG + STI 014 - bad channels
want_meg = True
want_eeg = False
want_stim = False
include = ['STI 014']
raw.info['bads'] += ['MEG 2443', 'EEG 053'] # bad channels + 2 more
picks = mne.pick_types(raw.info, meg=want_meg, eeg=want_eeg, stim=want_stim,
include=include, exclude='bads')
some_picks = picks[:5] # take 5 first
start, stop = raw.time_as_index([0, 15]) # read the first 15s of data
data, times = raw[some_picks, start:(stop + 1)]
# save 150s of MEG data in FIF file
raw.save('sample_audvis_meg_raw.fif', tmin=0, tmax=150, picks=picks,
overwrite=True)
"""
Explanation: Reading and writing raw files
In this example, we read a raw file, plot a segment of MEG data restricted to MEG channels, and save these data in a new raw file.
End of explanation
"""
raw.plot()
"""
Explanation: Show MEG data
End of explanation
"""
|
jdhp-docs/python_notebooks
|
nb_sci_maths/maths_stats_chi_squared_min_fr.ipynb
|
mit
|
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline

n = 100
p = 0.25
data = np.random.binomial(n=n, p=p, size=100000)
plt.hist(data,
bins=np.linspace(data.min(), data.max(), data.max() - data.min() + 1));
"""
Explanation: Minimisation du $\chi^2$
Chi-squared test
To see:
- http://hamelg.blogspot.fr/2015/11/python-for-data-analysis-part-25-chi.html
- https://docs.scipy.org/doc/numpy/reference/generated/numpy.random.chisquare.html
- https://docs.scipy.org/doc/scipy-0.19.0/reference/generated/scipy.stats.chisquare.html
- https://docs.scipy.org/doc/scipy-0.19.0/reference/generated/scipy.stats.chi2.html
https://stats.stackexchange.com/questions/202617/implementing-chi-square-in-python-and-testing-on-scipys-poisson-and-norm-variat
https://python4mpia.github.io/fitting_data/least-squares-fitting.html
http://www.xavierdupre.fr/app/ensae_teaching_cs/helpsphinx2/notebooks/tableau_contingence.html
http://astropython.blogspot.fr/2012/02/computing-chi-squared-and-reduced-chi.html
http://connor-johnson.com/2014/12/31/the-pearson-chi-squared-test-with-python-and-r/
http://machinelearningmastery.com/feature-selection-machine-learning-python/
http://glowingpython.blogspot.fr/2014/02/terms-selection-with-chi-square.html
https://stackoverflow.com/questions/22177576/python-minimizing-chi-squared
https://stackoverflow.com/questions/39486270/scipy-optimize-minimize-chi-squared-python
https://stats.stackexchange.com/questions/139108/optimizing-parameter-estimates-by-minimizing-chi2-in-iterative-procedure
https://lmfit.github.io/lmfit-py/fitting.html
https://www.youtube.com/watch?v=Awv_DqwEIxY
TODO:
- examples on how to minimize chi-squared to find distribution parameters
Problem statement:
Suppose we have a sample and that the probability distribution $L_{v}$ that generated this sample is unknown.
How can we recover this distribution and its parameters from the sample at hand?
From the sample, we can compute an empirical probability distribution $L_e$ ("e" for empirical) with cumulative distribution function $F_e$.
From this empirical distribution (and possibly other information about $L_v$), we pick, among the usual distributions, the one that resembles it most.
Suppose we pick a certain distribution $L_h$ ("h" for hypothesis) with cumulative distribution function $F_h$.
We can validate this choice if the distance $d(F_h, F_e)$ between the distribution functions $F_h$ and $F_e$ is small, i.e. if $d(F_h, F_e) \lt C$.
We could nevertheless be wrong by rejecting our choice even though it is actually correct. This error occurs with a probability $P_{err} = P \left\{ d(F_h, F_e) \gt C \right\}$.
If we want the risk of error to be small, we can fix $P_{err}$ (for example $P_{err} \lt 0.1$ (TODO: "=") for a risk of less than (TODO: "equal to") 1%).
The choice of $P_{err}$ lets us determine the value of the constant $C$ to use ($C$ is called the acceptance threshold or critical threshold).
TODO...
We thus perform a goodness-of-fit test between a given theoretical distribution (here defined by $F_h$) and an empirical distribution derived from an observed sample.
Ok, it remains to define this distance $d$, i.e. the test to use:
for discrete distributions (TODO: "if $L_h$ is discrete"), we use the chi-squared ($\chi^2$) test
for continuous distributions, we instead use the Kolmogorov-Smirnov test
$$d(F_h, F_e) = \sum_{i=1}^{k}\frac{(n_{ei} - n_{hi})^2}{n_{hi}}$$
where:
$n_{ei}$ is the observed count for $x_i$ (the number of observations of $x_i$)
$n_{hi}$ is the theoretical count for $x_i$, with $n_{hi} = np_i$.
Example for a sample of 100 coin tosses
| $x_i$    | tails | heads |
| -------- | ----- | ----- |
| $n_{ei}$ | 47    | 53    |
| $n_{hi}$ | 50    | 50    |
$$d(F_h, F_e) = \frac{(47 - 50)^2}{50} + \frac{(53 - 50)^2}{50} = 0.36$$
Example
Definition of the (supposedly unknown) probability distribution
The probability distribution to recover is a binomial distribution $\mathcal{B}(100, 0.25)$:
End of explanation
"""
import numpy as np
k = 100 # sample size
echantillon = np.random.binomial(n=n, p=p, size=k)
#np.random.normal(loc=m, scale=sigma, size=k)
"""
Explanation: $E(X) = np = 25$
$V(X) = np(1-p) = 18.75$
$STD(X) = \sqrt{18.75} \simeq 4.33$
Definition of the available sample
End of explanation
"""
%matplotlib inline
import matplotlib.pyplot as plt
plt.hist(echantillon,
bins=np.linspace(echantillon.min(), echantillon.max(), echantillon.max() - echantillon.min() + 1));
plt.hist(echantillon,
bins=np.linspace(echantillon.min(), echantillon.max(), echantillon.max() - echantillon.min() + 1),
cumulative=True);
#print("moyenne:", m)
#print("écart type:", sigma)
print("moyenne empirique de l'échantillon:", echantillon.mean())
print("écart type empirique de l'échantillon:", echantillon.std())
"""
Explanation: Empirical distribution
End of explanation
"""
def dist_chi2(x, *param):
n = x[0]
p = x[1]
dist = 0
n_xi = 10 # TODO
for xi in range(n_xi):
n_ei = 0 # TODO
n_hi = 0 # TODO
dist += ((n_ei - n_hi)**2) / n_hi
return dist
from scipy import optimize
n_slice = slice(1., 200., 1.)
p_slice = slice(0.1, 1.0, 0.1)
search_ranges = (n_slice, p_slice)
#res = optimize.brute(dist_chi2,
# search_ranges,
# #args=params,
# full_output=True,
# finish=optimize.fmin)
#print("x* =", res[0])
#print("f(x*) =", res[1])
"""
Explanation: Definition and testing of the hypotheses
Minimize $d$
End of explanation
"""
|
Kismuz/btgym
|
examples/portfolio_setup_BETA.ipynb
|
lgpl-3.0
|
from logbook import INFO, WARNING, DEBUG
import warnings
warnings.filterwarnings("ignore") # suppress h5py deprecation warning
import numpy as np
import os
import backtrader as bt
from btgym.research.casual_conv.strategy import CasualConvStrategyMulti
from btgym.research.casual_conv.networks import conv_1d_casual_attention_encoder
from btgym.algorithms.policy import StackedLstmPolicy
from btgym.algorithms.launcher.base import Launcher
from btgym.algorithms.aac import A3C
from btgym import PortfolioEnv
from btgym.datafeed.casual import BTgymCasualDataDomain
from btgym.datafeed.multi import BTgymMultiData
"""
Explanation: BETA Continuous actions multi-data, multi-asset setup intro.
End of explanation
"""
engine = bt.Cerebro()
num_features = 16
engine.addstrategy(
CasualConvStrategyMulti,
cash_name='EUR',
start_cash=2000,
commission=0.0001,
leverage=10.0,
asset_names={'USD', 'CHF', 'GBP', 'JPY'},
drawdown_call=10, # max % to loose, in percent of initial cash
target_call=10, # max % to win, same
skip_frame=10,
gamma=0.99,
state_ext_scale = {
'USD': np.linspace(1, 2, num=num_features),
'GBP': np.linspace(1, 2, num=num_features),
'CHF': np.linspace(1, 2, num=num_features),
'JPY': np.linspace(5e-3, 1e-2, num=num_features),
},
cwt_signal_scale=4e3,
cwt_lower_bound=4.0, # CWT scales TODO: 8.? (was : 3.)
cwt_upper_bound=90.0,
reward_scale=7,
)
data_config = {
'USD': {'filename': './data/DAT_ASCII_EURUSD_M1_2017.csv'},
'GBP': {'filename': './data/DAT_ASCII_EURGBP_M1_2017.csv'},
'JPY': {'filename': './data/DAT_ASCII_EURJPY_M1_2017.csv'},
'CHF': {'filename': './data/DAT_ASCII_EURCHF_M1_2017.csv'},
}
dataset = BTgymMultiData(
data_class_ref=BTgymCasualDataDomain,
data_config=data_config,
trial_params=dict(
start_weekdays={0, 1, 2, 3, 4, 5, 6},
sample_duration={'days': 30, 'hours': 0, 'minutes': 0},
start_00=False,
time_gap={'days': 15, 'hours': 0},
test_period={'days': 7, 'hours': 0, 'minutes': 0},
expanding=True,
),
episode_params=dict(
start_weekdays={0, 1, 2, 3, 4, 5, 6},
sample_duration={'days': 2, 'hours': 23, 'minutes': 55},
start_00=False,
time_gap={'days': 2, 'hours': 15},
),
frozen_time_split={'year': 2017, 'month': 3, 'day': 1},
)
#########################
env_config = dict(
class_ref=PortfolioEnv,
kwargs=dict(
dataset=dataset,
engine=engine,
render_modes=['episode'],
render_state_as_image=True,
render_size_episode=(12,16),
render_size_human=(9, 4),
render_size_state=(11, 3),
render_dpi=75,
port=5000,
data_port=4999,
connect_timeout=90,
verbose=0,
)
)
cluster_config = dict(
host='127.0.0.1',
port=12230,
num_workers=4, # Set according CPU's available or so
num_ps=1,
num_envs=1,
log_dir=os.path.expanduser('~/tmp/multi_continiuous'),
)
policy_config = dict(
class_ref=StackedLstmPolicy,
kwargs={
'action_dp_alpha': 200, # Dirichlet process sampling inverse variance
'lstm_layers': (256, 256),
'dropout_keep_prob': 1.0,
'encode_internal_state': False,
'conv_1d_num_filters': 64,
'share_encoder_params': False,
'state_encoder_class_ref': conv_1d_casual_attention_encoder,
}
)
trainer_config = dict(
class_ref=A3C,
kwargs=dict(
opt_learn_rate=1e-4,
opt_end_learn_rate=1e-5,
opt_decay_steps=50*10**6,
model_gamma=0.99,
model_gae_lambda=1.0,
model_beta=0.001, # entropy reg
rollout_length=20,
time_flat=True,
model_summary_freq=10,
episode_summary_freq=1,
env_render_freq=5,
)
)
"""
Explanation: Problem formulation
This setup closely relates to the continuous portfolio optimisation problem definition;
it differs from the discrete actions setup in that:
1. base broker actions are real numbers: a[i] in [0,1], 0<=i<=K, SUM{a[i]} = 1 for K risky assets added;
each action is a market target order to adjust the portfolio to get a share of a[i]*100% for the i-th asset;
2. the entire single-step broker action is a dictionary of the form:
{cash_name: a[0], asset_name_1: a[1], ..., asset_name_K: a[K]};
3. short selling is not permitted.
Model
For RL this implies having a continuous action space as a K+1 dim vector.
Environment setup
0. Problem definition.
Again, consider a setup with one riskless asset acting as the broker account cash and K risky assets.
For every risky asset there exists a track of historic price records referred to as a `data-line`.
Apart from the asset data lines, there may exist a number of exogenous data lines.
It is supposed for this setup that:
i. there is no interest rate for base (riskless) asset;
ii. short selling is not permitted;
iii. transaction costs are modelled via broker commission;
iv. 'market liquidity' and 'capital impact' assumptions are met;
v. time indexes match for all data lines provided;
1. Assets and datalines.
This environment expects the Dataset to be an instance of `btgym.datafeed.multi.BTgymMultiData`, which sets
the number, specifications and sampling synchronisation of historic data for all assets and data lines.
Namely, one should define a data_config dictionary of `data lines` and a list of `assets`.
`data_config` specifies all data sources used by the strategy,
while `assets` defines the subset of `data lines` which is supposed to hold historic data
for risky portfolio assets.
Internally, every episodic asset's data is converted to a single bt.feed and added to the environment strategy
as a separate named data_line (see the backtrader docs for an extensive explanation of the data_lines concept).
Every non-asset data line is also added as a bt.feed, with the difference that it is not 'tradable', i.e.
it is impossible to issue trade orders on such a line.
The strategy is expected to properly handle all received data-lines.
Example::
1. Four data streams added via Dataset.data_config,
portfolio consists of four assets, added via strategy_params, cash is EUR:
data_config = {
'usd': {'filename': '.../DAT_ASCII_EURUSD_M1_2017.csv'},
'gbp': {'filename': '.../DAT_ASCII_EURGBP_M1_2017.csv'},
'jpy': {'filename': '.../DAT_ASCII_EURJPY_M1_2017.csv'},
'chf': {'filename': '.../DAT_ASCII_EURCHF_M1_2017.csv'},
}
cash_name = 'eur'
assets_names = ['usd', 'gbp', 'jpy', 'chf']
2. Three streams added, only two of them form portfolio; DXY stream is `decision-making` only:
data_config = {
'usd': {'filename': '.../DAT_ASCII_EURUSD_M1_2017.csv'},
'gbp': {'filename': '.../DAT_ASCII_EURGBP_M1_2017.csv'},
'DXY': {'filename': '.../DAT_ASCII_DXY_M1_2017.csv'},
}
cash_name = 'eur'
assets_names = ['usd', 'gbp']
2. btgym.spaces.ActionDictSpace and order execution.
ActionDictSpace is an extension of OpenAI Gym DictSpace providing domain-specific functionality.
The strategy expects to receive a separate action for each of the K+1 assets in the form of a dictionary:
`{cash_name: a[0], asset_name_1: a[1], ..., asset_name_K: a[K]}` for K risky assets added,
where base actions are real numbers: `a[i] in [0,1], 0<=i<=K, SUM{a[i]} = 1`. The whole action should be
interpreted as an order to adjust the portfolio to have a share of `a[i] * 100%` for the i-th asset.
Therefore, base actions are gym.spaces.Box and for K assets the environment action space will be a shallow
DictSpace of K+1 continuous spaces: `{cash_name: gym.spaces.Box(low=0, high=1),
asset_name_1: gym.spaces.Box(low=0, high=1), ..., asset_name_K: gym.spaces.Box(low=0, high=1)}`
3. TODO: refine order execution control,
see: https://community.backtrader.com/topic/152/multi-asset-ranking-and-rebalancing/2?page=1
Reference:
MultiDataFeed: https://kismuz.github.io/btgym/btgym.datafeed.html#btgym.datafeed.multi.BTgymMultiData
ActionSpace: https://kismuz.github.io/btgym/btgym.html#btgym.spaces.ActionDictSpace
MultiDiscreteEnv: https://kismuz.github.io/btgym/btgym.envs.html#btgym.envs.multidiscrete.MultiDiscreteEnv
PortfolioEnv: https://kismuz.github.io/btgym/btgym.envs.html#btgym.envs.portfolio.PortfolioEnv
End of explanation
"""
env = PortfolioEnv(**env_config['kwargs'])
o = env.reset()
# Should manually normalize actions as we see it as sum of assets shares (or prob. distribution),
# but do not pose this constraint to action_space:
a = env.action_space.sample()
norm = np.asarray(list(a.values())).sum()
a_norm = {k: v/norm for k, v in a.items()}
for k, v in a_norm.items(): print('{}: {:1.2f}'.format(k, v[0]))
# NOTE: to see actual order execution feedback via `info` part one need to set strategy param: skip_frame=2,
o, r, d, i = env.step(a_norm)
for k, v in i[0].items(): print('{}: {}'.format(k, v))
env.close()
"""
Explanation: First, one can manually play with the environment:
End of explanation
"""
launcher = Launcher(
cluster_config=cluster_config,
env_config=env_config,
trainer_config=trainer_config,
policy_config=policy_config,
test_mode=False,
max_env_steps=100*10**6,
root_random_seed=0,
purge_previous=1, # ask to override previously saved model and logs
verbose=0
)
# Train it:
launcher.run()
"""
Explanation: Run training (do not expect it to converge though):
End of explanation
"""
|
ihmeuw/dismod_mr
|
examples/expert_prior_explorer.ipynb
|
agpl-3.0
|
# if dismod_mr is not installed, it should be possible to use
# !conda install --yes pymc
# !pip install dismod_mr
import dismod_mr
"""
Explanation: Expert priors in DisMod-MR
Take a look at some of the expert priors for the age-specific rate model in DisMod-MR.
End of explanation
"""
from IPython.core.pylabtools import figsize
figsize(11, 5.5)
dm = dismod_mr.data.ModelData()
dm.set_level_bounds('p', lower=0, upper=.1)
dm.setup_model('p')
%time dm.fit(how='mcmc', iter=20_000, burn=10_000, thin=10)
dm.plot()
dm = dismod_mr.data.ModelData()
dm.set_level_bounds('p', lower=0, upper=.1)
dm.set_knots('p', [0,100])
dm.setup_model('p')
%time dm.fit(how='mcmc', iter=20_000, burn=10_000, thin=10)
dm.plot()
dm = dismod_mr.data.ModelData()
dm.set_level_bounds('p', lower=0, upper=1)
dm.set_level_value('p', age_before=1, age_after=100, value=.2)
dm.set_knots('p', [0, 50, 100])
dm.setup_model('p')
%time dm.fit(how='mcmc', iter=20_000, burn=10_000, thin=10)
dm.plot()
dm = dismod_mr.data.ModelData()
dm.set_level_bounds('p', lower=0, upper=1)
dm.set_knots('p', [0, 50, 100])
dm.set_increasing('p', age_start=0, age_end=100) # FIXME: seems like this isn't working
dm.setup_model('p')
%time dm.fit(how='mcmc', iter=20_000, burn=10_000, thin=10)
dm.plot()
!date
"""
Explanation: Fit with no data
Let's start with a model with no data, to see the default prior.
End of explanation
"""
|
google-coral/tutorials
|
run_colab_on_devboard.ipynb
|
apache-2.0
|
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Explanation: Copyright 2019 Google LLC
Licensed under the Apache License, Version 2.0 (the "License")
End of explanation
"""
! sudo apt-get update
! sudo apt-get install edgetpu-examples
"""
Explanation: Run Colab on a Coral Dev Board
This tutorial shows you how to run Jupyter notebooks on your Coral Dev Board, and then edit and execute them from your connected computer using the Google Colab interface.
By the end of this tutorial, you'll connect this notebook to your Dev Board and run an inference on the board using code from this page.
<a href="https://colab.research.google.com/github/google-coral/tutorials/blob/master/run_colab_on_devboard.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open in Colab"></a>
<a href="https://github.com/google-coral/tutorials/blob/master/run_colab_on_devboard.ipynb" target="_parent"><img src="https://img.shields.io/static/v1?logo=GitHub&label=&color=333333&style=flat&message=View%20on%20GitHub" alt="View in GitHub"></a>
Requirements
You need a Coral Dev Board and it should be set up with the latest software.
If you run cat /etc/mendel_version on the Dev Board, it should print 4.0 or higher.
The computer you're using to read this tutorial must have MDT installed.
Install Jupyter
First you need to install Jupyter on your Dev Board as follows:
Open a shell on your Dev Board:
bash
mdt shell
Install the Python 3 development tools on the board:
```bash
sudo apt-get update
sudo apt-get install python3-dev
```
Install Jupyter:
bash
pip3 install jupyter
Reload the .profile file to access Jupyter's location (at ~/.local/bin):
source $HOME/.profile
Verify it works by running jupyter --version.
If this prints command not found, then your Mendel version might be out of date or the Jupyter installation failed.
Start Jupyter on the Dev Board
Before starting Jupyter on the Dev Board, you need to install jupyter_http_over_ws, which enables your host computer to communicate with Jupyter via WebSocket:
```bash
pip3 install jupyter_http_over_ws
jupyter serverextension enable --py jupyter_http_over_ws
```
Once that's done, start Jupyter as follows:
bash
jupyter notebook \
--NotebookApp.allow_origin='https://colab.research.google.com' \
--port=8888 \
--NotebookApp.port_retries=0
This outputs several messages, including a URL such as http://localhost:8888/?token=.... You'll
need to come back and copy this token later.
Be sure you leave this terminal open because it's now running the Jupyter notebook.
Start SSH port forwarding
Although Jupyter is now running, it's only accessible from the Dev Board. To access it from the host computer, you need to forward your localhost traffic to the Dev Board with an SSH tunnel. The effect is to make Google Colab think Jupyter notebook is running on your host computer even though it's actually running on the Dev Board.
Open a new terminal on your host computer (where you run mdt shell) and run this command:
bash
ssh -N -L 8888:localhost:8888 mendel@192.168.100.2 -i ~/.config/mdt/keys/mdt.key
If asked, Are you sure you want to continue connecting?, type yes.
If you see a warning that says REMOTE HOST IDENTIFICATION HAS CHANGED, then you need to open the known_hosts file and remove the line specifying the key for 192.168.100.2. Then try again.
Now leave this window active because this command is what keeps the SSH tunnel open between the Dev Board and the host computer.
Connect this Colab to the local runtime
Now login to the Jupyter notebook and connect this Colab:
Open http://localhost:8888 in a browser on the host computer.
You should see a Jupyter notebook page asking for "Password or token" at the top. Paste the token printed by the jupyter notebook command above (just the part following ?token=), and click Log in.
Now, on this tutorial page's Colab interface, click the drop-down arrow next to Connect (near the top-right corner of the web page) and click Connect to local runtime.
Use 8888 as the port and click Connect.
The button should now say "Connected (Local)".
This notebook (the page you're reading) is now connected to your Dev Board and you can run code from this page on the board!
Run notebook code on the Dev Board
You can execute the code blocks below one at a time or run them all by selecting Runtime > Run all in the toolbar.
First you need to download pre-compiled models and data from the Edge TPU API examples:
End of explanation
"""
from edgetpu.classification.engine import ClassificationEngine
from edgetpu.utils import dataset_utils
from PIL import Image
# Prepare labels.
labels = dataset_utils.read_label_file('/usr/share/edgetpu/examples/models/inat_bird_labels.txt')
# Initialize engine.
engine = ClassificationEngine('/usr/share/edgetpu/examples/models/mobilenet_v2_1.0_224_inat_bird_quant_edgetpu.tflite')
# Run inference.
img = Image.open('/usr/share/edgetpu/examples/images/parrot.jpg')
for result in engine.classify_with_image(img, top_k=3):
print('---------------------------')
print(labels[result[0]])
print('Score : ', result[1])
"""
Explanation: Then you can run an inference with this code:
End of explanation
"""
|
xpharry/Udacity-DLFoudation
|
tutorials/sentiment_network/.ipynb_checkpoints/Sentiment Classification - How to Best Frame a Problem for a Neural Network - -checkpoint.ipynb
|
mit
|
def pretty_print_review_and_label(i):
print(labels[i] + "\t:\t" + reviews[i][:80] + "...")
g = open('reviews.txt','r')
reviews = list(map(lambda x:x[:-1],g.readlines()))
g.close()
g = open('labels.txt','r')
labels = list(map(lambda x:x[:-1].upper(),g.readlines()))
g.close()
reviews[0]
labels[0]
print("labels.txt \t : \t reviews.txt\n")
pretty_print_review_and_label(2137)
pretty_print_review_and_label(12816)
pretty_print_review_and_label(6267)
pretty_print_review_and_label(21934)
pretty_print_review_and_label(5297)
pretty_print_review_and_label(4998)
import numpy as np
from collections import Counter
positive_counts = Counter()
negative_counts = Counter()
total_counts = Counter()
for i in range(len(reviews)):
if(labels[i] == 'POSITIVE'):
for word in reviews[i].split(" "):
positive_counts[word] += 1
total_counts[word] += 1
else:
for word in reviews[i].split(" "):
negative_counts[word] += 1
total_counts[word] += 1
pos_neg_ratios = Counter()
for term,cnt in list(total_counts.most_common()):
if(cnt > 10):
pos_neg_ratio = positive_counts[term] / float(negative_counts[term]+1)
pos_neg_ratios[term] = pos_neg_ratio
for word,ratio in pos_neg_ratios.most_common():
if(ratio > 1):
pos_neg_ratios[word] = np.log(ratio)
else:
pos_neg_ratios[word] = -np.log((1 / (ratio+0.01)))
# words most frequently seen in a review with a "POSITIVE" label
pos_neg_ratios.most_common()
# words most frequently seen in a review with a "NEGATIVE" label
list(reversed(pos_neg_ratios.most_common()))[0:30]
"""
Explanation: It Starts with a Dataset
End of explanation
"""
from IPython.display import Image
review = "This was a horrible, terrible movie."
Image(filename='sentiment_network.png')
review = "The movie was excellent"
Image(filename='sentiment_network_pos.png')
"""
Explanation: Transforming Text to Numbers
Example Predictions
End of explanation
"""
vocab = set(total_counts.keys())
vocab_size = len(vocab)
print(vocab_size)
"""
Explanation: Creating the Input Data
End of explanation
"""
import numpy as np
layer_0 = np.zeros((1,vocab_size))
layer_0
"""
Explanation: And now we can initialize our (empty) input layer as a vector of 0s. We'll modify it later by putting "1"s in various positions.
End of explanation
"""
from IPython.display import Image
Image(filename='sentiment_network.png')
"""
Explanation: And now we want to create a function that will set our layer_0 list to the correct sequence of 1s and 0s based on a single review. Now if you remember our picture before, you might have noticed something. Each word had a specific place in the input of our network.
End of explanation
"""
word2index = {}
for i,word in enumerate(vocab):
word2index[word] = i
word2index
"""
Explanation: In order to create a function that can update our layer_0 variable based on a review, we have to decide which spots in our layer_0 vector (list of numbers) correlate with each word. Truth be told, it doesn't matter which ones we choose, only that we pick spots for each word and stick with them. Let's decide those positions now and store them in a python dictionary called "word2index".
End of explanation
"""
def update_input_layer(review):
global layer_0
# clear out previous state, reset the layer to be all 0s
layer_0 *= 0
for word in review.split(" "):
layer_0[0][word2index[word]] = 1
update_input_layer(reviews[0])
layer_0
"""
Explanation: ...and now we can use this new "word2index" dictionary to populate our input layer with the right 1s in the right places.
End of explanation
"""
def get_target_for_label(label):
if(label == 'POSITIVE'):
return 1
else:
return 0
get_target_for_label(labels[0])
get_target_for_label(labels[1])
"""
Explanation: Creating the Target Data
End of explanation
"""
from IPython.display import Image
Image(filename='sentiment_network_2.png')
import time
import sys
import numpy as np
# Let's tweak our network from before to model these phenomena
class SentimentNetwork:
def __init__(self, reviews,labels,hidden_nodes = 10, learning_rate = 0.1):
np.random.seed(1)
self.pre_process_data()
self.init_network(len(self.review_vocab),hidden_nodes, 1, learning_rate)
def pre_process_data(self):
review_vocab = set()
for review in reviews:
for word in review.split(" "):
review_vocab.add(word)
self.review_vocab = list(review_vocab)
label_vocab = set()
for label in labels:
label_vocab.add(label)
self.label_vocab = list(label_vocab)
self.review_vocab_size = len(self.review_vocab)
self.label_vocab_size = len(self.label_vocab)
self.word2index = {}
for i, word in enumerate(self.review_vocab):
self.word2index[word] = i
self.label2index = {}
for i, label in enumerate(self.label_vocab):
self.label2index[label] = i
def init_network(self, input_nodes, hidden_nodes, output_nodes, learning_rate):
# Set number of nodes in input, hidden and output layers.
self.input_nodes = input_nodes
self.hidden_nodes = hidden_nodes
self.output_nodes = output_nodes
# Initialize weights
self.weights_0_1 = np.zeros((self.input_nodes,self.hidden_nodes))
self.weights_1_2 = np.random.normal(0.0, self.output_nodes**-0.5,
(self.hidden_nodes, self.output_nodes))
self.learning_rate = learning_rate
self.layer_0 = np.zeros((1,input_nodes))
def update_input_layer(self,review):
# clear out previous state, reset the layer to be all 0s
self.layer_0 *= 0
for word in review.split(" "):
if(word in self.word2index.keys()):
self.layer_0[0][self.word2index[word]] = 1
def get_target_for_label(self,label):
if(label == 'POSITIVE'):
return 1
else:
return 0
def sigmoid(self,x):
return 1 / (1 + np.exp(-x))
def sigmoid_output_2_derivative(self,output):
return output * (1 - output)
def train(self, training_reviews, training_labels):
assert(len(training_reviews) == len(training_labels))
correct_so_far = 0
start = time.time()
for i in range(len(training_reviews)):
review = training_reviews[i]
label = training_labels[i]
#### Implement the forward pass here ####
### Forward pass ###
# Input Layer
self.update_input_layer(review)
# Hidden layer
layer_1 = self.layer_0.dot(self.weights_0_1)
# Output layer
layer_2 = self.sigmoid(layer_1.dot(self.weights_1_2))
#### Implement the backward pass here ####
### Backward pass ###
# TODO: Output error
layer_2_error = layer_2 - self.get_target_for_label(label) # Output layer error is the difference between desired target and actual output.
layer_2_delta = layer_2_error * self.sigmoid_output_2_derivative(layer_2)
# TODO: Backpropagated error
layer_1_error = layer_2_delta.dot(self.weights_1_2.T) # errors propagated to the hidden layer
layer_1_delta = layer_1_error # hidden layer gradients - no nonlinearity so it's the same as the error
# TODO: Update the weights
self.weights_1_2 -= layer_1.T.dot(layer_2_delta) * self.learning_rate # update hidden-to-output weights with gradient descent step
self.weights_0_1 -= self.layer_0.T.dot(layer_1_delta) * self.learning_rate # update input-to-hidden weights with gradient descent step
if(np.abs(layer_2_error) < 0.5):
correct_so_far += 1
reviews_per_second = i / float(time.time() - start)
sys.stdout.write("\rProgress:" + str(100 * i/float(len(training_reviews)))[:4] + "% Speed(reviews/sec):" + str(reviews_per_second)[0:5] + " #Correct:" + str(correct_so_far) + " #Trained:" + str(i+1) + " Training Accuracy:" + str(correct_so_far * 100 / float(i+1))[:4] + "%")
def test(self, testing_reviews, testing_labels):
correct = 0
start = time.time()
for i in range(len(testing_reviews)):
pred = self.run(testing_reviews[i])
if(pred == testing_labels[i]):
correct += 1
reviews_per_second = i / float(time.time() - start)
sys.stdout.write("\rProgress:" + str(100 * i/float(len(testing_reviews)))[:4] \
+ "% Speed(reviews/sec):" + str(reviews_per_second)[0:5] \
+ "% #Correct:" + str(correct) + " #Tested:" + str(i+1) + " Testing Accuracy:" + str(correct * 100 / float(i+1))[:4] + "%")
def run(self, review):
# Input Layer
self.update_input_layer(review.lower())
# Hidden layer
layer_1 = self.layer_0.dot(self.weights_0_1)
# Output layer
layer_2 = self.sigmoid(layer_1.dot(self.weights_1_2))
if(layer_2[0] > 0.5):
return "POSITIVE"
else:
return "NEGATIVE"
mlp = SentimentNetwork(reviews[:-1000],labels[:-1000])
# evaluate our model before training (just to show how horrible it is)
mlp.test(reviews[-1000:],labels[-1000:])
# train the network
mlp.train(reviews[:-1000],labels[:-1000])
# evaluate the model after training
mlp.test(reviews[-1000:],labels[-1000:])
mlp.run("That movie was great")
"""
Explanation: Putting it all together in a Neural Network
End of explanation
"""
layer_0 = np.zeros(10)
layer_0
layer_0[4] = 1
layer_0[9] = 1
layer_0
weights_0_1 = np.random.randn(10,5)
layer_1 = layer_0.dot(weights_0_1)
layer_1
Image(filename='sentiment_network_sparse.png')
"""
Explanation: Making our Network Train and Run Faster
End of explanation
"""
Image(filename='sentiment_network_sparse_2.png')
"""
Explanation: First Inefficiency: "0" neurons waste computation
End of explanation
"""
#inefficient thing we did before
layer_1 = layer_0.dot(weights_0_1)
layer_1
# new, less expensive lookup table version
layer_1 = weights_0_1[4] + weights_0_1[9]
layer_1
"""
Explanation: Second Inefficiency: "1" neurons don't need to multiply!
The Solution: Create layer_1 by adding the vectors for each word.
End of explanation
"""
import time
import sys
# Let's tweak our network from before to model these phenomena
class SentimentNetwork:
def __init__(self, reviews,labels,hidden_nodes = 10, learning_rate = 0.1):
np.random.seed(1)
self.pre_process_data(reviews)
self.init_network(len(self.review_vocab),hidden_nodes, 1, learning_rate)
def pre_process_data(self,reviews):
review_vocab = set()
for review in reviews:
for word in review.split(" "):
review_vocab.add(word)
self.review_vocab = list(review_vocab)
label_vocab = set()
for label in labels:
label_vocab.add(label)
self.label_vocab = list(label_vocab)
self.review_vocab_size = len(self.review_vocab)
self.label_vocab_size = len(self.label_vocab)
self.word2index = {}
for i, word in enumerate(self.review_vocab):
self.word2index[word] = i
self.label2index = {}
for i, label in enumerate(self.label_vocab):
self.label2index[label] = i
def init_network(self, input_nodes, hidden_nodes, output_nodes, learning_rate):
# Set number of nodes in input, hidden and output layers.
self.input_nodes = input_nodes
self.hidden_nodes = hidden_nodes
self.output_nodes = output_nodes
# Initialize weights
self.weights_0_1 = np.zeros((self.input_nodes,self.hidden_nodes))
self.weights_1_2 = np.random.normal(0.0, self.output_nodes**-0.5,
(self.hidden_nodes, self.output_nodes))
self.learning_rate = learning_rate
self.layer_0 = np.zeros((1,input_nodes))
self.layer_1 = np.zeros((1,hidden_nodes))
def sigmoid(self,x):
return 1 / (1 + np.exp(-x))
def sigmoid_output_2_derivative(self,output):
return output * (1 - output)
def update_input_layer(self,review):
# clear out previous state, reset the layer to be all 0s
self.layer_0 *= 0
for word in review.split(" "):
self.layer_0[0][self.word2index[word]] = 1
def get_target_for_label(self,label):
if(label == 'POSITIVE'):
return 1
else:
return 0
def train(self, training_reviews_raw, training_labels):
training_reviews = list()
for review in training_reviews_raw:
indices = set()
for word in review.split(" "):
if(word in self.word2index.keys()):
indices.add(self.word2index[word])
training_reviews.append(list(indices))
assert(len(training_reviews) == len(training_labels))
correct_so_far = 0
start = time.time()
for i in range(len(training_reviews)):
review = training_reviews[i]
label = training_labels[i]
#### Implement the forward pass here ####
### Forward pass ###
# Input Layer
# Hidden layer
# layer_1 = self.layer_0.dot(self.weights_0_1)
self.layer_1 *= 0
for index in review:
self.layer_1 += self.weights_0_1[index]
# Output layer
layer_2 = self.sigmoid(self.layer_1.dot(self.weights_1_2))
#### Implement the backward pass here ####
### Backward pass ###
# Output error
layer_2_error = layer_2 - self.get_target_for_label(label) # Output layer error is the difference between desired target and actual output.
layer_2_delta = layer_2_error * self.sigmoid_output_2_derivative(layer_2)
# Backpropagated error
layer_1_error = layer_2_delta.dot(self.weights_1_2.T) # errors propagated to the hidden layer
layer_1_delta = layer_1_error # hidden layer gradients - no nonlinearity so it's the same as the error
# Update the weights
self.weights_1_2 -= self.layer_1.T.dot(layer_2_delta) * self.learning_rate # update hidden-to-output weights with gradient descent step
for index in review:
self.weights_0_1[index] -= layer_1_delta[0] * self.learning_rate # update input-to-hidden weights with gradient descent step
if(np.abs(layer_2_error) < 0.5):
correct_so_far += 1
reviews_per_second = i / float(time.time() - start)
sys.stdout.write("\rProgress:" + str(100 * i/float(len(training_reviews)))[:4] + "% Speed(reviews/sec):" + str(reviews_per_second)[0:5] + " #Correct:" + str(correct_so_far) + " #Trained:" + str(i+1) + " Training Accuracy:" + str(correct_so_far * 100 / float(i+1))[:4] + "%")
def test(self, testing_reviews, testing_labels):
correct = 0
start = time.time()
for i in range(len(testing_reviews)):
pred = self.run(testing_reviews[i])
if(pred == testing_labels[i]):
correct += 1
reviews_per_second = i / float(time.time() - start)
sys.stdout.write("\rProgress:" + str(100 * i/float(len(testing_reviews)))[:4] \
+ "% Speed(reviews/sec):" + str(reviews_per_second)[0:5] \
+ "% #Correct:" + str(correct) + " #Tested:" + str(i+1) + " Testing Accuracy:" + str(correct * 100 / float(i+1))[:4] + "%")
def run(self, review):
# Input Layer
# Hidden layer
self.layer_1 *= 0
unique_indices = set()
for word in review.lower().split(" "):
if word in self.word2index.keys():
unique_indices.add(self.word2index[word])
for index in unique_indices:
self.layer_1 += self.weights_0_1[index]
# Output layer
layer_2 = self.sigmoid(self.layer_1.dot(self.weights_1_2))
if(layer_2[0] > 0.5):
return "POSITIVE"
else:
return "NEGATIVE"
mlp_full = SentimentNetwork(reviews[:-1000],labels[:-1000],learning_rate=0.01)
# train the network
mlp_full.train(reviews[:-1000],labels[:-1000])
# evaluate our model before training (just to show how horrible it is)
mlp_full.test(reviews[-1000:],labels[-1000:])
"""
Explanation: See how they generate exactly the same value? Let's update our new neural network to do this.
End of explanation
"""
# words most frequently seen in a review with a "POSITIVE" label
pos_neg_ratios.most_common()
# words most frequently seen in a review with a "NEGATIVE" label
list(reversed(pos_neg_ratios.most_common()))[0:30]
from bokeh.models import ColumnDataSource, LabelSet
from bokeh.plotting import figure, show, output_file
from bokeh.io import output_notebook
output_notebook()
hist, edges = np.histogram(list(map(lambda x:x[1],pos_neg_ratios.most_common())), density=True, bins=100)
p = figure(tools="pan,wheel_zoom,reset,save",
toolbar_location="above",
title="Word Positive/Negative Affinity Distribution")
p.quad(top=hist, bottom=0, left=edges[:-1], right=edges[1:], line_color="#555555")
show(p)
frequency_frequency = Counter()
for word, cnt in total_counts.most_common():
frequency_frequency[cnt] += 1
hist, edges = np.histogram(list(map(lambda x:x[1],frequency_frequency.most_common())), density=True, bins=100)
p = figure(tools="pan,wheel_zoom,reset,save",
toolbar_location="above",
title="The frequency distribution of the words in our corpus")
p.quad(top=hist, bottom=0, left=edges[:-1], right=edges[1:], line_color="#555555")
show(p)
import time
import sys
import numpy as np
# Let's tweak our network from before to model these phenomena
class SentimentNetwork:
def __init__(self, reviews,labels,min_count = 10,polarity_cutoff = 0.1,hidden_nodes = 10, learning_rate = 0.1):
np.random.seed(1)
self.pre_process_data(reviews, polarity_cutoff, min_count)
self.init_network(len(self.review_vocab),hidden_nodes, 1, learning_rate)
def pre_process_data(self,reviews, polarity_cutoff,min_count):
positive_counts = Counter()
negative_counts = Counter()
total_counts = Counter()
for i in range(len(reviews)):
if(labels[i] == 'POSITIVE'):
for word in reviews[i].split(" "):
positive_counts[word] += 1
total_counts[word] += 1
else:
for word in reviews[i].split(" "):
negative_counts[word] += 1
total_counts[word] += 1
pos_neg_ratios = Counter()
for term,cnt in list(total_counts.most_common()):
if(cnt >= 50):
pos_neg_ratio = positive_counts[term] / float(negative_counts[term]+1)
pos_neg_ratios[term] = pos_neg_ratio
for word,ratio in pos_neg_ratios.most_common():
if(ratio > 1):
pos_neg_ratios[word] = np.log(ratio)
else:
pos_neg_ratios[word] = -np.log((1 / (ratio + 0.01)))
review_vocab = set()
for review in reviews:
for word in review.split(" "):
if(total_counts[word] > min_count):
if(word in pos_neg_ratios.keys()):
if((pos_neg_ratios[word] >= polarity_cutoff) or (pos_neg_ratios[word] <= -polarity_cutoff)):
review_vocab.add(word)
else:
review_vocab.add(word)
self.review_vocab = list(review_vocab)
label_vocab = set()
for label in labels:
label_vocab.add(label)
self.label_vocab = list(label_vocab)
self.review_vocab_size = len(self.review_vocab)
self.label_vocab_size = len(self.label_vocab)
self.word2index = {}
for i, word in enumerate(self.review_vocab):
self.word2index[word] = i
self.label2index = {}
for i, label in enumerate(self.label_vocab):
self.label2index[label] = i
def init_network(self, input_nodes, hidden_nodes, output_nodes, learning_rate):
# Set number of nodes in input, hidden and output layers.
self.input_nodes = input_nodes
self.hidden_nodes = hidden_nodes
self.output_nodes = output_nodes
# Initialize weights
self.weights_0_1 = np.zeros((self.input_nodes,self.hidden_nodes))
self.weights_1_2 = np.random.normal(0.0, self.output_nodes**-0.5,
(self.hidden_nodes, self.output_nodes))
self.learning_rate = learning_rate
self.layer_0 = np.zeros((1,input_nodes))
self.layer_1 = np.zeros((1,hidden_nodes))
def sigmoid(self,x):
return 1 / (1 + np.exp(-x))
def sigmoid_output_2_derivative(self,output):
return output * (1 - output)
def update_input_layer(self,review):
# clear out previous state, reset the layer to be all 0s
self.layer_0 *= 0
for word in review.split(" "):
self.layer_0[0][self.word2index[word]] = 1
def get_target_for_label(self,label):
if(label == 'POSITIVE'):
return 1
else:
return 0
def train(self, training_reviews_raw, training_labels):
training_reviews = list()
for review in training_reviews_raw:
indices = set()
for word in review.split(" "):
if(word in self.word2index.keys()):
indices.add(self.word2index[word])
training_reviews.append(list(indices))
assert(len(training_reviews) == len(training_labels))
correct_so_far = 0
start = time.time()
for i in range(len(training_reviews)):
review = training_reviews[i]
label = training_labels[i]
#### Implement the forward pass here ####
### Forward pass ###
# Input Layer
# Hidden layer
# layer_1 = self.layer_0.dot(self.weights_0_1)
self.layer_1 *= 0
for index in review:
self.layer_1 += self.weights_0_1[index]
# Output layer
layer_2 = self.sigmoid(self.layer_1.dot(self.weights_1_2))
#### Implement the backward pass here ####
### Backward pass ###
# Output error
layer_2_error = layer_2 - self.get_target_for_label(label) # Output layer error is the difference between desired target and actual output.
layer_2_delta = layer_2_error * self.sigmoid_output_2_derivative(layer_2)
# Backpropagated error
layer_1_error = layer_2_delta.dot(self.weights_1_2.T) # errors propagated to the hidden layer
layer_1_delta = layer_1_error # hidden layer gradients - no nonlinearity so it's the same as the error
# Update the weights
self.weights_1_2 -= self.layer_1.T.dot(layer_2_delta) * self.learning_rate # update hidden-to-output weights with gradient descent step
for index in review:
self.weights_0_1[index] -= layer_1_delta[0] * self.learning_rate # update input-to-hidden weights with gradient descent step
if(layer_2 >= 0.5 and label == 'POSITIVE'):
correct_so_far += 1
if(layer_2 < 0.5 and label == 'NEGATIVE'):
correct_so_far += 1
reviews_per_second = i / float(time.time() - start)
sys.stdout.write("\rProgress:" + str(100 * i/float(len(training_reviews)))[:4] + "% Speed(reviews/sec):" + str(reviews_per_second)[0:5] + " #Correct:" + str(correct_so_far) + " #Trained:" + str(i+1) + " Training Accuracy:" + str(correct_so_far * 100 / float(i+1))[:4] + "%")
def test(self, testing_reviews, testing_labels):
correct = 0
start = time.time()
for i in range(len(testing_reviews)):
pred = self.run(testing_reviews[i])
if(pred == testing_labels[i]):
correct += 1
reviews_per_second = i / float(time.time() - start)
sys.stdout.write("\rProgress:" + str(100 * i/float(len(testing_reviews)))[:4] \
+ "% Speed(reviews/sec):" + str(reviews_per_second)[0:5] \
+ "% #Correct:" + str(correct) + " #Tested:" + str(i+1) + " Testing Accuracy:" + str(correct * 100 / float(i+1))[:4] + "%")
def run(self, review):
# Input Layer
# Hidden layer
self.layer_1 *= 0
unique_indices = set()
for word in review.lower().split(" "):
if word in self.word2index.keys():
unique_indices.add(self.word2index[word])
for index in unique_indices:
self.layer_1 += self.weights_0_1[index]
# Output layer
layer_2 = self.sigmoid(self.layer_1.dot(self.weights_1_2))
if(layer_2[0] >= 0.5):
return "POSITIVE"
else:
return "NEGATIVE"
mlp = SentimentNetwork(reviews[:-1000],labels[:-1000],min_count=20,polarity_cutoff=0.05,learning_rate=0.01)
mlp.train(reviews[:-1000],labels[:-1000])
mlp.test(reviews[-1000:],labels[-1000:])
mlp = SentimentNetwork(reviews[:-1000],labels[:-1000],min_count=20,polarity_cutoff=0.8,learning_rate=0.01)
mlp.train(reviews[:-1000],labels[:-1000])
mlp.test(reviews[-1000:],labels[-1000:])
"""
Explanation: Making Learning Faster & Easier by Reducing Noise
End of explanation
"""
mlp_full = SentimentNetwork(reviews[:-1000],labels[:-1000],min_count=0,polarity_cutoff=0,learning_rate=0.01)
mlp_full.train(reviews[:-1000],labels[:-1000])
Image(filename='sentiment_network_sparse.png')
import matplotlib.colors as colors
words_to_visualize = list()
for word, ratio in pos_neg_ratios.most_common(500):
if(word in mlp_full.word2index.keys()):
words_to_visualize.append(word)
for word, ratio in list(reversed(pos_neg_ratios.most_common()))[0:500]:
if(word in mlp_full.word2index.keys()):
words_to_visualize.append(word)
pos = 0
neg = 0
colors_list = list()
vectors_list = list()
for word in words_to_visualize:
if word in pos_neg_ratios.keys():
vectors_list.append(mlp_full.weights_0_1[mlp_full.word2index[word]])
if(pos_neg_ratios[word] > 0):
pos+=1
colors_list.append("#"+colors.rgb2hex([0,min(255,pos_neg_ratios[word] * 1),0])[3:])
else:
neg+=1
colors_list.append("#000000")
# colors_list.append("#"+colors.rgb2hex([0,0,min(255,pos_neg_ratios[word] * 1)])[3:])
len(vectors_list)
len(colors_list)
from sklearn.manifold import TSNE
tsne = TSNE(n_components=2, random_state=0)
words_top_ted_tsne = tsne.fit_transform(vectors_list)
p = figure(tools="pan,wheel_zoom,reset,save",
toolbar_location="above",
title="vector T-SNE for most polarized words")
source = ColumnDataSource(data=dict(x1=words_top_ted_tsne[:,0],
x2=words_top_ted_tsne[:,1],
names=words_to_visualize))
p.scatter(x="x1", y="x2", size=8, source=source,color=colors_list)
word_labels = LabelSet(x="x1", y="x2", text="names", y_offset=6,
text_font_size="8pt", text_color="#555555",
source=source, text_align='center')
# p.add_layout(word_labels)
show(p)
# green indicates positive words, black indicates negative words
"""
Explanation: What's Going On in the Weights?
End of explanation
"""
|
Kaggle/learntools
|
notebooks/embeddings/raw/3-gensim.ipynb
|
apache-2.0
|
import os
import numpy as np
import pandas as pd
from matplotlib import pyplot as plt
import tensorflow as tf
from tensorflow import keras
#_RM_
input_dir = '../input/movielens_preprocessed'
#_UNCOMMENT_
#input_dir = '../input/movielens-preprocessing'
#_RM_
model_dir = '.'
#_UNCOMMENT_
#model_dir = '../input/movielens-spiffy-model'
model_path = os.path.join(model_dir, 'movie_svd_model_32.h5')
model = keras.models.load_model(model_path)
"""
Explanation: Earlier we trained a model to predict the ratings users would give to movies using a network with embeddings learned for each movie and user. Embeddings are powerful! But how do they actually work?
Previously, I claimed that embeddings capture the 'meaning' of the objects they represent, and discover useful latent structure. Let's put that to the test!
Looking up embeddings
Let's load a model we trained earlier so we can investigate the embedding weights that it learned.
End of explanation
"""
emb_layer = model.get_layer('movie_embedding')
(w,) = emb_layer.get_weights()
w.shape
"""
Explanation: The embedding weights are part of the model's internals, so we'll have to do a bit of digging around to access them. We'll grab the layer responsible for embedding movies, and use the get_weights() method to get its learned weights.
End of explanation
"""
w[0]
"""
Explanation: Our weight matrix has 26,744 rows for that many movies. Each row is 32 numbers - the size of our movie embeddings.
Let's look at an example movie vector:
End of explanation
"""
movies_path = os.path.join(input_dir, 'movie.csv')
movies_df = pd.read_csv(movies_path, index_col=0)
movies_df.head()
"""
Explanation: What movie is this the embedding of? Let's load up our dataframe of movie metadata.
End of explanation
"""
i_toy_story = 0
i_shrek = movies_df.loc[
movies_df.title == 'Shrek',
'movieId'
].iloc[0]
toy_story_vec = w[i_toy_story]
shrek_vec = w[i_shrek]
print(
toy_story_vec,
shrek_vec,
sep='\n',
)
"""
Explanation: Of course, it's Toy Story! I should have recognized that vector anywhere.
Okay, I'm being facetious. It's hard to make anything of these vectors at this point. We never directed the model about how to use any particular embedding dimension. We left it alone to learn whatever representation it found useful.
So how do we check whether these representations are sane and coherent?
Vector similarity
A simple way to test this is to look at how close or distant pairs of movies are in the embedding space. Embeddings can be thought of as a smart distance metric. If our embedding matrix is any good, it should map similar movies (like Toy Story and Shrek) to similar vectors.
End of explanation
"""
from scipy.spatial import distance
distance.euclidean(toy_story_vec, shrek_vec)
"""
Explanation: Comparing dimension-by-dimension, these look vaguely similar. If we wanted to assign a single number to their similarity, we could calculate the euclidean distance between these two vectors. (This is our conventional 'as the crow flies' notion of distance between two points. Easy to grok in 1, 2, or 3 dimensions. Mathematically, we can also extend it to 32 dimensions, though good luck visualizing it.)
End of explanation
"""
i_exorcist = movies_df.loc[
movies_df.title == 'The Exorcist',
'movieId'
].iloc[0]
exorcist_vec = w[i_exorcist]
distance.euclidean(toy_story_vec, exorcist_vec)
"""
Explanation: How does this compare to a pair of movies that we would think of as very different?
End of explanation
"""
print(
distance.cosine(toy_story_vec, shrek_vec),
distance.cosine(toy_story_vec, exorcist_vec),
sep='\n'
)
"""
Explanation: As expected, much further apart.
Cosine Distance
If you check out the docs for the scipy.spatial module, you'll see there are actually a lot of different measures of distance that people use for different tasks.
When judging the similarity of embeddings, it's more common to use cosine similarity.
In brief, the cosine similarity of two vectors ranges from -1 to 1, and is a function of the angle between the vectors. If two vectors point in the same direction, their cosine similarity is 1. If they point in opposite directions, it's -1. If they're orthogonal (i.e. at right angles), their cosine similarity is 0.
Cosine distance is just defined as 1 minus the cosine similarity (and therefore ranges from 0 to 2).
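As a quick toy check of that definition (made-up vectors here, not our movie embeddings), we can compute it by hand and compare against scipy:
```python
import numpy as np
from scipy.spatial import distance

a = np.array([1.0, 2.0, 3.0])
b = np.array([2.0, 4.0, 6.0])  # points in the same direction as a

cos_sim = a.dot(b) / (np.linalg.norm(a) * np.linalg.norm(b))
print(1 - cos_sim)            # cosine distance ~ 0.0 (same direction)
print(distance.cosine(a, b))  # scipy agrees
```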
Let's calculate a couple cosine distances between movie vectors:
End of explanation
"""
from gensim.models.keyedvectors import WordEmbeddingsKeyedVectors
# Limit to movies with at least this many ratings in the dataset
threshold = 100
mainstream_movies = movies_df[movies_df.n_ratings >= threshold].reset_index(drop=True)
movie_embedding_size = w.shape[1]
kv = WordEmbeddingsKeyedVectors(movie_embedding_size)
kv.add(
mainstream_movies['key'].values,
w[mainstream_movies.movieId]
)
"""
Explanation: Aside: Why is cosine distance commonly used when working with embeddings? The short answer, as with so many deep learning techniques, is "empirically, it works well". In the exercise coming up, you'll get to do a little hands-on investigation that digs into this question more deeply.
Which movies are most similar to Toy Story? Which movies fall right between Psycho and Scream in the embedding space? We could write a bunch of code to work out questions like this, but it'd be pretty tedious. Fortunately, there's already a library for exactly this sort of work: Gensim.
Exploring embeddings with Gensim
I'll instantiate an instance of WordEmbeddingsKeyedVectors with our model's movie embeddings and the titles of the corresponding movies.
Aside: You may notice that Gensim's docs and many of its class and method names refer to word embeddings. While the library is most frequently used in the text domain, we can use it to explore embeddings of any sort.
End of explanation
"""
kv.most_similar('Toy Story')
"""
Explanation: Okay, so which movies are most similar to Toy Story?
End of explanation
"""
#$HIDE_INPUT$
import textwrap
movies = ['Eyes Wide Shut', 'American Pie', 'Iron Man 3', 'West Side Story',
'Battleship Potemkin', 'Clueless'
]
def plot_most_similar(movie, ax, topn=5):
sim = kv.most_similar(movie, topn=topn)[::-1]
y = np.arange(len(sim))
w = [t[1] for t in sim]
ax.barh(y, w)
left = min(.6, min(w))
ax.set_xlim(right=1.0, left=left)
# Split long titles over multiple lines
labels = [textwrap.fill(t[0] , width=24)
for t in sim]
ax.set_yticks(y)
ax.set_yticklabels(labels)
ax.set_title(movie)
fig, axes = plt.subplots(3, 2, figsize=(15, 9))
for movie, ax in zip(movies, axes.flatten()):
plot_most_similar(movie, ax)
fig.tight_layout()
"""
Explanation: Wow, these are pretty great! It makes perfect sense that Toy Story 2 is the most similar movie to Toy Story. And most of the rest are animated kids movies with a similar computer-animated style.
So it's learned something about 3-d animated kids flicks, but maybe that was just a fluke. Let's look at the closest neighbours for a few more movies from a variety of genres:
End of explanation
"""
kv.most_similar(
positive = ['Scream'],
negative = ['Psycho (1960)']
)
"""
Explanation: Artsy erotic dramas, raunchy sophomoric comedies, old-school musicals, superhero movies... our embeddings manage to nail a wide variety of cinematic niches!
Semantic vector math
The most_similar method optionally takes a second argument, negative. If we call kv.most_similar(a, b), then instead of finding the vector closest to a, it will find the closest vector to a - b.
Why would you want to do that? It turns out that doing addition and subtraction of embedding vectors often gives surprisingly meaningful results. For example, how would you fill in the following equation?
Scream = Psycho + ________
Scream and Psycho are similar in that they're violent, scary movies somewhere on the border between Horror and Thriller. The biggest difference is that Scream has elements of comedy. So I'd say Scream is what you'd get if you combined Psycho with a comedy.
But we can actually ask Gensim to fill in the blank for us via vector math (after some rearranging):
________ = Scream - Psycho
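As a rough sketch of what this does under the hood (not Gensim's actual implementation, which also unit-normalizes the vectors), we could subtract the two raw embedding vectors and rank every row of `w` by cosine similarity to the result; `scream_vec` and `psycho_vec` below are assumed to be looked up from `w` the same way as `toy_story_vec` earlier:
```python
import numpy as np

def closest_to(query_vec, vectors, topn=5):
    # cosine similarity of the query against every embedding row
    sims = vectors.dot(query_vec) / (
        np.linalg.norm(vectors, axis=1) * np.linalg.norm(query_vec) + 1e-9)
    return np.argsort(-sims)[:topn]

# e.g. closest_to(scream_vec - psycho_vec, w)
```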
End of explanation
"""
kv.most_similar(
['Pocahontas', 'Cars 2'],
negative = ['Brave']
)
"""
Explanation: If you are familiar with these movies, you'll see that the missing ingredient that takes us from Psycho to Scream is comedy (and also late-90's-teen-movie-ness).
Analogy solving
The SAT test which is used to get into American colleges and universities poses analogy questions like:
shower : deluge :: _____ : stare
(Read "shower is to deluge as ___ is to stare")
To solve this, we find the relationship between deluge and shower, and apply it to stare. A shower is a milder form of a deluge. What's a milder form of stare? A good answer here would be "glance", or "look".
It's kind of astounding that this works, but people have found that these can often be effectively solved by simple vector math on word embeddings. Can we solve movie analogies with our embeddings? Let's try. What about:
Brave : Cars 2 :: Pocahontas : _____
The answer is not clear. One interpretation would be that Brave is like Cars 2, except that the latter is aimed primarily at boys, and the former might be more appealing to girls, given its female protagonist. So maybe the answer should be, like Pocahontas, a mid-90's conventional animation kids movie, but more of a 'boy movie'. Hercules? The Lion King?
Let's ask our embeddings what they think.
In terms of vector math, we can frame this as...
Cars 2 = Brave + X
_____ = Pocahontas + X
Rearranging, we get:
____ = Pocahontas + (Cars 2 - Brave)
We can solve this by passing in two movies (Pocahontas and Cars 2) for the positive argument to most_similar, with Brave as the negative argument:
End of explanation
"""
|
ethen8181/machine-learning
|
recsys/calibration/calibrated_reco.ipynb
|
mit
|
# code for loading the format for the notebook
import os
# path : store the current path to convert back to it later
path = os.getcwd()
os.chdir(os.path.join('..', '..', 'notebook_format'))
from formats import load_style
load_style(css_style='custom2.css', plot_style=False)
os.chdir(path)
# 1. magic for inline plot
# 2. magic to print version
# 3. magic so that the notebook will reload external python modules
# 4. magic to enable retina (high resolution) plots
# https://gist.github.com/minrk/3301035
%matplotlib inline
%load_ext watermark
%load_ext autoreload
%autoreload 2
%config InlineBackend.figure_format='retina'
import os
import time
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from scipy.sparse import csr_matrix
from implicit.bpr import BayesianPersonalizedRanking
from implicit.evaluation import train_test_split, precision_at_k
%watermark -a 'Ethen' -d -t -v -p scipy,numpy,pandas,matplotlib,implicit
"""
Explanation: <h1>Table of Contents<span class="tocSkip"></span></h1>
<div class="toc"><ul class="toc-item"><li><span><a href="#Calibrated-Recommendations" data-toc-modified-id="Calibrated-Recommendations-1"><span class="toc-item-num">1 </span>Calibrated Recommendations</a></span><ul class="toc-item"><li><span><a href="#Preparation" data-toc-modified-id="Preparation-1.1"><span class="toc-item-num">1.1 </span>Preparation</a></span></li><li><span><a href="#Deep-Dive-Into-Calibrated-Recommendation" data-toc-modified-id="Deep-Dive-Into-Calibrated-Recommendation-1.2"><span class="toc-item-num">1.2 </span>Deep Dive Into Calibrated Recommendation</a></span><ul class="toc-item"><li><span><a href="#Calibration-Metric" data-toc-modified-id="Calibration-Metric-1.2.1"><span class="toc-item-num">1.2.1 </span>Calibration Metric</a></span></li><li><span><a href="#Generating-Calibrated-Recommendations" data-toc-modified-id="Generating-Calibrated-Recommendations-1.2.2"><span class="toc-item-num">1.2.2 </span>Generating Calibrated Recommendations</a></span></li></ul></li><li><span><a href="#End-Note" data-toc-modified-id="End-Note-1.3"><span class="toc-item-num">1.3 </span>End Note</a></span></li></ul></li><li><span><a href="#Reference" data-toc-modified-id="Reference-2"><span class="toc-item-num">2 </span>Reference</a></span></li></ul></div>
End of explanation
"""
data_dir = 'movielens-20m-dataset'
# we are working with movie data, but we'll name
# the movie as item to make it more generic to
# all use-cases
user_col = 'userId'
item_col = 'movieId'
value_col = 'rating'
time_col = 'timestamp'
rating_path = os.path.join(data_dir, 'rating.csv')
df_raw = pd.read_csv(rating_path)
print('dimension: ', df_raw.shape)
df_raw.head()
title_col = 'title'
genre_col = 'genres'
item_info_path = os.path.join(data_dir, 'movie.csv')
df_item = pd.read_csv(item_info_path)
df_item = df_item[df_item[genre_col] != '(no genres listed)']
print('dimension: ', df_item.shape)
df_item.head()
class Item:
"""
Data holder for our item.
Parameters
----------
id : int
title : str
genre : dict[str, float]
The item/movie's genre distribution, where the key
represents the genre and value corresponds to the
ratio of that genre.
score : float
Score for the item, potentially generated by some
recommendation algorithm.
"""
def __init__(self, _id, title, genres, score=None):
self.id = _id
self.title = title
self.score = score
self.genres = genres
def __repr__(self):
return self.title
def create_item_mapping(df_item, item_col, title_col, genre_col):
"""Create a dictionary of item id to Item lookup."""
item_mapping = {}
for row in df_item.itertuples():
item_id = getattr(row, item_col)
item_title = getattr(row, title_col)
item_genre = getattr(row, genre_col)
splitted = item_genre.split('|')
genre_ratio = 1. / len(splitted)
item_genre = {genre: genre_ratio for genre in splitted}
item = Item(item_id, item_title, item_genre)
item_mapping[item_id] = item
return item_mapping
item_mapping = create_item_mapping(df_item, item_col, title_col, genre_col)
item_mapping[1]
# convert to implicit feedback data and filter out
# movies that don't have any genre
df_rating = df_raw[df_raw[value_col] >= 4.0].copy()
df_rating = df_rating.merge(df_item, on=item_col)
for col in (user_col, item_col):
df_rating[col] = df_rating[col].astype('category')
# the original id are converted to indices to create
# the sparse matrix, so we keep track of the mappings here
# e.g. a userId 1 will correspond to index 0 in our sparse matrix
index2user = df_rating[user_col].cat.categories
index2item = df_rating[item_col].cat.categories
print('dimension: ', df_rating.shape)
df_rating.head()
"""
Explanation: Calibrated Recommendations
When a user has watched, say, 70% romance movies and 30% action movies in the past, it is reasonable to expect the personalized list of recommended movies to be comprised of 70% romance and 30% action movies as well, since we would like to cover the user's diverse set of interests. A recommendation that actually reflects most if not all of the user's interests is considered a Calibrated Recommendation. But the question is, does our recommendation exhibit this trait?
A recommendation algorithm provides a personalized user experience based on the user's past historical interaction with the product/system/website. However, when serving recommendations, such as the top 10 movies that we think the user might be interested in, a recommendation engine that is evaluated solely on ranking metrics can easily generate recommendations that focus on the main area of interest, causing the user's other areas of interest to be under-represented or, worse, absent from the final recommendation.
To drive the point home with the example above: given a user that has watched 70% romance movies and 30% action movies, if we were to measure performance solely by precision, the best strategy is to predict the majority genre, i.e. recommend 100% romance movies, and we can expect the user to interact with those recommendations 70% of the time. On the other hand, if we were to recommend 70% romance movies and 30% action movies, then we would expect our recommendations to only be correct 0.7 * 0.7 + 0.3 * 0.3 = 58% of the time.
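(A quick toy check of that arithmetic — these numbers are just the 70/30 example above, not data from the notebook:)
```python
p_romance, p_action = 0.7, 0.3

# recommend 100% romance: expected interaction rate
print(1.0 * p_romance + 0.0 * p_action)             # 0.7

# recommend in proportion to the user's interests
print(p_romance * p_romance + p_action * p_action)  # 0.58
```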
Throughout the rest of this notebook, we will check whether this phenomenon of crowding out the user's sub-interests occurs with our recommendations, develop a quantitative metric to measure how severe the issue is, and implement a post-processing step, agnostic of the underlying recommendation algorithm, that makes the recommendations more calibrated.
Preparation
We'll be using the publicly available movielens-20m dataset throughout this experiment. We can download it via the following link. There's multiple data under that folder, we can select download all to make things easier.
The algorithm we will be using to generate our recommendation is Bayesian Personalized Ranking, which is a matrix factorization based collaborative filtering algorithm. Readers don't need to be acquainted with this model per se to continue with this notebook as the discussion is model-agnostic and we'll be explaining the syntax. That said, this link contains some resources on this algorithm if it is of interest.
Preparation steps in the next few code chunks involve the following steps:
- rating.csv contains users' ratings for each movie. Here, we will focus on implicit data, and follow the usual procedure of simulating binary implicit feedback data (i.e. whether the user enjoyed the movie) by retaining only ratings of 4 stars and higher, while dropping lower ratings.
- movie.csv contains each movie's genre tags. We will also eliminate movies that had no genre information attached and create a mapping that stores each movie's genre distribution. In this dataset, each movie $i$ typically has several genres $g$ associated with it, thus we assign equal probabilities $p(g|i)$ to each genre such that $\sum_g p(g|i) = 1$ for each movie $i$. This genre distribution will play a strong role in determining whether our recommendation is well calibrated or not.
End of explanation
"""
def create_user_item_csr_matrix(data, user_col, item_col, value_col):
rows = data[user_col].cat.codes
cols = data[item_col].cat.codes
values = data[value_col].astype(np.float32)
return csr_matrix((values, (rows, cols)))
user_item = create_user_item_csr_matrix(df_rating, user_col, item_col, value_col)
user_item
np.random.seed(1234)
user_item_train, user_item_test = train_test_split(user_item, train_percentage=0.8)
user_item_train
user_item_test
# the model expects item-user sparse matrix,
# i.e. the rows represents item and the column
# represents users
np.random.seed(1234)
bpr = BayesianPersonalizedRanking(iterations=70)
bpr.fit(user_item_train.T.tocsr())
"""
Explanation: Given this dataframe we will use the userId, movieId and rating to construct a sparse matrix, perform the random train/test split (we can split based on the time if preferred) and feed the training set into a collaborative filtering based algorithm to train the model, so we can generate item recommendations for users.
End of explanation
"""
precision = precision_at_k(bpr, user_item_train, user_item_test, K=10)
precision
"""
Explanation: We will look at a precision_at_k metric just to make sure our recommender is reasonable. Feel free to tune the model's hyperparameters to squeeze out more performance, but that is not the focus here.
End of explanation
"""
# look a the first user
user_id = 0
# find the index that the user interacted with,
# we can then map this to a list of Item, note that we need to first
# map the recommended index to the actual itemId/movieId first
interacted_ids = user_item_train[user_id].nonzero()[1]
interacted_items = [item_mapping[index2item[index]] for index in interacted_ids]
interacted_items[:10]
"""
Explanation: Deep Dive Into Calibrated Recommendation
We will take the first user as an example to see whether our recommendations are calibrated or not. Once we're familiar with the procedure for one user, we can repeat the process for all of the users if we'd like to.
Let's start off by defining the problem. We are given the genre distribution $p(g|i)$ for each movie $i$; what we are interested in is whether $p(g|u)$ is similar to $q(g|u)$, where:
$p(g|u)$ is the distribution over genre $g$ of the set of movies $H$ played by user $u$ in the past.
\begin{align}
p(g|u) = \frac{1}{|H|} \sum_{i \in H} p(g|i)
\end{align}
$q(g|u)$ is the distribution over genre $g$ of the set of movies $I$ we recommended to user $u$.
\begin{align}
q(g|u) = \frac{1}{|I|} \sum_{i \in I} p(g|i)
\end{align}
For these distributions, we can have a weighted version if we liked to get sophisticated. e.g. the $p(g|i)$ can be weighted by recency saying something like the item/movie interaction matters more if its a more recent interaction, indicating that item/movie's genre should also be weighted more, but let's not go there yet.
Let's first look at some code to generate these information.
End of explanation
"""
# it returns the recommended index and their corresponding score
topn = 20
reco = bpr.recommend(user_id, user_item_train, N=topn)
reco[:10]
# map the index to Item
reco_items = [item_mapping[index2item[index]] for index, _ in reco]
reco_items[:10]
"""
Explanation: For the same user, we can use the .recommend method to generate the topn recommendations for him/her. Note that we also pass in the original sparse matrix and, by default, the items/movies that the user has already played are filtered from the list (controlled by the filter_already_liked_items argument, which defaults to True).
End of explanation
"""
def compute_genre_distr(items):
"""Compute the genre distribution for a given list of Items."""
distr = {}
for item in items:
for genre, score in item.genres.items():
genre_score = distr.get(genre, 0.)
distr[genre] = genre_score + score
# we normalize the summed up probability so it sums up to 1
# and round it to three decimal places, adding more precision
# doesn't add much value and clutters the output
for item, genre_score in distr.items():
normed_genre_score = round(genre_score / len(items), 3)
distr[item] = normed_genre_score
return distr
# we can check that the probability does in fact add up to 1
# np.array(list(interacted_distr.values())).sum()
interacted_distr = compute_genre_distr(interacted_items)
interacted_distr
reco_distr = compute_genre_distr(reco_items)
reco_distr
# change default style figure and font size
plt.rcParams['figure.figsize'] = 10, 8
plt.rcParams['font.size'] = 12
def distr_comparison_plot(interacted_distr, reco_distr, width=0.3):
# the value will automatically be converted to a column with the
# column name of '0'
interacted = pd.DataFrame.from_dict(interacted_distr, orient='index')
reco = pd.DataFrame.from_dict(reco_distr, orient='index')
df = interacted.join(reco, how='outer', lsuffix='_interacted')
n = df.shape[0]
index = np.arange(n)
plt.barh(index, df['0_interacted'], height=width, label='interacted distr')
plt.barh(index + width, df['0'], height=width, label='reco distr')
plt.yticks(index, df.index)
plt.legend(bbox_to_anchor=(1, 0.5))
plt.title('Genre Distribution between User Historical Interaction v.s. Recommendation')
plt.ylabel('Genre')
plt.show()
distr_comparison_plot(interacted_distr, reco_distr)
"""
Explanation: The next code chunk defines a function to obtain the genre distribution for a given list of items. Given that we now have the list of interacted items and recommended items, we can pass them to the function to obtain the two genre distributions.
End of explanation
"""
def compute_kl_divergence(interacted_distr, reco_distr, alpha=0.01):
"""
KL (p || q), the lower the better.
alpha is not really a tuning parameter, it's just there to make the
computation more numerically stable.
"""
kl_div = 0.
for genre, score in interacted_distr.items():
reco_score = reco_distr.get(genre, 0.)
reco_score = (1 - alpha) * reco_score + alpha * score
kl_div += score * np.log2(score / reco_score)
return kl_div
compute_kl_divergence(interacted_distr, reco_distr)
"""
Explanation: Calibration Metric
Looking at the results above, we can see that, according to $p(g|u)$, the user has interacted with genres such as War and Western; however, they are nowhere to be seen in the topn recommendations. Based on this output, we can argue that our recommendation might not be that well calibrated to the user's past interaction.
To scale this type of comparison, we'll now define our calibration metric $C$. There are various methods to compare whether two distributions are similar to each other, and one popular choice is KL-divergence.
\begin{align}
C(p,q) = D_{KL}(p || q) = \sum_{g} p(g|u) \cdot \log \frac{p(g|u)}{\tilde{q}(g|u)}
\end{align}
The denominator in the formula should be $q(g|u)$, but given that the formula would be undefined if $q(g|u) = 0$ and $p(g|u) > 0$ for a genre $g$. We instead use:
\begin{align}
\tilde{q}(g|u) = (1 - \alpha) \cdot q(g|u) + \alpha \cdot p(g|u)
\end{align}
with a small $\alpha$ such as 0.01, so that $q(g|u) \approx \tilde{q}(g|u)$.
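To build some intuition, here is a toy example (made-up genre distributions, not results from the data) run through the compute_kl_divergence function defined above:
```python
p = {'Comedy': 0.7, 'Action': 0.3}
q_good = {'Comedy': 0.65, 'Action': 0.35}  # roughly matches the user's interests
q_bad = {'Comedy': 1.0}                    # ignores the Action interest entirely

print(compute_kl_divergence(p, q_good))  # small value -> well calibrated
print(compute_kl_divergence(p, q_bad))   # much larger value -> poorly calibrated
```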
End of explanation
"""
def generate_item_candidates(model, user_item, user_id, index2item, item_mapping,
filter_already_liked_items=True):
"""
For a given user, generate the list of items that we can recommend, during this
step, we will also attach the recommender's score to each item.
"""
n_items = user_item.shape[1]
# this is how implicit's matrix factorization generates
# the scores for each item for a given user, modify this
# part of the logic if we were to use a completely different
# algorithm to generate the ranked items
user_factor = model.user_factors[user_id]
scores = model.item_factors.dot(user_factor)
liked = set()
if filter_already_liked_items:
liked = set(user_item[user_id].indices)
item_ids = set(np.arange(n_items))
item_ids -= liked
items = []
for item_id in item_ids:
item = item_mapping[index2item[item_id]]
item.score = scores[item_id]
items.append(item)
return items
items = generate_item_candidates(bpr, user_item_train, user_id, index2item, item_mapping)
print('number of item candidates:', len(items))
items[:5]
def compute_utility(reco_items, interacted_distr, lmbda=0.5):
"""
Our objective function for computing the utility score for
the list of recommended items.
lmbda : float, 0.0 ~ 1.0, default 0.5
Lambda term controls the score and calibration tradeoff,
the higher the lambda the higher the resulting recommendation
will be calibrated. Lambda is keyword in Python, so it's
lmbda instead ^^
"""
reco_distr = compute_genre_distr(reco_items)
kl_div = compute_kl_divergence(interacted_distr, reco_distr)
total_score = 0.0
for item in reco_items:
total_score += item.score
# kl divergence is the lower the better, while score is
# the higher the better so remember to negate it in the calculation
utility = (1 - lmbda) * total_score - lmbda * kl_div
return utility
def calib_recommend(items, interacted_distr, topn, lmbda=0.5):
"""
start with an empty recommendation list,
loop over the topn cardinality, during each iteration
update the list with the item that maximizes the utility function.
"""
calib_reco = []
for _ in range(topn):
max_utility = -np.inf
for item in items:
if item in calib_reco:
continue
utility = compute_utility(calib_reco + [item], interacted_distr, lmbda)
if utility > max_utility:
max_utility = utility
best_item = item
calib_reco.append(best_item)
return calib_reco
start = time.time()
calib_reco_items = calib_recommend(items, interacted_distr, topn, lmbda=0.99)
elapsed = time.time() - start
print('elapsed: ', elapsed)
calib_reco_items
"""
Explanation: Generating Calibrated Recommendations
Being able to compute the calibration metric between $p(g|u)$ and $q(g|u)$ is all well and good, but how can we generate a recommendation list that is more calibrated becomes the next important and interesting question.
Different recommendation algorithms have completely different objective functions, so instead of going the hard route of incorporating calibration into the objective function right off the bat and spending two weeks writing a customized, efficient algorithm, we will start with an alternative approach: re-ranking the predicted list of a recommender system in a post-processing step.
To determine the optimal set $I^*$ of $N$ recommended items, we'll be using maximum marginal relevance.
\begin{align}
I^* = \underset{I, |I|=N}{\text{argmax}} \; (1 - \lambda) \cdot s(I) - \lambda \cdot C(p, q(I))
\end{align}
Where
$s(i)$ is the score of the items $i \in I$ predicted by the recommender system and $s(I) = \sum_{i \in I} s(i)$, i.e. the sum of all the items' score in the recommendation list.
$\lambda \in [0, 1]$ is a tuning parameter that determines the trade-off between the score generated by the recommender and the calibration score, notice that since the calibration score is measured by KL-divergence, which is a metric that's the lower the better we use its negative in the maximization formula.
Finding the optimal set $I^*$ is a combinatorial optimization problem and can be a topic by itself. We won't do a deep dive into it, but instead leverage a popular greedy submodular optimization to solve this problem. The process is as follows:
We start out with the empty set.
Iteratively append one item $i$ at a time, and at step $n$, when we already have the set $I_{n-1}$ comprised of $n - 1$ items, the item $i$ that maximizes the objective function defined above for the set $I_{n-1} \cup {i}$ is added to obtain $I_n$
Repeat the process to generate the full $I^*$ of size $N$.
From a theoretical standpoint, this greedy procedure guarantees a solution whose score is at least $(1 - 1/e) \approx 0.63$ of the optimal set's.
With this information at hand, let's look at the implementation:
End of explanation
"""
calib_reco_distr = compute_genre_distr(calib_reco_items)
calib_reco_kl_div = compute_kl_divergence(interacted_distr, calib_reco_distr)
reco_kl_div = compute_kl_divergence(interacted_distr, reco_distr)
print('\noriginal reco kl-divergence score:', reco_kl_div)
print('calibrated reco kl-divergence score:', calib_reco_kl_div)
distr_comparison_plot(interacted_distr, calib_reco_distr)
"""
Explanation: In the code chunk above, we turned the $\lambda$ knob extremely high to generate the most calibrated recommendation list possible. Let's now compare the calibrated recommendation, the original recommendation (which only optimizes for the score $s$) and the user's interaction distribution.
End of explanation
"""
def precision(user_item, user_id, reco_items, index2item):
indptr = user_item.indptr
indices = user_item.indices
reco_ids = {item.id for item in reco_items}
likes = {index2item[indices[i]] for i in range(indptr[user_id], indptr[user_id + 1])}
relevant = len(reco_ids & likes)
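    # cap the denominator at the number of held-out items, so a user with fewer
    # than topn test interactions can still reach a precision of 1.0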
total = min(len(reco_items), len(likes))
return relevant / total
reco_precision = precision(user_item_test, user_id, reco_items, index2item)
calib_reco_precision = precision(user_item_test, user_id, calib_reco_items, index2item)
print('original reco precision score:', reco_precision)
print('calibrated reco precision score:', calib_reco_precision)
"""
Explanation: Printing out the genre distribution of the calibrated recommendation list shows that it covers more genres and its distribution closely resembles the distribution of the user's past historical interaction. Our quantitative calibration metric, KL-divergence, confirms this: the calibrated recommendation's KL-divergence is lower than the original recommendation's.
Thankfully, from the results above, it seems that the re-ranked recommendation list that aims to maximize the calibration score does in fact produce a more calibrated list. But the question is: at what cost? Do the other ranking metrics that recommender systems often optimize for drop? Let's take a look at precision_at_k, where k is the topn parameter we defined earlier, i.e. the number of recommendations to generate for the user.
End of explanation
"""
start = time.time()
calib_reco_items = calib_recommend(items, interacted_distr, topn, lmbda=0.5)
elapsed = time.time() - start
print('elapsed: ', elapsed)
calib_reco_items
calib_reco_distr = compute_genre_distr(calib_reco_items)
calib_reco_kl_div = compute_kl_divergence(interacted_distr, calib_reco_distr)
calib_reco_precision = precision(user_item_test, user_id, calib_reco_items, index2item)
print('calibrated reco kl-divergence score:', calib_reco_kl_div)
print('calibrated reco precision score:', calib_reco_precision)
calib_reco_distr = compute_genre_distr(calib_reco_items)
distr_comparison_plot(interacted_distr, calib_reco_distr)
"""
Explanation: Well ..., it's not a surprise that the calibrated recommendation list's precision score is a bit disappointing compared to the original recommendation. But let's see what happens if we try a different value of $\lambda$, this time turning it down a bit to strike a balance between calibration and precision.
End of explanation
"""
topn = 20
user_id = 0
lmbda = 0.99
reco = bpr.recommend(user_id, user_item_train, N=topn)
reco_items = [item_mapping[index2item[index]] for index, _ in reco]
reco_distr = compute_genre_distr(reco_items)
interacted_ids = user_item_train[user_id].nonzero()[1]
interacted_items = [item_mapping[index2item[index]] for index in interacted_ids]
interacted_distr = compute_genre_distr(interacted_items)
items = generate_item_candidates(bpr, user_item_train, user_id, index2item, item_mapping)
calib_reco_items = calib_recommend(items, interacted_distr, topn, lmbda)
calib_reco_distr = compute_genre_distr(calib_reco_items)
calib_reco_kl_div = compute_kl_divergence(interacted_distr, calib_reco_distr)
calib_reco_precision = precision(user_item_test, user_id, calib_reco_items, index2item)
print('calibrated reco kl-divergence score:', calib_reco_kl_div)
print('calibrated reco precision score:', calib_reco_precision)
distr_comparison_plot(interacted_distr, calib_reco_distr)
reco_kl_div = compute_kl_divergence(interacted_distr, reco_distr)
reco_precision = precision(user_item_test, user_id, reco_items, index2item)
print('original reco kl-divergence score:', reco_kl_div)
print('original reco precision score:', reco_precision)
distr_comparison_plot(interacted_distr, reco_distr)
"""
Explanation: Well, well, well. It turns out calibration can be improved considerably while accuracy is reduced only slightly if we find the sweet spot for the tuning parameter $\lambda$.
The following code chunk collects, in one place, all the code to generate the calibrated recommendation and the original recommendation and to compare them with the user's historical interaction, for ease of tracking the flow. This process is outlined for one user; feel free to modify the code to perform the comparison across all users. Due to the randomness in the recommendation algorithm, the results might differ across runs, but the underlying trend should remain the same.
End of explanation
"""
|
statsmodels/statsmodels.github.io
|
v0.13.1/examples/notebooks/generated/recursive_ls.ipynb
|
bsd-3-clause
|
%matplotlib inline
import numpy as np
import pandas as pd
import statsmodels.api as sm
import matplotlib.pyplot as plt
from pandas_datareader.data import DataReader
np.set_printoptions(suppress=True)
"""
Explanation: Recursive least squares
Recursive least squares is an expanding window version of ordinary least squares. In addition to availability of regression coefficients computed recursively, the recursively computed residuals the construction of statistics to investigate parameter instability.
The RecursiveLS class allows computation of recursive residuals and computes CUSUM and CUSUM of squares statistics. Plotting these statistics along with reference lines denoting statistically significant deviations from the null hypothesis of stable parameters allows an easy visual indication of parameter stability.
Finally, the RecursiveLS model allows imposing linear restrictions on the parameter vectors, and can be constructed using the formula interface.
End of explanation
"""
print(sm.datasets.copper.DESCRLONG)
dta = sm.datasets.copper.load_pandas().data
dta.index = pd.date_range("1951-01-01", "1975-01-01", freq="AS")
endog = dta["WORLDCONSUMPTION"]
# To the regressors in the dataset, we add a column of ones for an intercept
exog = sm.add_constant(
dta[["COPPERPRICE", "INCOMEINDEX", "ALUMPRICE", "INVENTORYINDEX"]]
)
"""
Explanation: Example 1: Copper
We first consider parameter stability in the copper dataset (description below).
End of explanation
"""
mod = sm.RecursiveLS(endog, exog)
res = mod.fit()
print(res.summary())
"""
Explanation: First, construct and fit the model, and print a summary. Although the RLS model computes the regression parameters recursively, so there are as many estimates as there are datapoints, the summary table only presents the regression parameters estimated on the entire sample; except for small effects from initialization of the recursions, these estimates are equivalent to OLS estimates.
End of explanation
"""
print(res.recursive_coefficients.filtered[0])
res.plot_recursive_coefficient(range(mod.k_exog), alpha=None, figsize=(10, 6))
"""
Explanation: The recursive coefficients are available in the recursive_coefficients attribute. Alternatively, plots can be generated using the plot_recursive_coefficient method.
End of explanation
"""
print(res.cusum)
fig = res.plot_cusum()
"""
Explanation: The CUSUM statistic is available in the cusum attribute, but usually it is more convenient to visually check for parameter stability using the plot_cusum method. In the plot below, the CUSUM statistic does not move outside of the 5% significance bands, so we fail to reject the null hypothesis of stable parameters at the 5% level.
End of explanation
"""
res.plot_cusum_squares()
"""
Explanation: Another related statistic is the CUSUM of squares. It is available in the cusum_squares attribute, but it is similarly more convenient to check it visually, using the plot_cusum_squares method. In the plot below, the CUSUM of squares statistic does not move outside of the 5% significance bands, so we fail to reject the null hypothesis of stable parameters at the 5% level.
End of explanation
"""
start = "1959-12-01"
end = "2015-01-01"
m2 = DataReader("M2SL", "fred", start=start, end=end)
cpi = DataReader("CPIAUCSL", "fred", start=start, end=end)
def ewma(series, beta, n_window):
nobs = len(series)
scalar = (1 - beta) / (1 + beta)
ma = []
k = np.arange(n_window, 0, -1)
weights = np.r_[beta ** k, 1, beta ** k[::-1]]
for t in range(n_window, nobs - n_window):
window = series.iloc[t - n_window : t + n_window + 1].values
ma.append(scalar * np.sum(weights * window))
return pd.Series(ma, name=series.name, index=series.iloc[n_window:-n_window].index)
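# Quick illustration (not part of the original example): for a small window the
# kernel built above is symmetric around the center observation, e.g. n_window=2
# gives weights [beta**2, beta**1, 1, beta**1, beta**2].
beta_demo, n_window_demo = 0.95, 2
k_demo = np.arange(n_window_demo, 0, -1)
print(np.r_[beta_demo ** k_demo, 1, beta_demo ** k_demo[::-1]])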
m2_ewma = ewma(np.log(m2["M2SL"].resample("QS").mean()).diff().iloc[1:], 0.95, 10 * 4)
cpi_ewma = ewma(
np.log(cpi["CPIAUCSL"].resample("QS").mean()).diff().iloc[1:], 0.95, 10 * 4
)
"""
Explanation: Example 2: Quantity theory of money
The quantity theory of money suggests that "a given change in the rate of change in the quantity of money induces ... an equal change in the rate of price inflation" (Lucas, 1980). Following Lucas, we examine the relationship between double-sided exponentially weighted moving averages of money growth and CPI inflation. Although Lucas found the relationship between these variables to be stable, more recently it appears that the relationship is unstable; see e.g. Sargent and Surico (2010).
End of explanation
"""
fig, ax = plt.subplots(figsize=(13, 3))
ax.plot(m2_ewma, label="M2 Growth (EWMA)")
ax.plot(cpi_ewma, label="CPI Inflation (EWMA)")
ax.legend()
endog = cpi_ewma
exog = sm.add_constant(m2_ewma)
exog.columns = ["const", "M2"]
mod = sm.RecursiveLS(endog, exog)
res = mod.fit()
print(res.summary())
res.plot_recursive_coefficient(1, alpha=None)
"""
Explanation: After constructing the moving averages using the $\beta = 0.95$ filter of Lucas (with a window of 10 years on either side), we plot each of the series below. Although they appear to move together for the earlier part of the sample, after 1990 they diverge.
End of explanation
"""
res.plot_cusum()
"""
Explanation: The CUSUM plot now shows substantial deviation at the 5% level, suggesting a rejection of the null hypothesis of parameter stability.
End of explanation
"""
res.plot_cusum_squares()
"""
Explanation: Similarly, the CUSUM of squares shows substantial deviation at the 5% level, also suggesting a rejection of the null hypothesis of parameter stability.
End of explanation
"""
endog = dta["WORLDCONSUMPTION"]
exog = sm.add_constant(
dta[["COPPERPRICE", "INCOMEINDEX", "ALUMPRICE", "INVENTORYINDEX"]]
)
mod = sm.RecursiveLS(endog, exog, constraints="COPPERPRICE = ALUMPRICE")
res = mod.fit()
print(res.summary())
"""
Explanation: Example 3: Linear restrictions and formulas
Linear restrictions
It is not hard to implement linear restrictions, using the constraints parameter in constructing the model.
End of explanation
"""
mod = sm.RecursiveLS.from_formula(
"WORLDCONSUMPTION ~ COPPERPRICE + INCOMEINDEX + ALUMPRICE + INVENTORYINDEX",
dta,
constraints="COPPERPRICE = ALUMPRICE",
)
res = mod.fit()
print(res.summary())
"""
Explanation: Formula
One could fit the same model using the class method from_formula.
End of explanation
"""
|
daniel-koehn/Theory-of-seismic-waves-II
|
00_Intro_Python_Jupyter_notebooks/5_Linear_Regression_with_Real_Data.ipynb
|
gpl-3.0
|
# Execute this cell to load the notebook's style sheet, then ignore it
from IPython.core.display import HTML
css_file = '../style/custom.css'
HTML(open(css_file, "r").read())
"""
Explanation: Content under Creative Commons Attribution license CC-BY 4.0, code under BSD 3-Clause License © 2017 L.A. Barba, N.C. Clementi
End of explanation
"""
from IPython.display import YouTubeVideo
YouTubeVideo('gGOzHVUQCw0')
"""
Explanation: Linear regression with real data
Earth temperature over time
In this lesson, we will apply all that we've learned (and more) to analyze real data of Earth temperature over time.
Is global temperature rising? How much? This is a question of burning importance in today's world!
Data about global temperatures are available from several sources: NASA, the National Climatic Data Center (NCDC) and the University of East Anglia in the UK. Check out the University Corporation for Atmospheric Research (UCAR) for an in-depth discussion.
The NASA Goddard Space Flight Center is one of our sources of global climate data. They produced the video below showing a color map of the changing global surface temperature anomalies from 1880 to 2015.
The term global temperature anomaly means the difference in temperature with respect to a reference value or a long-term average. It is a very useful way of looking at the problem and in many ways better than absolute temperature. For example, a winter month may be colder than average in Washington DC, and also in Miami, but the absolute temperatures will be different in both places.
End of explanation
"""
import numpy
"""
Explanation: How would we go about understanding the trends from the data on global temperature?
The first step in analyzing unknown data is to generate some simple plots using Matplotlib. We are going to look at the temperature-anomaly history, contained in a file, and make our first plot to explore this data.
We are going to smooth the data and then we'll fit a line to it to find a trend, plotting along the way to see how it all looks.
Let's get started!
Step 1: Read a data file
We took the data from the NOAA (National Oceanic and Atmospheric Administration) webpage. Feel free to play around with the webpage and analyze data on your own, but for now, let's make sure we're working with the same dataset.
We have a file named land_global_temperature_anomaly-1880-2016.csv in our data folder. This file contains the year on the first column, and averages of land temperature anomaly listed sequentially on the second column, from the year 1880 to 2016. We will load the file, then make an initial plot to see what it looks like.
Note:
If you downloaded this notebook alone, rather than the full collection for this course, you may not have the data file on the location we assume below. In that case, you can download the data if you add a code cell, and execute the following code in it:
Python
from urllib.request import urlretrieve
URL = 'http://go.gwu.edu/engcomp1data5?accessType=DOWNLOAD'
urlretrieve(URL, 'land_global_temperature_anomaly-1880-2016.csv')
The data file will be downloaded to your working directory, and you will then need to remove the path information, i.e., the string 'data/', from the definition of the variable fname below.
Let's start by importing NumPy.
End of explanation
"""
fname = 'data/land_global_temperature_anomaly-1880-2016.csv'
year, temp_anomaly = numpy.loadtxt(fname, delimiter=',', skiprows=5, unpack=True)
"""
Explanation: To load our data from the file, we'll use the function numpy.loadtxt(), which lets us immediately save the data into NumPy arrays. (We encourage you to read the documentation for details on how the function works.) Here, we'll save the data into the arrays year and temp_anomaly.
End of explanation
"""
from matplotlib import pyplot
%matplotlib inline
"""
Explanation: Exercise
Inspect the data by printing year and temp_anomaly.
Step 2: Plot the data
Let's first load the Matplotlib module called pyplot, for making 2D plots. Remember that to get the plots inside the notebook, we use a special "magic" command, %matplotlib inline:
End of explanation
"""
pyplot.plot(year, temp_anomaly);
"""
Explanation: The plot() function of the pyplot module makes simple line plots. We avoid that stuff that appeared on top of the figure, that Out[x]: [< ...>] ugliness, by adding a semicolon at the end of the plotting command.
End of explanation
"""
from matplotlib import rcParams
rcParams['font.family'] = 'serif'
rcParams['font.size'] = 16
"""
Explanation: Now we have a line plot, but if you see this plot without any information you would not be able to figure out what kind of data it is! We need labels on the axes, a title and why not a better color, font and size of the ticks.
Publication quality plots should always be your standard for plotting.
How you present your data will allow others (and probably you in the future) to better understand your work.
We can customize the style of our plots using Matplotlib's rcParams. It lets us set some style options that apply for all the plots we create in the current session.
Here, we'll make the font of a specific size and type. You can also customize other parameters like line width, color, and so on (check out the documentation).
End of explanation
"""
#You can set the size of the figure by doing:
pyplot.figure(figsize=(10,5))
#Plotting
pyplot.plot(year, temp_anomaly, color='#2929a3', linestyle='-', linewidth=1)
pyplot.title('Land global temperature anomalies. \n')
pyplot.xlabel('Year')
pyplot.ylabel('Land temperature anomaly [°C]')
pyplot.grid();
"""
Explanation: We'll redo the same plot, but now we'll add a few things to make it prettier and publication quality. We'll add a title, label the axes and show a background grid. Study the commands below and look at the result!
End of explanation
"""
?print
"""
Explanation: Better, no? Feel free to play around with the parameters and see how the plot changes. There's nothing like trial and error to get the hang of it.
Step 3: Least-squares linear regression
In order to have an idea of the general behavior of our data, we can find a smooth curve that (approximately) fits the points. We generally look for a curve that's simple (e.g., a polynomial), and does not reproduce the noise that's always present in experimental data.
Let $f(x)$ be the function that we'll fit to the $n+1$ data points: $(x_i, y_i)$, $i = 0, 1, ... ,n$:
$$
f(x) = f(x; a_0, a_1, ... , a_m)
$$
The notation above means that $f$ is a function of $x$, with $m+1$ variable parameters $a_0, a_1, ... , a_m$, where $m < n$. We need to choose the form of $f(x)$ a priori, by inspecting the experimental data and knowing something about the phenomenon we've measured. Thus, curve fitting consists of two steps:
Choosing the form of $f(x)$.
Computing the parameters that will give us the "best fit" to the data.
What is the "best" fit?
When the noise in the data is limited to the $y$-coordinate, it's common to use a least-squares fit, which minimizes the function
$$
\begin{equation}
S(a_0, a_1, ... , a_m) = \sum_{i=0}^{n} [y_i - f(x_i)]^2
\end{equation}
$$
with respect to each $a_j$. We find the values of the parameters for the best fit by solving the following equations:
$$
\begin{equation}
\frac{\partial{S}}{\partial{a_k}} = 0, \quad k = 0, 1, ... , m.
\end{equation}
$$
Here, the terms $r_i = y_i - f(x_i)$ are called residuals: they tell us the discrepancy between the data and the fitting function at $x_i$.
Take a look at the function $S$: what we want to minimize is the sum of the squares of the residuals. The equations (2) are generally nonlinear in $a_j$ and might be difficult to solve. Therefore, the fitting function is commonly chosen as a linear combination of specified functions $f_j(x)$,
$$
\begin{equation}
f(x) = a_0f_0(x) + a_1f_1(x) + ... + a_mf_m(x)
\end{equation}
$$
which results in equations (2) being linear. In the case that the fitting function is polynomial, we have $f_0(x) = 1, \; f_1(x) = x, \; f_2(x) = x^2$, and so on.
Linear regression
When we talk about linear regression we mean "fitting a straight line to the data." Thus,
$$
\begin{equation}
f(x) = a_0 + a_1x
\end{equation}
$$
In this case, the function that we'll minimize is:
$$
\begin{equation}
S(a_0, a_1) = \sum_{i=0}^{n} [y_i - f(x_i)]^2 = \sum_{i=0}^{n} (y_i - a_0 - a_1x_i)^2
\end{equation}
$$
Equations (2) become:
$$
\begin{equation}
\frac{\partial{S}}{\partial{a_0}} = \sum_{i=0}^{n} -2(y_i - a_0 - a_1x_i) = 2 \left[ a_0(n+1) + a_1\sum_{i=0}^{n} x_i - \sum_{i=0}^{n} y_i \right] = 0
\end{equation}
$$
$$
\begin{equation}
\frac{\partial{S}}{\partial{a_1}} = \sum_{i=0}^{n} -2(y_i - a_0 - a_1x_i)x_i = 2 \left[ a_0\sum_{i=0}^{n} x_i + a_1\sum_{i=0}^{n} x_{i}^2 - \sum_{i=0}^{n} x_iy_i \right] = 0
\end{equation}
$$
Let's divide both equations by $2(n+1)$ and rearrange terms.
Rearranging (6) and (7):
$$
\begin{align}
2 \left[ a_0(n+1) + a_1\sum_{i=0}^{n} x_i - \sum_{i=0}^{n} y_i \right] &= 0 \nonumber \
\frac{a_0(n+1)}{n+1} + a_1 \frac{\sum_{i=0}^{n} x_i}{n+1} - \frac{\sum_{i=0}^{n} y_i}{n+1} &= 0 \
\end{align}
$$
$$
\begin{align}
a_0 = \bar{y} - a_1\bar{x}
\end{align}
$$
where $\bar{x} = \frac{\sum_{i=0}^{n} x_i}{n+1}$ and $\bar{y} = \frac{\sum_{i=0}^{n} y_i}{n+1}$.
Rearranging (7):
$$
\begin{align}
2 \left[ a_0\sum_{i=0}^{n} x_i + a_1\sum_{i=0}^{n} x_{i}^2 - \sum_{i=0}^{n} x_iy_i \right] &= 0 \
a_0\sum_{i=0}^{n} x_i + a_1\sum_{i=0}^{n} x_{i}^2 - \sum_{i=0}^{n} x_iy_i &=0 \
\end{align}
$$
Now, if we replace $a_0$ from equation (8) into (9) and rearrange terms:
$$
\begin{align}
(\bar{y} - a_1\bar{x})\sum_{i=0}^{n} x_i + a_1\sum_{i=0}^{n} x_{i}^2 - \sum_{i=0}^{n} x_iy_i &= 0 \
\end{align}
$$
Replacing the definitions of the mean values into the equation,
$$
\begin{align}
\left[\frac{1}{n+1}\sum_{i=0}^{n} y_i - \frac{a_1}{n+1}\sum_{i=0}^{n} x_i \right]\sum_{i=0}^{n} x_i + a_1\sum_{i=0}^{n} x_{i}^2 - \sum_{i=0}^{n} x_iy_i &= 0 \
\frac{1}{n+1}\sum_{i=0}^{n} y_i \sum_{i=0}^{n} x_i - \frac{a_1}{n+1}\sum_{i=0}^{n} x_i \sum_{i=0}^{n} x_i + a_1\sum_{i=0}^{n} x_{i}^2 - \sum_{i=0}^{n} x_iy_i &= 0 \
\end{align}
$$
Leaving everything in terms of $\bar{x}$,
$$
\begin{align}
\sum_{i=0}^{n} y_i \bar{x} - a_1\sum_{i=0}^{n} x_i \bar{x} + a_1\sum_{i=0}^{n} x_{i}^2 - \sum_{i=0}^{n} x_iy_i = 0
\end{align}
$$
Grouping the terms that have $a_1$ on the left-hand side and the rest on the right-hand side:
$$
\begin{align}
a_1\left[ \sum_{i=0}^{n} x_{i}^2 - \sum_{i=0}^{n} x_i \bar{x}\right] &= \sum_{i=0}^{n} x_iy_i - \sum_{i=0}^{n} y_i \bar{x} \
a_1 \sum_{i=0}^{n} (x_{i}^2 - x_i \bar{x}) &= \sum_{i=0}^{n} (x_iy_i - y_i \bar{x}) \
a_1 \sum_{i=0}^{n} x_{i}(x_{i} -\bar{x}) &= \sum_{i=0}^{n} y_i(x_i - \bar{x})
\end{align}
$$
Finally, we get that:
$$
\begin{align}
a_1 = \frac{ \sum_{i=0}^{n} y_{i} (x_i - \bar{x})}{\sum_{i=0}^{n} x_i (x_i - \bar{x})}
\end{align}
$$
Then our coefficients are:
$$
\begin{align}
a_1 = \frac{ \sum_{i=0}^{n} y_{i} (x_i - \bar{x})}{\sum_{i=0}^{n} x_i (x_i - \bar{x})} \quad , \quad a_0 = \bar{y} - a_1\bar{x}
\end{align}
$$
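As a quick, optional sanity check on synthetic data (made up here, not part of the lesson), the formulas recover a known slope and intercept:
```python
import numpy
x = numpy.array([0., 1., 2., 3., 4.])
y = 2.0 * x + 1.0 + numpy.array([0.1, -0.2, 0.05, 0.0, -0.1])  # roughly y = 2x + 1
x_mean, y_mean = numpy.mean(x), numpy.mean(y)
a_1 = numpy.sum(y * (x - x_mean)) / numpy.sum(x * (x - x_mean))
a_0 = y_mean - a_1 * x_mean
print(a_1, a_0)   # close to 2 and 1
```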
Let's fit!
Let's now fit a straight line through the temperature-anomaly data, to see the trend over time. We'll use least-squares linear regression to find the slope and intercept of a line
$$y = a_1x+a_0$$
that fits our data.
In our case, the x-data corresponds to year, and the y-data is temp_anomaly. To calculate our coefficients with the formula above, we need the mean values of our data. Since we'll need to compute the mean for both x and y, it could be useful to write a custom Python function that computes the mean for any array, and we can then reuse it.
It is good coding practice to avoid repeating ourselves: we want to write code that is reusable, not only because it leads to less typing but also because it reduces errors. If you find yourself doing the same calculation multiple times, it's better to encapsulate it into a function.
Remember the key concept from Lesson 1: A function is a compact collection of code that executes some action on its arguments.
Once defined, you can call a function as many times as you want. When we call a function, we execute all the code inside the function. The result of the execution depends on the definition of the function and on the values that are passed into it as arguments. Functions might or might not return values in their last operation.
The syntax for defining custom Python functions is:
python
def function_name(arg_1, arg_2, ...):
'''
docstring: description of the function
'''
<body of the function>
The docstring of a function is a message from the programmer documenting what he or she built. Docstrings should be descriptive and concise. They are important because they explain (or remind) the intended use of the function to the users. You can later access the docstring of a function using the function help() and passing the name of the function. If you are in a notebook, you can also prepend a question mark '?' before the name of the function and run the cell to display the information of a function.
Try it!
End of explanation
"""
help(print)
"""
Explanation: Using the help function instead:
End of explanation
"""
def mean_value(array):
""" Calculate the mean value of an array
Arguments
---------
array: Numpy array
Returns
-------
mean: mean value of the array
"""
sum_elem = 0
for element in array:
sum_elem += element # this is the same as sum_elem = sum_elem + element
mean = sum_elem / len(array)
return mean
"""
Explanation: Let's define a custom function that calculates the mean value of any array. Study the code below carefully.
End of explanation
"""
year_mean = mean_value(year)
print(year_mean)
temp_anomaly_mean = mean_value(temp_anomaly)
print(temp_anomaly_mean)
"""
Explanation: Once you execute the cell above, the function mean_value() becomes available to use on any argument of the correct type. This function works on arrays of any length. We can try it now with our data.
End of explanation
"""
a_1 = numpy.sum(temp_anomaly*(year - year_mean)) / numpy.sum(year*(year - year_mean))
print(a_1)
a_0 = temp_anomaly_mean - a_1*year_mean
print(a_0)
"""
Explanation: Neat! You learned how to write a Python function, and we wrote one for computing the mean value of an array of numbers. We didn't have to, though, because NumPy has a built-in function to do just what we needed: numpy.mean().
Exercise
Calculate the mean of the year and temp_anomaly arrays using the NumPy built-in function, and compare the results with the ones obtained using our custom mean_value function.
Now that we have mean values, we can compute our coefficients by following equations (12). We first calculate $a_1$ and then use that value to calculate $a_0$.
Our coefficients are:
$$
a_1 = \frac{ \sum_{i=0}^{n} y_{i} (x_i - \bar{x})}{\sum_{i=0}^{n} x_i (x_i - \bar{x})} \quad , \quad a_0 = \bar{y} - a_1\bar{x}
$$
We already calculated the mean values of the data arrays, but the formula requires two sums over new derived arrays. Guess what, NumPy has a built-in function for that: numpy.sum(). Study the code below.
End of explanation
"""
reg = a_0 + a_1 * year
"""
Explanation: Exercise
Write a function that computes the coefficients, call the function to compute them and compare the result with the values we obtained before. As a hint, we give you the structure that you should follow:
```python
def coefficients(x, y, x_mean, y_mean):
"""
Write docstrings here
"""
a_1 =
a_0 =
return a_1, a_0
```
We now have the coefficients of a linear function that best fits our data. With them, we can compute the predicted values of temperature anomaly, according to our fit. Check again the equations above: the values we are going to compute are $f(x_i)$.
Let's call reg the array obtained from evaluating $f(x_i)$ for all years.
End of explanation
"""
pyplot.figure(figsize=(10, 5))
pyplot.plot(year, temp_anomaly, color='#2929a3', linestyle='-', linewidth=1, alpha=0.5)
pyplot.plot(year, reg, 'k--', linewidth=2, label='Linear regression')
pyplot.xlabel('Year')
pyplot.ylabel('Land temperature anomaly [°C]')
pyplot.legend(loc='best', fontsize=15)
pyplot.grid();
"""
Explanation: With the values of our linear regression, we can plot it on top of the original data to see how they look together. Study the code below.
End of explanation
"""
# First fit with NumPy, then name the coefficients obtained a_1n, a_0n:
a_1n, a_0n = numpy.polyfit(year, temp_anomaly, 1)
f_linear = numpy.poly1d((a_1n, a_0n))
print(a_1n)
print(a_0n)
print(f_linear)
pyplot.figure(figsize=(10, 5))
pyplot.plot(year, temp_anomaly, color='#2929a3', linestyle='-', linewidth=1, alpha=0.5)
pyplot.plot(year, f_linear(year), 'k--', linewidth=2, label='Linear regression')
pyplot.xlabel('Year')
pyplot.ylabel('Land temperature anomaly [°C]')
pyplot.legend(loc='best', fontsize=15)
pyplot.grid();
"""
Explanation: Step 4: Apply regression using NumPy
Above, we coded linear regression from scratch. But, guess what: we didn't have to because NumPy has built-in functions that do what we need!
Yes! Python and NumPy are here to help! With polyfit(), we get the slope and $y$-intercept of the line that best fits the data. With poly1d(), we can build the linear function from its slope and $y$-intercept.
Check it out:
End of explanation
"""
numpy.where(year==1970)
"""
Explanation: "Split regression"
If you look at the plot above, you might notice that around 1970 the temperature starts increasing faster than the previous trend. So maybe one single straight line does not give us a good-enough fit.
What if we break the data in two (before and after 1970) and do a linear regression in each segment?
To do that, we first need to find the position in our year array where the year 1970 is located. Thankfully, NumPy has a function called numpy.where() that can help us. We pass a condition and numpy.where() tells us where in the array the condition is True.
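numpy.where() returns a tuple of index arrays; a minimal sketch of extracting the integer position (assuming the condition matches exactly one element, as it does here):
```python
idx = numpy.where(year == 1970)[0][0]   # first (and only) matching index
print(idx)                              # should match the value 90 used in the slices below
```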
End of explanation
"""
year[0:3]
"""
Explanation: To split the data, we use slicing with the colon notation. Remember that a colon between two indices indicates a range of values from a start to an end. The rule is that [start:end] includes the element at index start but excludes the one at index end. For example, to grab the first 3 years in our year array, we do:
End of explanation
"""
year_1 , temp_anomaly_1 = year[0:90], temp_anomaly[0:90]
year_2 , temp_anomaly_2 = year[90:], temp_anomaly[90:]
m1, b1 = numpy.polyfit(year_1, temp_anomaly_1, 1)
m2, b2 = numpy.polyfit(year_2, temp_anomaly_2, 1)
f_linear_1 = numpy.poly1d((m1, b1))
f_linear_2 = numpy.poly1d((m2, b2))
pyplot.figure(figsize=(10, 5))
pyplot.plot(year, temp_anomaly, color='#2929a3', linestyle='-', linewidth=1, alpha=0.5)
pyplot.plot(year_1, f_linear_1(year_1), 'g--', linewidth=2, label='1880-1969')
pyplot.plot(year_2, f_linear_2(year_2), 'r--', linewidth=2, label='1970-2016')
pyplot.xlabel('Year')
pyplot.ylabel('Land temperature anomaly [°C]')
pyplot.legend(loc='best', fontsize=15)
pyplot.grid();
"""
Explanation: Now we know how to split our data into two sets and get two regression lines. We need two slices of the arrays year and temp_anomaly, which we'll save in new variable names below. After that, we complete two linear fits using the helpful NumPy functions we learned above.
End of explanation
"""
|
MMesch/SHTOOLS
|
examples/notebooks/tutorial_2.ipynb
|
bsd-3-clause
|
%matplotlib inline
from __future__ import print_function # only necessary if using Python 2.x
import matplotlib.pyplot as plt
import numpy as np
from pyshtools.shclasses import SHCoeffs, SHWindow, SHGrid
nl = 100 # spherical harmonic degrees l = 0, ..., 99
lmax = nl - 1
a = 4 # scale length
ls = np.arange(nl, dtype=np.float64)
power = 1. / (1. + (ls / a) ** 2) ** 1.5
"""
Explanation: Localized Spectral Analysis on the Sphere
A localised spectral estimate is the spectrum of a global function multiplied by a localization window. In a 2D Cartesian coordinate system, this multiplication becomes a convolution in the Fourier domain. As a result of the convolution, a localized spectral estimate for a given wavelength depends upon a range of coefficients of both the global function and localization window. One says that the window couples the global Fourier coefficients to the localized coefficients.
Windowing on the sphere leads to similar effects: multiplication of a global function by a window couples the spherical harmonic coefficients of the globally defined function to the localized spectrum. In the case of an isotropic and stationary function, a coupling matrix describes the relation between the global power and the localized power, and at high degrees this matrix operation resembles a convolution.
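Schematically, for an isotropic and stationary input, the expected localized spectrum is a weighted sum of the global spectrum (notation introduced here only for illustration: $S_f$ is the global power spectrum, $S_{\Phi}$ the localized one, and $K_{ll'}$ the coupling matrix):
$$
\langle S_{\Phi}(l) \rangle = \sum_{l'} K_{l l'}\, S_{f}(l')
$$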
Generate a random model using the SHCoeffs class interface
We first generate a random model using the SHCoeffs class interface. This class simplifies coefficient generation and provides a simple access to most SHTOOLS functions. We start by defining a global power spectrum that follows a power-law decay:
End of explanation
"""
coeffs_global = SHCoeffs.from_random(power)
power_global = coeffs_global.spectrum()
coeffs_global.plot_spectrum(unit='per_dlogl')
grid_global = coeffs_global.expand('DH2')
grid_global.plot()
"""
Explanation: Next, we generate random coefficients from this input power spectrum, plot the power spectrum of the random realization, and expand the coefficients on a grid:
End of explanation
"""
latgrid, longrid = np.meshgrid(grid_global.lats(), grid_global.lons(), indexing='ij')
window = (-40 < latgrid) & (latgrid < -30) & (10 < longrid) & (longrid < 30)
window += (0 < latgrid) & (latgrid < 30) & (60 < longrid) & (longrid < 80)
window += (-70 < latgrid) & (latgrid < -40) & (130 < longrid) & (longrid < 150)
window += (20 < latgrid) & (latgrid < 40) & (125 < longrid) & (longrid < 145)
window += (10 < latgrid) & (latgrid < 30) & (220 < longrid) & (longrid < 250)
"""
Explanation: Generate a box window function
We next generate a window function that picks a few local regions from the globally defined model. In this example, the window contains sharp boundaries, so the spherical harmonic bandwidth of the function is infinite. In general, this could severely bias the localized power spectrum away from its global value, especially when the power spectrum follows a power law. To mitigate this, we will show in a different tutorial how to construct windows with a specified spherical harmonic bandwidth that are optimally concentrated within the provided region.
This example makes use of the class SHGrid, which is the counterpart to SHCoeffs. Let's start by making an arbitrary mask:
End of explanation
"""
grid_window = SHGrid.from_array(window.astype(np.float64)[::-1]) # DH grid starts at north-pole
grid_window.plot()
coeffs_window = grid_window.expand()
coeffs_window.plot_spectrum(unit='per_dlogl')
power_window = coeffs_window.spectrum()
"""
Explanation: Next, generate an SHGrid instance from the input array, plot the grid, expand it in spherical harmonics, and calculate and plot the power spectrum. Note that the first element of the grid corresponds to 0 degrees longitude and 90 degrees north.
End of explanation
"""
grid_local = SHGrid.from_array(grid_global.data * grid_window.data)
grid_local.plot()
coeffs_local = grid_local.expand()
coeffs_local.plot_spectrum(unit='per_dlogl')
power_local = coeffs_local.spectrum()
"""
Explanation: Multiply the random model with the window
Multiplication of the global model with the window function localizes the data, and its expansion in spherical harmonics gives a local power spectrum estimate. The interaction of the window function with the random model distorts the spectrum. In particular, the output spectrum at degree l is influenced by the input spherical harmonic degrees from l - Lwin to l + Lwin, where Lwin is the spherical harmonic bandwidth of the windowing function.
Let's multiply the data by the window, expand the result in spherical harmonics, and plot the resulting localized power spectrum:
End of explanation
"""
from pyshtools.spectralanalysis import SHMTCouplingMatrix
coupling_matrix = SHMTCouplingMatrix(lmax, power_window.reshape(nl, 1))
fig, (col1, col2) = plt.subplots(1, 2, figsize=(10, 3))
col1.imshow(coupling_matrix, cmap='viridis')
col1.set(xlabel='input degree l', ylabel='output degree l', xlim=(-0.5, 10.5), ylim=(10.5, -0.5),
title='zoomed coupling matrix');
col2.imshow(coupling_matrix, cmap='viridis')
col2.set(xlabel='input degree l', ylabel='output degree l', title='full coupling matrix');
"""
Explanation: Compute the coupling matrix
The expected distortion of the global spectrum due to the windowing operation can be described by a coupling matrix under the assumption that the model is isotropic and stationary. This means that the local (expected) spectra are independent of the position and orientation of the windows. In this case, the expected local spectrum depends only on the power spectrum of the global data and window.
The coupling matrix describes how input power maps to output power. At high degrees, it is approximately symmetric around the diagonal and resembles a convolution in spherical harmonic degree l. At low degrees, however, input power is spread more unevenly into output power. For example, input degree 0 maps preferentially to output degree 4, which is off-diagonal. This is a direct consequence of describing a 2D multiplication with coupling of a single dimension parameter l. A similar bias can be seen when describing coupling in 2D Fourier space between coefficients with similar |k|.
Here, we calculate and plot the coupling matrix using the function SHMTCouplingMatrix:
End of explanation
"""
from pyshtools.spectralanalysis import SHBias
power_local2 = np.dot(coupling_matrix, power)
power_local3 = SHBias(power_window, power)
fig, ax = plt.subplots(1, 1, figsize=(10, 5))
ax.plot(ls, power_global, label='measured global')
ax.plot(ls, power_local, label='measured local')
ax.plot(power_local2, label='predicted local (CouplingMatrix)')
ax.plot(power_local3, 'x', label='predicted local (SHBias)')
ax.legend(loc=3)
ax.set_xscale('log', base=2)
ax.set_yscale('log', base=2)
"""
Explanation: Predict the biased spectrum
Finally, the coupling matrix can be used to predict the spectrum of the windowed function. SHTOOLS also provides the functions SHBiasK and SHBias, which compute the biased spectrum directly without explicitly computing the coupling matrix. Under some assumptions, the coupling matrix can be inverted, allowing the global input power spectrum to be estimated from the localized spectrum. This has to be done with care: windowing is a smoothing operation that loses information (less resolution in degree l space), so the inversion has to be regularized to be stable.
Here, we compute the biased spectrum using two methods, multiplication of the global spectrum by the coupling matrix and the function SHBias, and then plot the results.
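As an aside (not part of the original analysis), a crude illustration of such an inversion using a pseudo-inverse; a real application would need a proper regularization scheme because the problem is ill-conditioned:
```python
# Illustrative sketch only: naive estimate of the global spectrum from the local one.
K = np.asarray(coupling_matrix)
n = min(K.shape[0], K.shape[1], len(power_local))   # guard against shape mismatches
power_global_est = np.linalg.pinv(K[:n, :n]) @ power_local[:n]
```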
End of explanation
"""
|