Columns: repo_name (string), path (string), license (string, 15 classes), cells (list), types (list)
relopezbriega/mi-python-blog
content/notebooks/MachineLearningOverfitting.ipynb
gpl-2.0
[ "Machine Learning con Python - Sobreajuste\nEsta notebook fue creada originalmente como un blog post por Raúl E. López Briega en Matemáticas, Analisis de datos y Python. El contenido esta bajo la licencia BSD.\n<img alt=\"Machine Learning\" title=\"Machine Learning\" src=\"https://relopezbriega.github.io/images/machine-learning.jpg\">\nIntroducción\nUno de los conceptos más importantes en Machine Learning es el overfitting o sobreajuste del modelo. Comprender como un modelo se ajusta a los datos es muy importante para entender las causas de baja precisión en las predicciones. Un modelo va a estar sobreajustado cuando vemos que se desempeña bien con los datos de entrenamiento, pero su precisión es notablemente más baja con los datos de evaluación; esto se debe a que el modelo ha memorizado los datos que ha visto y no pudo generalizar las reglas para predecir los datos que no ha visto. De aquí también la importancia de siempre contar con dos conjuntos de datos distintos, uno para entrenar el modelo y otro para evaluar su precisión; ya que si utilizamos el mismo dataset para las dos tareas, no tendríamos forma de determinar como el modelo se comporta con datos que nunca ha visto.\n¿Cómo reconocer el sobreajuste?\nEn líneas generales el sobreajuste va a estar relacionado con la complejidad del modelo, mientras más complejidad le agreguemos, mayor va a ser la tendencia a sobreajustarse a los datos, ya que va a contar con mayor flexibilidad para realizar las predicciones y puede ser que los patrones que encuentre estén relacionados con el ruido (pequeños errores aleatorios) en los datos y no con la verdadera señal o relación subyacente. \nNo existe una regla general para establecer cual es el nivel ideal de complejidad que le podemos otorgar a nuestro modelo sin caer en el sobreajuste; pero podemos valernos de algunas herramientas analíticas para intentar entender como el modelo se ajusta a los datos y reconocer el sobreajuste. Veamos un ejemplo.\nÁrboles de Decisión y sobreajuste\nLos Árboles de Decisión pueden ser muchas veces una herramienta muy precisa, pero también con mucha tendencia al sobreajuste. Para construir estos modelos aplicamos un procedimiento recursivo para encontrar los atributos que nos proporcionan más información sobre distintos subconjuntos de datos, cada vez más pequeños. Si aplicamos este procedimiento en forma reiterada, eventualmente podemos llegar a un árbol en el que cada hoja tenga una sola instancia de nuestra variable objetivo a clasificar. En este caso extremo, el Árbol de Decisión va a tener una pobre generalización y estar bastante sobreajustado; ya que cada instancia de los datos de entrenamiento va a encontrar el camino que lo lleve eventualmente a la hoja que lo contiene, alcanzando así una precisión del 100% con los datos de entrenamiento. 
Veamos un ejemplo sencillo con la ayuda de Python.", "# <!-- collapse=True -->\n# Importando las librerías que vamos a utilizar\nimport pandas as pd\nimport numpy as np \nimport matplotlib.pyplot as plt \nimport seaborn as sns \nfrom sklearn.cross_validation import train_test_split\nfrom sklearn.datasets import make_classification\nfrom sklearn.svm import SVC\nfrom sklearn.tree import DecisionTreeClassifier\nimport random; random.seed(1982)\n\n# graficos incrustados\n%matplotlib inline\n\n# parametros esteticos de seaborn\nsns.set_palette(\"deep\", desat=.6)\nsns.set_context(rc={\"figure.figsize\": (8, 4)})\n\n# Ejemplo en python - árboles de decisión\n# dummy data con 100 atributos y 2 clases\nX, y = make_classification(10000, 100, n_informative=3, n_classes=2,\n random_state=1982)\n\n# separ los datos en train y eval\nx_train, x_eval, y_train, y_eval = train_test_split(X, y, test_size=0.35, \n train_size=0.65,\n random_state=1982)\n\n# creando el modelo sin control de profundidad, va a continuar hasta\n# que todas las hojas sean puras\narbol = DecisionTreeClassifier(criterion='entropy')\n\n# Ajustando el modelo\narbol.fit(x_train, y_train)\n\n# precisión del modelo en datos de entrenamiento.\nprint(\"precisión entranamiento: {0: .2f}\".format(\n arbol.score(x_train, y_train)))", "Logramos una precisión del 100 %, increíble, este modelo no se equivoca! deberíamos utilizarlo para jugar a la lotería y ver si ganamos algunos millones; o tal vez, no?. Veamos como se comporta con los datos de evaluación.", "# precisión del modelo en datos de evaluación.\nprint(\"precisión evaluación: {0: .2f}\".format(\n arbol.score(x_eval, y_eval)))", "Ah, ahora nuestro modelo ya no se muestra tan preciso, esto se debe a que seguramente esta sobreajustado, ya que dejamos crecer el árbol hasta que cada hoja estuviera pura (es decir que solo contenga datos de una sola de las clases a predecir). Una alternativa para reducir el sobreajuste y ver si podemos lograr que generalice mejor y por tanto tenga más precisión para datos nunca vistos, es tratar de reducir la complejidad del modelo por medio de controlar la profundidad que puede alcanzar el Árbol de Decisión.", "# profundidad del arbol de decisión.\narbol.tree_.max_depth", "Este caso nuestro modelo tiene una profundidad de 22 nodos; veamos si reduciendo esa cantidad podemos mejorar la precisión en los datos de evaluación. Por ejemplo, pongamos un máximo de profundidad de tan solo 5 nodos.", "# modelo dos, con control de profundiad de 5 nodos\narbol2 = DecisionTreeClassifier(criterion='entropy', max_depth=5)\n\n# Ajustando el modelo\narbol2.fit(x_train, y_train)\n\n# precisión del modelo en datos de entrenamiento.\nprint(\"precisión entranamiento: {0: .2f}\".format(\n arbol2.score(x_train, y_train)))", "Ahora podemos ver que ya no tenemos un modelo con 100% de precisión en los datos de entrenamiento, sino que la precisión es bastante inferior, 92%, sin embargo si ahora medimos la precisión con los datos de evaluación vemos que la precisión es del 90%, 3 puntos por arriba de lo que habíamos conseguido con el primer modelo que nunca se equivocaba en los datos de entrenamiento.", "# precisión del modelo en datos de evaluación.\nprint(\"precisión evaluación: {0: .2f}\".format(\n arbol2.score(x_eval, y_eval)))", "Esta diferencia se debe a que reducimos la complejidad del modelo para intentar ganar en generalización. 
También debemos tener en cuenta que si seguimos reduciendo la complejidad, podemos crear un modelo demasiado simple que en vez de estar sobreajustado puede tener un desempeño muy por debajo del que podría tener; podríamos decir que el modelo estaría infraajustado y tendría un alto nivel de sesgo. Para ayudarnos a encontrar el término medio entre la complejidad del modelo y su ajuste a los datos, podemos ayudarnos de herramientas gráficas. Por ejemplo podríamos crear diferentes modelos, con distintos grados de complejidad y luego graficar la precisión en función de la complejidad.", "# Grafico de ajuste del árbol de decisión\ntrain_prec = []\neval_prec = []\nmax_deep_list = list(range(3, 23))\n\nfor deep in max_deep_list:\n arbol3 = DecisionTreeClassifier(criterion='entropy', max_depth=deep)\n arbol3.fit(x_train, y_train)\n train_prec.append(arbol3.score(x_train, y_train))\n eval_prec.append(arbol3.score(x_eval, y_eval))\n\n# graficar los resultados.\nplt.plot(max_deep_list, train_prec, color='r', label='entrenamiento')\nplt.plot(max_deep_list, eval_prec, color='b', label='evaluacion')\nplt.title('Grafico de ajuste arbol de decision')\nplt.legend()\nplt.ylabel('precision')\nplt.xlabel('cant de nodos')\nplt.show()", "El gráfico que acabamos de construir se llama gráfico de ajuste y muestra la precisión del modelo en función de su complejidad. En nuestro ejemplo, podemos ver que el punto con mayor precisión, en los datos de evaluación, lo obtenemos con un nivel de profundidad de aproximadamente 5 nodos; a partir de allí el modelo pierde en generalización y comienza a estar sobreajustado. También podemos crear un gráfico similar con la ayuda de Scikit-learn, utilizando validation_curve.", "# utilizando validation curve de sklearn\nfrom sklearn.learning_curve import validation_curve\n\ntrain_prec, eval_prec = validation_curve(estimator=arbol, X=x_train,\n y=y_train, param_name='max_depth',\n param_range=max_deep_list, cv=5)\n\ntrain_mean = np.mean(train_prec, axis=1)\ntrain_std = np.std(train_prec, axis=1)\ntest_mean = np.mean(eval_prec, axis=1)\ntest_std = np.std(eval_prec, axis=1)\n\n# graficando las curvas\nplt.plot(max_deep_list, train_mean, color='r', marker='o', markersize=5,\n label='entrenamiento')\nplt.fill_between(max_deep_list, train_mean + train_std, \n train_mean - train_std, alpha=0.15, color='r')\nplt.plot(max_deep_list, test_mean, color='b', linestyle='--', \n marker='s', markersize=5, label='evaluacion')\nplt.fill_between(max_deep_list, test_mean + test_std, \n test_mean - test_std, alpha=0.15, color='b')\nplt.grid()\nplt.legend(loc='center right')\nplt.xlabel('Cant de nodos')\nplt.ylabel('Precision')\nplt.show()", "En este gráfico, también podemos ver que nuestro modelo tiene bastante varianza, representada por el área esfumada.\nMétodos para reducir el Sobreajuste\nAlgunas de las técnicas que podemos utilizar para reducir el Sobreajuste, son:\n\nUtilizar validación cruzada.\nRecolectar más datos.\nIntroducir una penalización a la complejidad con alguna técnica de regularización.\nOptimizar los parámetros del modelo con grid search.\nReducir la dimensión de los datos.\nAplicar técnicas de selección de atributos.\nUtilizar modelos ensamblados.\n\nVeamos algunos ejemplos.\nValidación cruzada\nLa validación cruzada se inicia mediante el fraccionamiento de un conjunto de datos en un número $k$ de particiones (generalmente entre 5 y 10) llamadas pliegues. La validación cruzada luego itera entre los datos de evaluación y entrenamiento $k$ veces, de un modo particular. 
En cada iteración de la validación cruzada, un pliegue diferente se elige como los datos de evaluación. En esta iteración, los otros pliegues $k-1$ se combinan para formar los datos de entrenamiento. Por lo tanto, en cada iteración tenemos $(k-1) / k$ de los datos utilizados para el entrenamiento y $1 / k$ utilizado para la evaluación.\nCada iteración produce un modelo, y por lo tanto una estimación del rendimiento de la generalización, por ejemplo, una estimación de la precisión. Una vez finalizada la validación cruzada, todos los ejemplos se han utilizado sólo una vez para evaluar pero $k -1$ veces para entrenar. En este punto tenemos estimaciones de rendimiento de todos los pliegues y podemos calcular la media y la desviación estándar de la precisión del modelo. Veamos un ejemplo\n<img alt=\"Validacion cruzada\" title=\"Validacion cruzada\" src=\"https://relopezbriega.github.io/images/validacion_cruzada.png\">", "# Ejemplo cross-validation\nfrom sklearn import cross_validation\n\n# creando pliegues\nkpliegues = cross_validation.StratifiedKFold(y=y_train, n_folds=10,\n random_state=2016)\n# iterando entre los plieges\nprecision = []\nfor k, (train, test) in enumerate(kpliegues):\n arbol2.fit(x_train[train], y_train[train]) \n score = arbol2.score(x_train[test], y_train[test])\n precision.append(score)\n print('Pliegue: {0:}, Dist Clase: {1:}, Prec: {2:.3f}'.format(k+1,\n np.bincount(y_train[train]), score))\n\n# imprimir promedio y desvio estandar\nprint('Precision promedio: {0: .3f} +/- {1: .3f}'.format(np.mean(precision),\n np.std(precision)))", "En este ejemplo, utilizamos el <a href=\"https://es.wikipedia.org/wiki/Iterador_(patr%C3%B3n_de_dise%C3%B1o)\">iterador</a> StratifiedKFold que nos proporciona Scikit-learn. Este <a href=\"https://es.wikipedia.org/wiki/Iterador_(patr%C3%B3n_de_dise%C3%B1o)\">iterador</a> es una versión mejorada de la validación cruzada, ya que cada pliegue va a estar estratificado para mantener las proporciones entre las clases del conjunto de datos original, lo que suele dar mejores estimaciones del sesgo y la varianza del modelo. También podríamos utilizar cross_val_score que ya nos proporciona los resultados de la precisión que tuvo el modelo en cada pliegue.", "# Ejemplo con cross_val_score\nprecision = cross_validation.cross_val_score(estimator=arbol2,\n X=x_train, y=y_train,\n cv=10, n_jobs=-1)\n\nprint('precisiones: {}'.format(precision))\nprint('Precision promedio: {0: .3f} +/- {1: .3f}'.format(np.mean(precision),\n np.std(precision)))", "Más datos y curvas de aprendizaje\nMuchas veces, reducir el Sobreajuste es tan fácil como conseguir más datos, dame más datos y te predeciré el futuro!. Aunque en la vida real nunca es una tarea tan sencilla conseguir más datos. Otra herramienta analítica que nos ayuda a entender como reducimos el Sobreajuste con la ayuda de más datos, son las curvas de aprendizaje, las cuales grafican la precisión en función del tamaño de los datos de entrenamiento. 
Veamos como podemos graficarlas con la ayuda de Python.\n<img alt=\"Curva de aprendizaje\" title=\"Curva de aprendizaje\" src=\"https://relopezbriega.github.io/images/curva_aprendizaje.png\" width=\"600px\" height=\"600px\" >", "# Ejemplo Curvas de aprendizaje\nfrom sklearn.learning_curve import learning_curve\n\ntrain_sizes, train_scores, test_scores = learning_curve(estimator=arbol2,\n X=x_train, y=y_train, \n train_sizes=np.linspace(0.1, 1.0, 10), cv=10,\n n_jobs=-1)\n\ntrain_mean = np.mean(train_scores, axis=1)\ntrain_std = np.std(train_scores, axis=1)\ntest_mean = np.mean(test_scores, axis=1)\ntest_std = np.std(test_scores, axis=1)\n\n# graficando las curvas\nplt.plot(train_sizes, train_mean, color='r', marker='o', markersize=5,\n label='entrenamiento')\nplt.fill_between(train_sizes, train_mean + train_std, \n train_mean - train_std, alpha=0.15, color='r')\nplt.plot(train_sizes, test_mean, color='b', linestyle='--', \n marker='s', markersize=5, label='evaluacion')\nplt.fill_between(train_sizes, test_mean + test_std, \n test_mean - test_std, alpha=0.15, color='b')\nplt.grid()\nplt.title('Curva de aprendizaje')\nplt.legend(loc='upper right')\nplt.xlabel('Cant de ejemplos de entrenamiento')\nplt.ylabel('Precision')\nplt.show()", "En este gráfico podemos ver claramente como con pocos datos la precisión entre los datos de entrenamiento y los de evaluación son muy distintas y luego a medida que la cantidad de datos va aumentando, el modelo puede generalizar mucho mejor y las precisiones se comienzan a emparejar. Este gráfico también puede ser importante a la hora de decidir invertir en la obtención de más datos, ya que por ejemplo nos indica que a partir las 2500 muestras, el modelo ya no gana mucha más precisión a pesar de obtener más datos.\nOptimización de parámetros con Grid Search\nLa mayoría de los modelos de Machine Learning cuentan con varios parámetros para ajustar su comportamiento, por lo tanto otra alternativa que tenemos para reducir el Sobreajuste es optimizar estos parámetros por medio de un proceso conocido como grid search e intentar encontrar la combinación ideal que nos proporcione mayor precisión. El enfoque que utiliza grid search es bastante simple, se trata de una búsqueda exhaustiva por el paradigma de fuerza bruta en el que se especifica una lista de valores para diferentes parámetros, y la computadora evalúa el rendimiento del modelo para cada combinación de éstos parámetros para obtener el conjunto óptimo que nos brinda el mayor rendimiento. \nVeamos un ejemplo utilizando un modelo de SVM o Máquinas de vectores de soporte, la idea va a ser optimizar los parámetros gamma y C de este modelo. El parámetro gamma define cuan lejos llega la influencia de un solo ejemplo de entrenamiento, con valores bajos que significan \"lejos\" y los valores altos significan \"cerca\". El parámetro C es el que establece la penalización por error en la clasificación un valor bajo de este parámetro hace que la superficie de decisión sea más lisa, mientras que un valor alto tiene como objetivo que todos los ejemplos se clasifiquen correctamente, dándole más libertad al modelo para elegir más ejemplos como vectores de soporte. 
Tengan en cuenta que como todo proceso por fuerza bruta, puede tomar bastante tiempo según la cantidad de parámetros que utilicemos para la optimización.", "# Ejemplo de grid search con SVM.\nfrom sklearn.grid_search import GridSearchCV\n\n# creación del modelo\nsvm = SVC(random_state=1982)\n\n# rango de parametros\nrango_C = np.logspace(-2, 10, 10)\nrango_gamma = np.logspace(-9, 3, 10)\nparam_grid = dict(gamma=rango_gamma, C=rango_C)\n\n# crear grid search\ngs = GridSearchCV(estimator=svm, param_grid=param_grid, scoring='accuracy',\n cv=5,n_jobs=-1)\n\n# comenzar el ajuste\ngs = gs.fit(x_train, y_train)\n\n# imprimir resultados\nprint(gs.best_score_)\nprint(gs.best_params_)\n\n# utilizando el mejor modelo\nmejor_modelo = gs.best_estimator_\nmejor_modelo.fit(x_train, y_train)\nprint('Precisión: {0:.3f}'.format(mejor_modelo.score(x_eval, y_eval)))", "En este ejemplo, primero utilizamos el objeto GridSearchCV que nos permite realizar grid search junto con validación cruzada, luego comenzamos a ajustar el modelo con las diferentes combinaciones de los valores de los parámetros gamma y C. Finalmente imprimimos el mejor resultado de precisión y los valores de los parámetros que utilizamos para obtenerlos; por último utilizamos este mejor modelo para realizar las predicciones con los datos de evaluación. Podemos ver que la precisión que obtuvimos con los datos de evaluación es casi idéntica a la que nos indicó grid search, lo que indica que el modelo generaliza muy bien.\nAquí termina este artículo, sobre la selección de atributos, pueden visitar el artículo que dedique a ese tema en este link; en cuando a modelos ensamblados y reducción de dimensiones de los datos, espero escribir sobre esos temas en artículos futuros, no se los pierdan!\nGracias por visitar el blog y saludos!\nEste post fue escrito utilizando IPython notebook. Pueden descargar este notebook o ver su version estática en nbviewer." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
simkovic/simkovic.github.io
_ipynb/No Way Anova - Logistic Regression truimphs Anova.ipynb
mit
[ "The best argument against Anova is to show how the analysis will look like if we used parameter estimation instead. With complex experimental design this means that we will use regression. Most psychologists think of linear regression. However, the approach extends to general linear models of the sort $y=f(b_0+b_1\\cdot x_1 + b_2\\cdot x_2 \\dots)$. Linear regression is just a special case when $f(\\cdot)$ is the identity function $y=x$. \nWhen the outcome variable takes binary values $0,1$ then it is better to use logistic regression than the linear regression. Here the link function is $f(x)= 1/(1+e^{-x})$ is the logistic function. As with any kind of data the default approach in psychology research is to analyze binary outcomes with Anova. Let's see what are the consequences.\nIn our first demonstration we will take a look at a study by Mussel and colleagues published in JDM.\nThe study investigated whether a smiling/angry/neutral face influences collaboration in an ultimatum game (i.e. prisoner's dilemma with no iteration). The subjects were shown facial expression of the opponent avatar and the amount of money he offered. Then they were asked whether they wish to collaborate. The oppenet could propose the share a fraction from the total of 14 cents. The offers ranged from 7c (overly fair) to 1c (unfair) in 1c steps and subjects decided whether they accept the offer or not. The authors also varied whether the face was male or female. The combination of these factors gives us 2x3x7=42 stimuli. Each of 1326 subjects was shown all 42 stimuli in random order and either collaborated (1) or not (0). Trial ended if subjects failed to respond within a 3 second time limit. In the data set these trials were noted as missing values. Next, authors replaced missing values with subject-wise averages. They then analysed the data with a repeated measures Anova with two factors: fairness (7 levels) and face expression (3 levels). As the high N suggests both main-effects and their interaction were significant. \nAlso all post-hoc comparisons were significant (p<.05, bonf. corrected). Smiling conditions had higher collaboration rate than neutral and neutral was in turn higher than angry. When subjects were offered more money they collaborated more. So what can we conclude? Not much. Smiling face increases the offer acceptance. More generous offer does also increase acceptance. No clue where the interaction comes from. \nWith N=1326 the $p$ values are almost entirely driven by the large sample size. But of course the magnitude of the effect matters. If the collaboration rate is 70 % for smiling faces and 40% for neutral than this is notable. But if the rates are 51 and 50% respectively we don't really care. If the collaboration rates were 95% and 90% we would consider the study inconclusive due to ceiling effects. $p$ values are inherently incapable of providing this information. We need effect size estimates.\nWe are fortunate. Mussel et al. are good academic citizens and report the standardized effect size. \n$$ \\eta_{\\mathrm{face}}=.1 $$\n$$ \\eta_{\\mathrm{money}}=.39 $$\n$$ \\eta_{\\mathrm{money}\\times \\mathrm{face}}=.01 $$\nAre we smarter now? How should we interpret these effect sizes? Is $ \\eta_{\\mathrm{face}}=.1 $ more like the 70%-40% difference or more like the 51%-50% difference? Presumably, the last question is not what we are supposed to ask. Rather, we need to survey the decision making literature to see what are the usual effect sizes and take these as a benchmark for comparison.. 
Taking the standards from Fritz et al. (2011) we would say that the facial expression has small effect while money has medium effect on acceptance. The interaction shows minuscule effect size and should be presumably discared. \nOne difficulty with this interpretation is that the sums of squares on which $\\eta$ is based depend on the variance of the predictors. The effect size could be improved by using fewer money offer levels. Similar, the addition of face gender factor influences each of the effect size estimates above, because it presumably increases the error variance. As such effect sizes can't be compared across studies, not even across replications if these aren't exact. \nThe only reason why most of the papers with of the Anova analyses are worth reading after all, is that they include a figure with plotted group averages. Mussel et al. show mercy and give us the following graph. \n<image src=\"http://journal.sjdm.org/12/12817/jdm12817002.png\"> </image>\nOk. Now we see it. The interaction is due to ceiling/floor effects for large and small sums respectively. We are really interested in what happens in the middle. Here, the extra smile increases (compared to neutral face) the acceptance by 5-10%. Angry face shows an effect of similar magnitude but goes in the opposite direction. \nWhy is it not possible to do this kind of analysis formally? It is. But we need to something else than Anova.\nWe formulate a regression model with acceptance ($\\mathrm{coop}$) as outcome and face expression ($\\mathrm{face}$) and offered money sum ($\\mathrm{fair}$) as predictors.\n$$\\mathrm{coop}{i,j} \\sim \\mathrm{Bern}(\\pi{i,j})$$\n$$\\pi_{i,j} = \\mathrm{logit}^{-1}(\\alpha_{\\mathrm{face}[i,j]}+\\beta_{\\mathrm{face}[i,j]}\\mathrm{fair}[i,j])(1-\\gamma_{\\mathrm{face}[i,j]}-\\delta_{\\mathrm{face}[i,j]})+\\gamma_{\\mathrm{face}[i,j]}$$\nWe enter money sum as continuous predictor of acceptance at each trial $j$ for each subject $i$. We fit a separate model for each type of face expression and for each subject. $\\mathrm{face}[i,j]$ is the index of the face expression which subject $i$ saw at trial $j$. The parameters $\\gamma$ and $\\delta$ determine the level (of acceptance) where the logit function becomes flat. Without this addition the logit curve would become flat at $[0,1]$. We can see from the figure from the paper that this is not the case. \nLet's look at how the logistic function works by playing around with its parameters.", "from scipy.stats import scoreatpercentile as sap\nclr=(0.2, 0.5, 0.6)\n%pylab inline\nplt.figure(figsize=(6,4))\nplt.subplot(2,2,1);plt.title('alpha')\nx=np.arange(1,9,0.1)\nfor a in np.arange(2,9).tolist():\n y=1/(1+exp(a-1*x))*1+0\n plt.plot(x,y,'b')\nplt.grid(b=False,axis='x');plt.ylim([0,1]);plt.xlim([1,8])\nplt.subplot(2,2,2);plt.title('beta')\nfor b in np.arange(-1,3,0.5).tolist():\n y=1/(1+exp(4.5*2**b-2**b*x))*1+0\n plt.plot(x,y,'b')\nplt.grid(b=False,axis='x');plt.ylim([0,1]);plt.xlim([1,8])\nplt.subplot(2,2,3);plt.title('gamma')\nfor c in np.arange(0,0.8,0.1).tolist():\n y=1/(1+exp(4.5-x))*(1-c)+c\n plt.plot(x,y,'b'); \nplt.grid(b=False,axis='x');plt.ylim([0,1]);plt.xlim([1,8])\nplt.subplot(2,2,4);plt.title('delta')\nfor d in np.arange(0,0.8,0.1).tolist():\n y=1/(1+exp(4.5-x))*(1-0.1-d)+0.1\n plt.plot(x,y,'b'); \nplt.grid(b=False,axis='x');plt.ylim([0,1]);plt.xlim([1,8]);", "We see that $\\alpha$ shifts the function on the x axis. $\\beta$ alters the steepness of the function. 
As already mention $\\gamma$ and $\\delta$ determine the floor and the ceiling of the function.\nNext let's perform the analysis. We first load the data.", "from urllib import urlopen\nf=urlopen('http://journal.sjdm.org/12/12817/data.csv')\nD=np.loadtxt(f,skiprows=3,delimiter=',')[:,7:]\nf.close()\nD.shape\n\n# predictors\nvfair=np.array(([range(1,8)]*6)).flatten() # in cents\nvmale=np.ones(42,dtype=int); vmale[21:]=0\nvface=np.int32(np.concatenate([np.zeros(7),np.ones(7),np.ones(7)*2]*2))\n# anova format\nsid=[];face=[];fair=[]\nfor i in range(D.shape[0]):\n for j in range(D.shape[1]):\n sid.append(i)\n #face.append(['angry','neutral','smile'][vface[j]])\n face.append(vface[j])\n fair.append(vfair[j])\ncoop=D.flatten()\nsid=np.array(sid)\nface=np.array(face)\nfair=np.array(fair)\nassert np.all(coop[:42]==D[0,:])\nprint coop.size,len(sid),len(face),len(fair)\nprint D.shape", "It is good to do some manual fitting to see just how the logistic curve behaves but also to assure ourselves that the we can get a shape similar the pattern of our data.", "D[D==2]=np.nan\nR=np.zeros((3,7))\nfor i in np.unique(vface).tolist():\n for j in np.unique(vfair).tolist():\n sel=np.logical_and(i==vface,j==vfair)\n R[i,j-1]=np.nansum(D[:,sel])/(~np.isnan(D[:,sel])).sum()\nfor i in range(3):\n y=1/(1+exp(4.5-1.2*np.arange(1,8)))*0.5+[0.4,0.44,0.47][i]\n plt.plot(range(1,8),y,':',color=['b','r','g'][i])\n plt.plot(range(1,8),R[i,:],color=['b','r','g'][i])\nplt.legend(['Data Angry','Model Angry','Data Neutral',\n 'Model Neutral','Data Smile','Model Smile'],loc=4);", "Above I fitted the logistic curve to the data. I got the parameter values by several iterations of trial-and-error. We want to obtain precise estimates and also we wish to get an idea about the uncertainty of the estimate. 
We implement the model in STAN.", "import pystan\n\nmodel = \"\"\"\ndata {\n int<lower=0> N;\n int<lower=0,upper=1> coop[N]; // acceptance\n int<lower=0,upper=8> fair[N]; // fairness\n int<lower=1,upper=3> face[N]; // face expression\n}\nparameters {\n real<lower=-20,upper=20> alpha[3];\n real<lower=0,upper=10> beta[3];\n simplex[3] gamm[3];\n\n}\ntransformed parameters{\n vector[N] x;\n for (i in 1:N)\n x[i]<-inv_logit(alpha[face[i]]+beta[face[i]]*fair[i])\n *gamm[face[i]][3]+gamm[face[i]][1]; \n}\nmodel {\n coop ~ bernoulli(x);\n}\n\"\"\"\n#inpar=[{'alpha':[-4.5,-4.5,-4.5],'beta':[1.2,1.2,1.2],\n# 'gamma':[0.4,0.44,0.47],'delta':[0.1,0.06,0.03]}]*4\nsm = pystan.StanModel(model_code=model)", "Run it!", "dat = {'N': coop.size,'coop':np.int32(coop),'fair':fair,'face':face+1,'sid':sid}\nseed=np.random.randint(2**16)\nfit=sm.sampling(data=dat,iter=6000,chains=4,thin=5,warmup=2000,n_jobs=4,seed=seed)\nprint pystan.misc._print_stanfit(fit,pars=['alpha','beta','gamm'],digits_summary=2)\nw= fit.extract()\nnp.save('alpha',w['alpha'])\nnp.save('gamm',w['gamm'])\nnp.save('beta',w['beta'])\ndel w\ndel fit", "Here are the results.", "#w=fit.summary(pars=['alpha','beta','gamm'])\n#np.save('logregSummary.fit',w)\n#w=np.load('logregSummary.fit.npy')\n#w=w.tolist()\na=np.load('m1alpha.npy')\nb=np.load('m1beta.npy')\ng=np.load('m1gamm.npy')[:,:,0]\nd=np.load('m1gamm.npy')[:,:,1]\n#D[D==2]=np.nan\nfor i in range(3):\n x=np.linspace(1,7,101)\n y=1/(1+exp(-np.median(a[:,i])-np.median(b[:,i])*x))*np.median(1-g[:,i]-d[:,i])+np.median(g[:,i])\n plt.plot(x,y,':',color=['b','r','g'][i])\n plt.plot(range(1,8),R[i,:],color=['b','r','g'][i])\nplt.legend(['Data Angry','Model Angry','Data Neutral',\n 'Model Neutral','Data Smile','Model Smile'],loc=4);\n#for j in range(lp.size): print '%.3f [%.3f, %.3f]'%(prs[j],lp[j],up[j]) ", "The model fits quite well. We now look at the estimated values for different face conditions.", "D=np.concatenate([a,b,g,1-d,1-g-d],1)\nprint D.shape\nfor n in range(D.shape[1]):\n plt.subplot(2,3,[1,2,4,5,6][n/3])\n k=n%3\n plt.plot([k,k],[sap(D[:,n],2.5),sap(D[:,n],97.5)],color=clr)\n plt.plot([k,k],[sap(D[:,n],25),sap(D[:,n],75)],color=clr,lw=3,solid_capstyle='round')\n plt.plot([k],[np.median(D[:,n])],mfc=clr,mec=clr,ms=8,marker='_',mew=2)\n plt.xlim([-0.5,2.5])\n plt.grid(b=False,axis='x')\n plt.title(['alpha','beta','gamma','delta','1-gamma-delta'][n/3])\n plt.gca().set_xticks([0,1,2])\n plt.gca().set_xticklabels(['angry','neutral','smile'])", "The estimates show what we already more-or-less inferred from the graph. The 95% interval for $\\alpha$ and $\\beta$ coefficients are overlapping and we should consider model with the same horizontal shift and steepness for each of the face conditions. We see that the $\\gamma$ and $\\delta$ vary between the conditions. To better understand what is happening consider the width of the acceptance band in each condition given by $1-\\gamma-\\delta$ shown in the right bottom panel. From the figure it looks like all three curves occupy a band of the same width. The estimation confirms this for the case of neutral and smile condition whose estimates overlap almost perfectly. In the angry condition it is not clear where the bottom floor of the logit curve is located. The curve is still linear for lower offers. This means that a) the $1-\\gamma-\\delta$ estimate is larger in angry condition and b) the estimate is more uncertain. 
We can reasonably argue that $1-\\gamma-\\delta$ should be equal across conditions and that discrepant estimate for angry condition is due to error or some strange money-face interaction which we are not interested in. We end up with the following model.\n$$\\mathrm{coop}{i,j} \\sim \\mathrm{Bern}(\\pi{i,j})$$\n$$\\pi_{i,j} =\\mathrm{logit}^{-1}(\\alpha+\\beta\\cdot\\mathrm{fair}[i,j])\\cdot \\nu+\\gamma_{\\mathrm{face}[i,j]}$$", "import pystan\n\nmodel = \"\"\"\ndata {\n int<lower=0> N;\n int<lower=0,upper=1> coop[N]; // acceptance\n int<lower=0,upper=8> fair[N]; // fairness\n int<lower=1,upper=3> face[N]; // face expression\n}\nparameters {\n real<lower=-20,upper=20> alpha;\n real<lower=0,upper=10> beta;\n real<lower=0,upper=1> gamm[3];\n real<lower=0,upper=1> delt[3];\n \n\n}\ntransformed parameters{\n vector[N] x;\n vector[3] gamma[3];\n for (i in 1:3){\n gamma[i][1]<-gamm[i];\n gamma[i][2]<-delt[i];\n gamma[i][3]<-1-gamm[i]-delt[i];\n }\n for (i in 1:N)\n x[i]<-inv_logit(alpha+beta*fair[i])\n *gamma[face[i]][3]+gamma[face[i]][1]; \n}\nmodel {\n coop ~ bernoulli(x);\n}\n\"\"\"\nsm = pystan.StanModel(model_code=model)\n\ndat = {'N': coop.size,'coop':np.int32(coop),'fair':fair,'face':face+1,'sid':sid}\nseed=np.random.randint(2**16)\nfit=sm.sampling(data=dat,iter=5000,chains=4,thin=5,warmup=2000,n_jobs=4,seed=seed)\noutpars=['alpha','beta','gamm','delt']\nprint pystan.misc._print_stanfit(fit,pars=outpars,digits_summary=2)\nw= fit.extract()\nfor op in outpars: np.save(op,w[op])\ndel w\ndel fit\n\na=np.load('alpha.npy')\nb=np.load('beta.npy')\ng=np.load('gamm.npy')\nd=np.load('delt.npy')\n#D[D==2]=np.nan\nfor i in range(3):\n x=np.linspace(1,7,101)\n y=1/(1+exp(-np.median(a)-np.median(b)*x))*np.median(1-g[:,i]-d[:,i])+np.median(g[:,i])\n plt.plot(x,y,':',color=['b','r','g'][i])\n plt.plot(range(1,8),R[i,:],color=['b','r','g'][i])\nplt.legend(['Data Angry','Model Angry','Data Neutral',\n 'Model Neutral','Data Smile','Model Smile'],loc=4);\n\nfrom scipy.stats import scoreatpercentile as sap\n\nprint g.T.shape, np.atleast_2d(d).shape\nD=np.concatenate([np.atleast_2d(a),np.atleast_2d(b),np.atleast_2d(b),g.T,1-d.T,1-g.T-d.T],0).T\nprint D.shape\nfor n in range(D.shape[1]):\n plt.subplot(2,3,[1,2,4,5,6][n/3])\n k=n%3\n plt.plot([k,k],[sap(D[:,n],2.5),sap(D[:,n],97.5)],color=clr)\n plt.plot([k,k],[sap(D[:,n],25),sap(D[:,n],75)],color=clr,lw=3,solid_capstyle='round')\n plt.plot([k],[np.median(D[:,n])],mfc=clr,mec=clr,ms=8,marker='_',mew=2)\n plt.xlim([-0.5,2.5])\n plt.grid(b=False,axis='x')\n plt.title(['alpha-beta','gamma','delta','1-gamma-delta'][n/3])\n plt.gca().set_xticks([0,1,2])\n plt.gca().set_xticklabels(['angry','neutral','smile'])", "Furthermore, we are concerned about the fact the the comparison across conditions is done within-subject and that the observed values are not independent. We extend the model by fitting separate logistic model to each subject. In particular, we estimate a separate $\\gamma$ parameter for each subject i.e. $\\gamma_{i,\\mathrm{face}[i,j]}$. 
We use hierarchical prior that pools the estimates across subjects and also takes care of the correlation between conditions.\n$$ \\begin{bmatrix}\n\\gamma_{i,s} \\ \\gamma_{i,n} \\ \\gamma_{i,a}\n\\end{bmatrix}\n\\sim \\mathcal{N} \\Bigg(\n\\begin{bmatrix}\n\\mu_s \\ \\mu_n \\ \\mu_a\n\\end{bmatrix}\n,\\Sigma \\Bigg)$$\nwhere\n$$\n\\Sigma=\n\\begin{pmatrix}\n\\sigma_s^2 & \\sigma_s r_{sn} \\sigma_n & \\sigma_s r_{sa} \\sigma_a \\\n\\sigma_s r_{sn} \\sigma_n & \\sigma_n^2 & \\sigma_n r_{na} \\sigma_a \\\n \\sigma_s r_{sa} \\sigma_a & \\sigma_n r_{na} \\sigma_a & \\sigma_a^2 \\\n\\end{pmatrix}\n$$\nFor each condition we are estimating population mean $\\mu$ and population variance $\\sigma^2$. Furthermore, we estimate correlation $r$ for each pair of conditions. As a consequence the estimate $\\mu$ are not confounded by the correlation.", "import pystan\n\nmodel = \"\"\"\ndata {\n int<lower=0> N;\n int<lower=0> M; // number of subjects\n int sid[N]; // subject identifier\n int<lower=0,upper=1> coop[N]; // acceptance\n int<lower=0,upper=8> fair[N]; // fairness\n int<lower=1,upper=3> face[N]; // face expression\n \n}\nparameters {\n real<lower=-20,upper=20> alpha;\n real<lower=0,upper=10> beta;\n vector<lower=0,upper=1>[3] gamm[M];\n real<lower=0,upper=1> delt;\n vector<lower=0,upper=1>[3] mu;\n vector<lower=0,upper=1>[3] sigma;\n vector<lower=-1,upper=1>[3] r;\n \n\n}\ntransformed parameters{\n vector[N] x;\n vector[3] gammt[3,M];\n matrix[3,3] S;\n for (i in 1:3) S[i,i]<-square(sigma[i]);\n S[1,2]<- sigma[1]*r[1]*sigma[2];S[2,1]<-S[1,2];\n S[1,3]<- sigma[1]*r[2]*sigma[3];S[3,1]<-S[1,3];\n S[2,3]<- sigma[3]*r[3]*sigma[2];S[3,2]<-S[2,3];\n for (m in 1:M){\n for (i in 1:3){\n gammt[i][m][1]<-gamm[m][i];\n gammt[i][m][3]<-delt;\n gammt[i][m][2]<- 1- gammt[i][m][1]-gammt[i][m][3];\n }}\n for (i in 1:N)\n x[i]<-inv_logit(alpha+beta*fair[i])\n *gammt[face[i]][sid[i]][3]+gammt[face[i]][sid[i]][1]; \n}\nmodel {\n for (i in 1:M) gamm[i]~multi_normal(mu,S);\n coop ~ bernoulli(x);\n}\n\"\"\"\nsm = pystan.StanModel(model_code=model)\n\ndat = {'N': coop.size,'coop':np.int32(coop),'fair':fair,'face':face+1,'sid':sid+1,'M':1326}\nseed=np.random.randint(2**16)\nfit=sm.sampling(data=dat,iter=5000,chains=4,thin=5,warmup=2000,n_jobs=4,seed=seed)\noutpars=['alpha','beta','delt','mu','sigma','r']\nprint pystan.misc._print_stanfit(fit,pars=outpars,digits_summary=2)\nw= fit.extract()\nfor op in outpars: np.save(op,w[op])\ndel w\ndel fit", "Once we have found the appropriate model we can look at the contrasts of interest. In our case we are interested in $\\mu_\\mathrm{smile}-\\mu_\\mathrm{neutral}$ and $\\mu_\\mathrm{angry}-\\mu_\\mathrm{neutral}$. The former is TODO while the latter is TODO. This is the effect size that we should be interested in. Note how the estimate goes beyond simple contrast that just computes the mean difference between the angry and neutral condition. The model is a device that allows us to extract the quantity from the data. Our model takes care of missing values, of imballanced groups (due to missing values). It accounts for the ceiling and floor effects. In Anova that assumes linear trends these showed up as a significant correlation. We saw no such interaction. The model also took care of the correlation of subject's performance in different groups. On the other hand it seems rather redundant to estimate the $\\alpha$ and $\\beta$ parameters. In this these did not differ noticably between the conditions but in other context they may provide interesting insight. 
Curiously, even in our context we can find a use for them. $\\beta$ expresses the increase in acceptance rate for each unit of money offered. We are mostly interested in the increase in the range between 2c and 6c. Here the curve is approximately linear with slope $\\beta/4$.\nWe can use this fact to ask the following question. The paper by Mussel et al. bears the title \"What is the value of a smile\". The implication of the title is that a smile has a similar influence on the acceptance rate as a sum of money does. We can reformulate this question in the form of the following counterfactual: what is the sum of money we would need to offer a subject who saw a neutral face so that his acceptance rate reaches the level it would have had if he had seen a smiling face? This quantity is given by $4(\\mu_\\mathrm{smile}-\\mu_\\mathrm{neutral})/\\beta$. The value of the smile is . This result is valid for offers in the range where the logistic function is approximately linear (i.e. between 2c and 6c). This is the middle range of the tested values and obviously the range in which the authors were interested. If we assume that people behave similarly whether the total sum is 14c, 14 EUR, 14 or in fact any sum, then we can express the value of a smile as \\% of the equal share. This quantity is informative. Compare it to the $\\eta$. It doesn't depend on the number of conditions. For instance, the estimate of the value of a smile is independent of the fact that we included an angry condition in the experiment. The quantity is directly expressed in units we understand well (compare this to a squared, unitless quantity). Finally, the quantity has a causal interpretation. This is an important fact to which I will return in later posts." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
statsmodels/statsmodels.github.io
v0.13.1/examples/notebooks/generated/robust_models_1.ipynb
bsd-3-clause
[ "M-Estimators for Robust Linear Modeling", "%matplotlib inline\n\nfrom statsmodels.compat import lmap\nimport numpy as np\nfrom scipy import stats\nimport matplotlib.pyplot as plt\n\nimport statsmodels.api as sm", "An M-estimator minimizes the function \n\n$$Q(e_i, \\rho) = \\sum_i~\\rho \\left (\\frac{e_i}{s}\\right )$$\nwhere $\\rho$ is a symmetric function of the residuals \n\nThe effect of $\\rho$ is to reduce the influence of outliers\n$s$ is an estimate of scale. \n\nThe robust estimates $\\hat{\\beta}$ are computed by the iteratively re-weighted least squares algorithm\n\n\nWe have several choices available for the weighting functions to be used", "norms = sm.robust.norms\n\ndef plot_weights(support, weights_func, xlabels, xticks):\n fig = plt.figure(figsize=(12, 8))\n ax = fig.add_subplot(111)\n ax.plot(support, weights_func(support))\n ax.set_xticks(xticks)\n ax.set_xticklabels(xlabels, fontsize=16)\n ax.set_ylim(-0.1, 1.1)\n return ax", "Andrew's Wave", "help(norms.AndrewWave.weights)\n\na = 1.339\nsupport = np.linspace(-np.pi * a, np.pi * a, 100)\nandrew = norms.AndrewWave(a=a)\nplot_weights(\n support, andrew.weights, [\"$-\\pi*a$\", \"0\", \"$\\pi*a$\"], [-np.pi * a, 0, np.pi * a]\n)", "Hampel's 17A", "help(norms.Hampel.weights)\n\nc = 8\nsupport = np.linspace(-3 * c, 3 * c, 1000)\nhampel = norms.Hampel(a=2.0, b=4.0, c=c)\nplot_weights(support, hampel.weights, [\"3*c\", \"0\", \"3*c\"], [-3 * c, 0, 3 * c])", "Huber's t", "help(norms.HuberT.weights)\n\nt = 1.345\nsupport = np.linspace(-3 * t, 3 * t, 1000)\nhuber = norms.HuberT(t=t)\nplot_weights(support, huber.weights, [\"-3*t\", \"0\", \"3*t\"], [-3 * t, 0, 3 * t])", "Least Squares", "help(norms.LeastSquares.weights)\n\nsupport = np.linspace(-3, 3, 1000)\nlst_sq = norms.LeastSquares()\nplot_weights(support, lst_sq.weights, [\"-3\", \"0\", \"3\"], [-3, 0, 3])", "Ramsay's Ea", "help(norms.RamsayE.weights)\n\na = 0.3\nsupport = np.linspace(-3 * a, 3 * a, 1000)\nramsay = norms.RamsayE(a=a)\nplot_weights(support, ramsay.weights, [\"-3*a\", \"0\", \"3*a\"], [-3 * a, 0, 3 * a])", "Trimmed Mean", "help(norms.TrimmedMean.weights)\n\nc = 2\nsupport = np.linspace(-3 * c, 3 * c, 1000)\ntrimmed = norms.TrimmedMean(c=c)\nplot_weights(support, trimmed.weights, [\"-3*c\", \"0\", \"3*c\"], [-3 * c, 0, 3 * c])", "Tukey's Biweight", "help(norms.TukeyBiweight.weights)\n\nc = 4.685\nsupport = np.linspace(-3 * c, 3 * c, 1000)\ntukey = norms.TukeyBiweight(c=c)\nplot_weights(support, tukey.weights, [\"-3*c\", \"0\", \"3*c\"], [-3 * c, 0, 3 * c])", "Scale Estimators\n\nRobust estimates of the location", "x = np.array([1, 2, 3, 4, 500])", "The mean is not a robust estimator of location", "x.mean()", "The median, on the other hand, is a robust estimator with a breakdown point of 50%", "np.median(x)", "Analogously for the scale\nThe standard deviation is not robust", "x.std()", "Median Absolute Deviation\n$$ median_i |X_i - median_j(X_j)|) $$\nStandardized Median Absolute Deviation is a consistent estimator for $\\hat{\\sigma}$\n$$\\hat{\\sigma}=K \\cdot MAD$$\nwhere $K$ depends on the distribution. For the normal distribution for example,\n$$K = \\Phi^{-1}(.75)$$", "stats.norm.ppf(0.75)\n\nprint(x)\n\nsm.robust.scale.mad(x)\n\nnp.array([1, 2, 3, 4, 5.0]).std()", "Another robust estimator of scale is the Interquartile Range (IQR)\n$$\\left(\\hat{X}{0.75} - \\hat{X}{0.25}\\right),$$\nwhere $\\hat{X}_{p}$ is the sample p-th quantile and $K$ depends on the distribution. 
\nThe standardized IQR, given by $K \\cdot \\text{IQR}$ for\n$$K = \\frac{1}{\\Phi^{-1}(.75) - \\Phi^{-1}(.25)} \\approx 0.74,$$\nis a consistent estimator of the standard deviation for normal data.", "sm.robust.scale.iqr(x)", "The IQR is less robust than the MAD in the sense that it has a lower breakdown point: it can withstand 25\\% outlying observations before being completely ruined, whereas the MAD can withstand 50\\% outlying observations. However, the IQR is better suited for asymmetric distributions.\nYet another robust estimator of scale is the $Q_n$ estimator, introduced in Rousseeuw & Croux (1993), 'Alternatives to the Median Absolute Deviation'. Then $Q_n$ estimator is given by\n$$\nQ_n = K \\left\\lbrace \\vert X_{i} - X_{j}\\vert : i<j\\right\\rbrace_{(h)}\n$$\nwhere $h\\approx (1/4){{n}\\choose{2}}$ and $K$ is a given constant. In words, the $Q_n$ estimator is the normalized $h$-th order statistic of the absolute differences of the data. The normalizing constant $K$ is usually chosen as 2.219144, to make the estimator consistent for the standard deviation in the case of normal data. The $Q_n$ estimator has a 50\\% breakdown point and a 82\\% asymptotic efficiency at the normal distribution, much higher than the 37\\% efficiency of the MAD.", "sm.robust.scale.qn_scale(x)", "The default for Robust Linear Models is MAD\nanother popular choice is Huber's proposal 2", "np.random.seed(12345)\nfat_tails = stats.t(6).rvs(40)\n\nkde = sm.nonparametric.KDEUnivariate(fat_tails)\nkde.fit()\nfig = plt.figure(figsize=(12, 8))\nax = fig.add_subplot(111)\nax.plot(kde.support, kde.density)\n\nprint(fat_tails.mean(), fat_tails.std())\n\nprint(stats.norm.fit(fat_tails))\n\nprint(stats.t.fit(fat_tails, f0=6))\n\nhuber = sm.robust.scale.Huber()\nloc, scale = huber(fat_tails)\nprint(loc, scale)\n\nsm.robust.mad(fat_tails)\n\nsm.robust.mad(fat_tails, c=stats.t(6).ppf(0.75))\n\nsm.robust.scale.mad(fat_tails)", "Duncan's Occupational Prestige data - M-estimation for outliers", "from statsmodels.graphics.api import abline_plot\nfrom statsmodels.formula.api import ols, rlm\n\nprestige = sm.datasets.get_rdataset(\"Duncan\", \"carData\", cache=True).data\n\nprint(prestige.head(10))\n\nfig = plt.figure(figsize=(12, 12))\nax1 = fig.add_subplot(211, xlabel=\"Income\", ylabel=\"Prestige\")\nax1.scatter(prestige.income, prestige.prestige)\nxy_outlier = prestige.loc[\"minister\", [\"income\", \"prestige\"]]\nax1.annotate(\"Minister\", xy_outlier, xy_outlier + 1, fontsize=16)\nax2 = fig.add_subplot(212, xlabel=\"Education\", ylabel=\"Prestige\")\nax2.scatter(prestige.education, prestige.prestige)\n\nols_model = ols(\"prestige ~ income + education\", prestige).fit()\nprint(ols_model.summary())\n\ninfl = ols_model.get_influence()\nstudent = infl.summary_frame()[\"student_resid\"]\nprint(student)\n\nprint(student.loc[np.abs(student) > 2])\n\nprint(infl.summary_frame().loc[\"minister\"])\n\nsidak = ols_model.outlier_test(\"sidak\")\nsidak.sort_values(\"unadj_p\", inplace=True)\nprint(sidak)\n\nfdr = ols_model.outlier_test(\"fdr_bh\")\nfdr.sort_values(\"unadj_p\", inplace=True)\nprint(fdr)\n\nrlm_model = rlm(\"prestige ~ income + education\", prestige).fit()\nprint(rlm_model.summary())\n\nprint(rlm_model.weights)", "Hertzprung Russell data for Star Cluster CYG 0B1 - Leverage Points\n\nData is on the luminosity and temperature of 47 stars in the direction of Cygnus.", "dta = sm.datasets.get_rdataset(\"starsCYG\", \"robustbase\", cache=True).data\n\nfrom matplotlib.patches import Ellipse\n\nfig = 
plt.figure(figsize=(12, 8))\nax = fig.add_subplot(\n 111,\n xlabel=\"log(Temp)\",\n ylabel=\"log(Light)\",\n title=\"Hertzsprung-Russell Diagram of Star Cluster CYG OB1\",\n)\nax.scatter(*dta.values.T)\n# highlight outliers\ne = Ellipse((3.5, 6), 0.2, 1, alpha=0.25, color=\"r\")\nax.add_patch(e)\nax.annotate(\n \"Red giants\",\n xy=(3.6, 6),\n xytext=(3.8, 6),\n arrowprops=dict(facecolor=\"black\", shrink=0.05, width=2),\n horizontalalignment=\"left\",\n verticalalignment=\"bottom\",\n clip_on=True, # clip to the axes bounding box\n fontsize=16,\n)\n# annotate these with their index\nfor i, row in dta.loc[dta[\"log.Te\"] < 3.8].iterrows():\n ax.annotate(i, row, row + 0.01, fontsize=14)\nxlim, ylim = ax.get_xlim(), ax.get_ylim()\n\nfrom IPython.display import Image\n\nImage(filename=\"star_diagram.png\")\n\ny = dta[\"log.light\"]\nX = sm.add_constant(dta[\"log.Te\"], prepend=True)\nols_model = sm.OLS(y, X).fit()\nabline_plot(model_results=ols_model, ax=ax)\n\nrlm_mod = sm.RLM(y, X, sm.robust.norms.TrimmedMean(0.5)).fit()\nabline_plot(model_results=rlm_mod, ax=ax, color=\"red\")", "Why? Because M-estimators are not robust to leverage points.", "infl = ols_model.get_influence()\n\nh_bar = 2 * (ols_model.df_model + 1) / ols_model.nobs\nhat_diag = infl.summary_frame()[\"hat_diag\"]\nhat_diag.loc[hat_diag > h_bar]\n\nsidak2 = ols_model.outlier_test(\"sidak\")\nsidak2.sort_values(\"unadj_p\", inplace=True)\nprint(sidak2)\n\nfdr2 = ols_model.outlier_test(\"fdr_bh\")\nfdr2.sort_values(\"unadj_p\", inplace=True)\nprint(fdr2)", "Let's delete that line", "l = ax.lines[-1]\nl.remove()\ndel l\n\nweights = np.ones(len(X))\nweights[X[X[\"log.Te\"] < 3.8].index.values - 1] = 0\nwls_model = sm.WLS(y, X, weights=weights).fit()\nabline_plot(model_results=wls_model, ax=ax, color=\"green\")", "MM estimators are good for this type of problem, unfortunately, we do not yet have these yet. \nIt's being worked on, but it gives a good excuse to look at the R cell magics in the notebook.", "yy = y.values[:, None]\nxx = X[\"log.Te\"].values[:, None]", "Note: The R code and the results in this notebook has been converted to markdown so that R is not required to build the documents. 
The R results in the notebook were computed using R 3.5.1 and robustbase 0.93.\n```ipython\n%load_ext rpy2.ipython\n%R library(robustbase)\n%Rpush yy xx\n%R mod <- lmrob(yy ~ xx);\n%R params <- mod$coefficients;\n%Rpull params\n```\nipython\n%R print(mod)\nCall:\nlmrob(formula = yy ~ xx)\n \\--&gt; method = \"MM\"\nCoefficients:\n(Intercept) xx \n -4.969 2.253", "params = [-4.969387980288108, 2.2531613477892365] # Computed using R\nprint(params[0], params[1])\n\nabline_plot(intercept=params[0], slope=params[1], ax=ax, color=\"red\")", "Exercise: Breakdown points of M-estimator", "np.random.seed(12345)\nnobs = 200\nbeta_true = np.array([3, 1, 2.5, 3, -4])\nX = np.random.uniform(-20, 20, size=(nobs, len(beta_true) - 1))\n# stack a constant in front\nX = sm.add_constant(X, prepend=True) # np.c_[np.ones(nobs), X]\nmc_iter = 500\ncontaminate = 0.25 # percentage of response variables to contaminate\n\nall_betas = []\nfor i in range(mc_iter):\n y = np.dot(X, beta_true) + np.random.normal(size=200)\n random_idx = np.random.randint(0, nobs, size=int(contaminate * nobs))\n y[random_idx] = np.random.uniform(-750, 750)\n beta_hat = sm.RLM(y, X).fit().params\n all_betas.append(beta_hat)\n\nall_betas = np.asarray(all_betas)\nse_loss = lambda x: np.linalg.norm(x, ord=2) ** 2\nse_beta = lmap(se_loss, all_betas - beta_true)", "Squared error loss", "np.array(se_beta).mean()\n\nall_betas.mean(0)\n\nbeta_true\n\nse_loss(all_betas.mean(0) - beta_true)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
google/fhir
examples/gcp_datalab/notebooks/2_deploy_and_run_ml_model_to_predict_los.ipynb
apache-2.0
[ "<h1> Structured Machine Learning using Tensorflow, Google Cloud Datalab and Cloud ML</h1>\n<hr />\n<b>This notebook demonstrates a process to deploy a ML model to CloudML. It leverages a pre-built machine learning model to predict Length of Stay in ED and inpatient care settings. Finally it runs an inference job on the CloudML Engine to render predictions. This is step 2 of 2.</b>\n<h3>\n<br />\n<ol>\n<li> Setup Environment </li> <br />\n<li> Deploy and Run a ML Model on CloudML </li>\n</ol></h3>\n<hr />\n\n<h2> 1. Setup Environment</h2>\n<ul>\n <li>Initialize environment variables for your environment</li>\n <li>Please change the values of the following before executing rest of the cells in this notebook: <br />\n <b>1. GCP_PROJECT and </b> <br />\n <b>2. GCS_BUCKET </b> <br />\n <b>3. GCS_REGION </b>\n </li>\n</ul>", "import os\nGCP_PROJECT = 'dp-workspace'\nGCS_BUCKET = 'gs://cluster19-bkt'\nGCS_REGION = 'us-central1'\nos.putenv(\"REGION\", GCS_REGION)\nTF_RECORD_SEQEX = GCS_BUCKET+'/synthea/serv/seqex*'\nos.putenv(\"SEQEX_IN_GCS\", TF_RECORD_SEQEX)\nMODEL_PATH = GCS_BUCKET+'/synthea/model/'\nos.putenv(\"MODEL_IN_GCS\", MODEL_PATH+\"*\")\nSAVED_MODEL_PATH = MODEL_PATH + 'export'\nos.putenv(\"SAVED_MODEL_IN_GCS\", SAVED_MODEL_PATH+\"*\")\nSERVING_DATASET = GCS_BUCKET+'/synthea/serv/seqex-00002-of-00003.tfrecords'\nos.putenv(\"SERVING_DATASET\", SERVING_DATASET)\nINFERENCE_PATH = MODEL_PATH + 'infer'\nos.putenv(\"INFERENCE_PATH\", INFERENCE_PATH)\nos.putenv(\"MODEL_NAME\", \"tf_fhir_los\")", "<b>Import dependencies. </b>", "# from apache_beam.options.pipeline_options import PipelineOptions\n# from apache_beam.options.pipeline_options import GoogleCloudOptions\n# from apache_beam.options.pipeline_options import StandardOptions\n# import apache_beam as beam\nfrom tensorflow.core.example import example_pb2\nimport tensorflow as tf\nimport time\n\nfrom proto import version_config_pb2\nfrom proto.stu3 import fhirproto_extensions_pb2\nfrom proto.stu3 import resources_pb2\n\nfrom google.protobuf import text_format\nfrom py.google.fhir.labels import label\nfrom py.google.fhir.labels import bundle_to_label\nfrom py.google.fhir.seqex import bundle_to_seqex\nfrom py.google.fhir.models import model\nfrom py.google.fhir.models.model import make_estimator", "<b>Optionally, enable logging for debugging.</b>", "import logging\nlogger = logging.getLogger()\n#logger.setLevel(logging.INFO)\nlogger.setLevel(logging.ERROR)", "<b> Previous step saved Sequence Examples into GCS. Let's examine file size and location of the Sequence Examples we will use of the inference. </b>", "%bash\ngsutil ls -l ${SEQEX_IN_GCS}", "<h2> 2. Deploy and Run ML Model on Cloud ML</h2>\n<ul>\n <li>A pre-trained ML Model which was exported to GCS in step 1 will be deployed to Cloud ML Serving.</li>\n</ul>\n<b>2a. Let's start with exporting our model for serving.<b>", "from py.google.fhir.models.model import get_serving_input_fn\nhparams = model.create_hparams()\ntime_crossed_features = [\n cross.split(':') for cross in hparams.time_crossed_features if cross\n ]\nLABEL_VALUES = ['less_or_equal_3', '3_7', '7_14', 'above_14']\nestimator = make_estimator(hparams, LABEL_VALUES, MODEL_PATH)\nserving_input_fn = get_serving_input_fn(hparams.dedup, hparams.time_windows, hparams.include_age, hparams.categorical_context_features, hparams.sequence_features, time_crossed_features)\nexport_dir = estimator.export_savedmodel(SAVED_MODEL_PATH, serving_input_fn)\nos.putenv(\"MODEL_BINARY\", export_dir)", "<b>2b. 
List all the models deployed currently in the Cloud ML Engine</b>", "%%bash\ngcloud ml-engine models list", "<b>2c. Optionally run following cell to delete previously deployed model. </b>", "%%bash\ngcloud ml-engine versions delete v1 --model ${MODEL_NAME} -q\ngcloud ml-engine models delete $MODEL_NAME -q", "<b>2d. Run following cell to create a new model if it does not exist </b>", "%%bash\ngcloud ml-engine models create $MODEL_NAME --regions=$REGION", "<b> 2e. List versions of the Model</b>", "%%bash\ngcloud ml-engine versions list --model ${MODEL_NAME}", "<b> 2f. Run following cell to create a new version of the model. Increment the version number like v1, v2, v3 </b> <br />\nOptionally, you can delete a version using: <br />\ngcloud ml-engine versions delete v1 --model ${MODEL_NAME} -q", "%%bash\n#gcloud ml-engine versions delete v1 --model ${MODEL_NAME} -q\ngcloud ml-engine versions create v1 \\\n --model ${MODEL_NAME} \\\n --origin ${MODEL_BINARY} \\\n --runtime-version 1.12", "<b> 2g. Run an inference job on CloudML engine </b>", "%%bash\nINFER_JOB_NAME=\"job_inf_$(date +%Y%m%d_%H%M%S)\"\ngcloud ml-engine jobs submit prediction $INFER_JOB_NAME --model $MODEL_NAME --version v1 --data-format tf-record --region $REGION --input-paths $SERVING_DATASET --output-path $INFERENCE_PATH\n", "<b>You can check the status of the job and other information on <a href=\"https://console.cloud.google.com/mlengine/jobs\">GCP CloudML page</a> </b>\n<b> 2h. View the prediction (output) generated by the inference job </b>", "%%bash\ngsutil cat ${INFERENCE_PATH}/prediction.results-00000-of-00001", "<b>You can check the status of the job and other information on <a href=\"https://console.cloud.google.com/mlengine/jobs\">GCP CloudML page</a> </b>" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
GoogleCloudPlatform/cloud-sql-python-connector
samples/notebooks/postgres_python_connector.ipynb
apache-2.0
[ "# Copyright 2022 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.", "Connect to Cloud SQL using the Cloud SQL Python Connector\n\nThis notebook will be demonstrating how to connect and query data from a Cloud SQL database in an easy and efficient way all from within a jupyter style notebook! Let's have some fun!\n📒 Using this interactive notebook\nClick the run icons ▶️ of each section within this notebook.\n\n💡 Alternatively, you can run the currently selected cell with Ctrl + Enter (or ⌘ + Enter on a Mac).\n⚠️ To avoid any errors, wait for each section to finish in their order before clicking the next “run” icon.\n\nThis sample must be connected to a Google Cloud project, but nothing else is needed other than your Google Cloud project.\nYou can use an existing project. Alternatively, you can create a new Cloud project with cloud credits for free.\n🐍 Cloud SQL Python Connector\nTo connect and access our Cloud SQL database instance(s) we will leverage the Cloud SQL Python Connector.\nThe Cloud SQL Python Connector is a library that can be used alongside a database driver to allow users to easily connect to a Cloud SQL database without having to manually allowlist IP or manage SSL certificates. 🥳 🎉 🤩\n♥️ Benefits of Using a Connector\nUsing a Cloud SQL connector provides the following benefits:\n\n🔑 IAM Authorization: uses IAM permissions to control who/what can connect to your Cloud SQL instances.\n🔒 Improved Security: uses robust, updated TLS 1.3 encryption and identity verification between the client connector and the server-side proxy, independent of the database protocol.\n👍 Convenience: removes the requirement to use and distribute SSL certificates, as well as manage firewalls or source/destination IP addresses.\n🪪 IAM DB Authentication (optional): provides support for Cloud SQL’s automatic IAM DB AuthN feature.\n\n📱 Supported Dialects/Drivers\nGoogle Cloud SQL and the Python Connector currently support the following dialects of SQL: MySQL, PostgreSQL, and SQL Server.\nDepending on which dialect you are using for your relational database(s) the Python Connector will utilize a different database driver.\nSUPPORTED DRIVERS:\n\npymysql (MySQL) 🐬\npg8000 (PostgreSQL) 🐘\npytds (SQL Server) 🗄\n\nTherefore, depending on the dialect of your database you will need to switch to the corresponding notebook!\n📗 MySQL Notebook\n📘 PostgreSQL Notebook (this notebook)\n📕 SQL Server Notebook\n🚧 Getting Started\nThis notebook requires the following steps to be completed in order to successfully make Cloud SQL connections with the Cloud SQL Python Connector.\n🔐 Authenticate to Google Cloud within Colab\nAuthenticate to Google Cloud as the IAM user logged into this notebook in order to access your Google Cloud Project.", "from google.colab import auth\n\nauth.authenticate_user()", "🔗 Connect Your Google Cloud Project\nTime to connect your Google Cloud Project to this notebook so that you can leverage Google Cloud from within Colab. 
🏅 😀", "#@markdown Please fill in the value below with your GCP project ID and then run the cell.\n\n# Please fill in these values.\nproject_id = \"\" #@param {type:\"string\"}\n\n# Quick input validations.\nassert project_id, \"⚠️ Please provide a Google Cloud project ID\"\n\n# Configure gcloud.\n!gcloud config set project {project_id}", "☁ Configure Your Google Cloud Project\nConfigure the following in your Google Cloud Project.\n\nIAM principal (user, service account, etc.) with the\nCloud SQL Client role. \n\n\n🚨 The user logged into this notebook will be used as the IAM principal and will be granted the Cloud SQL Client role.", "# grant Cloud SQL Client role to authenticated user\ncurrent_user = !gcloud auth list --filter=status:ACTIVE --format=\"value(account)\"\n\n!gcloud projects add-iam-policy-binding {project_id} \\\n --member=user:{current_user[0]} \\\n --role=\"roles/cloudsql.client\"", "Enable the Cloud SQL Admin API within your project.", "# enable Cloud SQL Admin API\n!gcloud services enable sqladmin.googleapis.com", "☁️ Setting up Cloud SQL\nA Postgres Cloud SQL instance is required for the following stages of this notebook.\n💽 Create a Postgres Instance\nRunning the below cell will verify the existence of a Cloud SQL instance or create a new one if one does not exist.\n\n⏳ - Creating a Cloud SQL instance may take a few minutes.", "#@markdown Please fill in the both the Google Cloud region and name of your Cloud SQL instance. Once filled in, run the cell.\n\n# Please fill in these values.\nregion = \"us-central1\" #@param {type:\"string\"}\ninstance_name = \"\" #@param {type:\"string\"}\n\n# Quick input validations.\nassert region, \"⚠️ Please provide a Google Cloud region\"\nassert instance_name, \"⚠️ Please provide the name of your instance\"\n\n# check if Cloud SQL instance exists in the provided region\ndatabase_version = !gcloud sql instances describe {instance_name} --format=\"value(databaseVersion)\"\nif database_version[0].startswith(\"POSTGRES\"):\n print(\"Found existing Postgres Cloud SQL Instance!\")\nelse:\n print(\"Creating new Cloud SQL instance...\")\n password = input(\"Please provide a password to be used for 'postgres' database user: \")\n !gcloud sql instances create {instance_name} --database-version=POSTGRES_14 \\\n --region={region} --cpu=1 --memory=4GB --root-password={password} \\\n --database-flags=cloudsql.iam_authentication=On", "🎬 Create a Movies Database\nA movies database will be used in later steps when connecting to and querying a Cloud SQL database.\nTo create a movies database within your Cloud SQL instance run the below command:", "!gcloud sql databases create movies --instance={instance_name}", "🥷 Create Batman Database User\nTo create the batman database user that is used throughout the notebook, run the following gcloud command.", "!gcloud sql users create batman \\\n --instance={instance_name} \\\n --password=\"robin\"", "<img src='https://i.pinimg.com/originals/12/64/dd/1264dd5ff31fbc65c5edbb5e1a71830e.gif' class=\"center\"/>\n🐍 Python Connector Usage\nLet's now connect to Cloud SQL using the Python Connector! 🚀 ⭐ 🐍\n🎟 Configuring Credentials\nThe Cloud SQL Python Connector uses Application Default Credentials (ADC) strategy for resolving credentials. \n\n💡 Using the Python Connector in Cloud Run, App Engine, or Cloud Functions will automatically use the service account deployed with each service, allowing this step to be skipped. 
✅ \n\nPlease see the google.auth package documentation for more information on how these credentials are sourced.\nThis means setting default credentials was previously done for you when you ran:\n```python\nfrom google.colab import auth\nauth.authenticate_user()\n```\n💻 Install Code Dependencies\nIt is recommended to use the Connector alongside a library that can create connection pools, such as SQLAlchemy.\nThis will allow connections to remain open and be reused, reducing connection overhead and the number of connections needed.\nLet's pip install the Cloud SQL Python Connector as well as SQLAlchemy, using the below command.", "# install dependencies\nimport sys\n!{sys.executable} -m pip install cloud-sql-python-connector[\"pg8000\"] SQLAlchemy", "🐘 Connect to a Postgres Instance\nWe are now ready to connect to a Postgres instance using the Cloud SQL Python Connector! 🐍 ⭐ ☁\nLet's set some parameters that are needed to connect properly to a Cloud SQL instance:\n* INSTANCE_CONNECTION_NAME : The connection name to your Cloud SQL Instance, takes the form PROJECT_ID:REGION:INSTANCE_NAME.\n* DB_USER : The user that the connector will use to connect to the database.\n* DB_PASS : The password of the DB_USER.\n* DB_NAME : The name of the database on the Cloud SQL instance to connect to.", "# initialize parameters\nINSTANCE_CONNECTION_NAME = f\"{project_id}:{region}:{instance_name}\" # i.e. demo-project:us-central1:demo-instance\nprint(f\"Your instance connection name is: {INSTANCE_CONNECTION_NAME}\")\nDB_USER = \"batman\"\nDB_PASS = \"robin\"\nDB_NAME = \"movies\"", "✅ Basic Usage\nTo connect to Cloud SQL using the connector, initialize a Connector object and call its connect method with the proper input parameters.\nThe connect method takes in the parameters we previously defined, as well as a few additional parameters such as:\n* driver: The name of the database driver to connect with.\n* ip_type (optional): The IP type (public or private) used to connect. IP types can be either IPTypes.PUBLIC or IPTypes.PRIVATE. (Example)\n* enable_iam_auth (optional): Boolean enabling IAM-based authentication. (Example)\nLet's show an example! 🤘 🙌", "from google.cloud.sql.connector import Connector\nimport sqlalchemy\n\n# initialize Connector object\nconnector = Connector()\n\n# function to return the database connection object\ndef getconn():\n conn = connector.connect(\n INSTANCE_CONNECTION_NAME,\n \"pg8000\",\n user=DB_USER,\n password=DB_PASS,\n db=DB_NAME\n )\n return conn\n\n# create connection pool with 'creator' argument to our connection object function\npool = sqlalchemy.create_engine(\n \"postgresql+pg8000://\",\n creator=getconn,\n)", "To use this connector with SQLAlchemy, we use the creator argument for sqlalchemy.create_engine.\nNow that we have established a connection pool, let's write a query!
🎉 📝", "# connect to connection pool\nwith pool.connect() as db_conn:\n # create ratings table in our movies database\n db_conn.execute(\n \"CREATE TABLE IF NOT EXISTS ratings \"\n \"( id SERIAL NOT NULL, title VARCHAR(255) NOT NULL, \"\n \"genre VARCHAR(255) NOT NULL, rating FLOAT NOT NULL, \"\n \"PRIMARY KEY (id));\"\n )\n # insert data into our ratings table\n insert_stmt = sqlalchemy.text(\n \"INSERT INTO ratings (title, genre, rating) VALUES (:title, :genre, :rating)\",\n )\n\n # insert entries into table\n db_conn.execute(insert_stmt, title=\"Batman Begins\", genre=\"Action\", rating=8.5)\n db_conn.execute(insert_stmt, title=\"Star Wars: Return of the Jedi\", genre=\"Action\", rating=9.1)\n db_conn.execute(insert_stmt, title=\"The Breakfast Club\", genre=\"Drama\", rating=8.3)\n\n # query and fetch ratings table\n results = db_conn.execute(\"SELECT * FROM ratings\").fetchall()\n\n # show results\n for row in results:\n print(row)", "You have successfully been able to connect to a Cloud SQL instance from this notebook and make a query. YOU DID IT! 🕺 🎊 💃\n<img src=https://media.giphy.com/media/MtHGs1yo4FFKrIs55L/giphy.gif />\nTo close the Connector object's background resources, call it's close() method at the end of your code as follows:", "# cleanup connector object\nconnector.close()", "🪪 IAM Database Authentication\nAutomatic IAM database authentication is supported for Postgres Cloud SQL instances. \n\n💡 This allows an IAM user to establish an authenticated connection to a Postgres database without having to set a password and enabling the enable_iam_auth parameter in the connector's connect method.\n🚨 If you are using a pre-existing Cloud SQL instance within this notebook you may need to configure Cloud SQL instance to allow IAM authentication by setting the cloudsql.iam_authentication database flag to On. \n(Cloud SQL instances created within this notebook already have it enabled)\n\nIAM principals wanting to use IAM authentication to connect to a Cloud SQL instance require the Cloud SQL Instance User and Cloud SQL Client IAM role.\nLet's add the Cloud SQL Instance User role to the IAM account logged into this notebook. (Client role previously granted)", "# add Cloud SQL Instance User role to current logged in IAM user\n!gcloud projects add-iam-policy-binding {project_id} \\\n --member=user:{current_user[0]} \\\n --role=\"roles/cloudsql.instanceUser\"", "Now the current IAM user can be added to the Cloud SQL instance as an IAM database user.", "# add current logged in IAM user to database\n!gcloud sql users create {current_user[0]} \\\n --instance={instance_name} \\\n --type=cloud_iam_user", "Finally, let's update our getconn function to connect to our Cloud SQL instance with IAM database authentication enabled.\n\n⚠️ The below sample is a limited example as it logs in to the Cloud SQL instance and outputs the current time. By default new IAM database users have no permissions on a Cloud SQL instance. To connect to specific tables and perform more complex queries, permissions must be granted at the database level. 
(Grant Database Privileges to the IAM user)", "from google.cloud.sql.connector import Connector\nimport sqlalchemy\n\n# IAM database user parameter (IAM user's email)\nIAM_USER = current_user[0]\n\n# initialize connector\nconnector = Connector()\n\n# getconn now using IAM user and requiring no password with IAM Auth enabled\ndef getconn():\n conn = connector.connect(\n INSTANCE_CONNECTION_NAME,\n \"pg8000\",\n user=IAM_USER,\n db=\"postgres\",\n enable_iam_auth=True\n )\n return conn\n\n# create connection pool\npool = sqlalchemy.create_engine(\n \"postgresql+pg8000://\",\n creator=getconn,\n)\n\n# connect to connection pool\nwith pool.connect() as db_conn:\n # get current datetime from database\n results = db_conn.execute(\"SELECT NOW()\").fetchone()\n\n # output time\n print(\"Current time: \", results[0])\n\n# cleanup connector\nconnector.close()", "Success! You were able to connect to Cloud SQL as an IAM-authenticated user using the Cloud SQL Python Connector! 🍾 👏 🏆\n<img src=\"https://media.giphy.com/media/YTbZzCkRQCEJa/giphy.gif\" />\n🗑 Clean Up Notebook Resources\nMake sure to delete your Cloud SQL instance when you are finished with this notebook to avoid further costs. 💸 💰", "# delete Cloud SQL instance\n!gcloud sql instances delete {instance_name}", "✍ Appendix\nAdditional information provided for connecting to a Cloud SQL instance using private IP connections.\n🔒 Using Private IP Connections\nBy default the connector connects to the Cloud SQL instance database using a Public IP address.\nPrivate IP connections are also supported by the connector and can be easily enabled through the ip_type parameter in the connector's connect method.\n\n⚠️ To connect via Private IP, the Cloud SQL instance being connected to must have a Private IP address configured within a VPC Network. (How to Configure Private IP)\n🚫 The below cell is a working sample but will not work within this notebook as the notebook is not within your VPC Network! The cell should be copied into an environment (Cloud Run, Cloud Functions, App Engine etc.) that has access to the VPC Network.\nConnecting Cloud Run to a VPC Network\n\nLet's update our getconn function to connect to our Cloud SQL instance with Private IP.", "from google.cloud.sql.connector import Connector, IPTypes\nimport sqlalchemy\n\n# initialize connector\nconnector = Connector()\n\n# getconn now set to private IP\ndef getconn():\n conn = connector.connect(\n INSTANCE_CONNECTION_NAME, # <PROJECT-ID>:<REGION>:<INSTANCE-NAME>\n \"pg8000\",\n user=DB_USER,\n password=DB_PASS,\n db=DB_NAME,\n ip_type=IPTypes.PRIVATE\n )\n return conn\n\n# create connection pool\npool = sqlalchemy.create_engine(\n \"postgresql+pg8000://\",\n creator=getconn,\n)\n\n# connect to connection pool\nwith pool.connect() as db_conn:\n # query database and fetch results\n results = db_conn.execute(\"SELECT * FROM ratings\").fetchall()\n\n # show results\n for row in results:\n print(row)\n\n# cleanup connector\nconnector.close()" ]
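The IAM-authenticated sample above can only run `SELECT NOW()` because, as noted, a newly added IAM database user starts with no table privileges. Below is a minimal sketch of how read access on the `ratings` table might be granted, assuming the password-based `pool` from the Basic Usage section (connected as `batman`, the owner of `ratings`) is still available; the exact privileges to grant depend on your own schema, so treat this as an illustration rather than part of the official walkthrough.

```python
# Hypothetical example: grant the IAM database user read access to `ratings`.
# Assumes `pool` is the SQLAlchemy engine created earlier with the `batman`
# user and that `current_user[0]` holds the IAM principal's email address.
iam_db_user = current_user[0]

with pool.connect() as db_conn:
    # Postgres role names containing "@" must be double-quoted.
    db_conn.execute('GRANT SELECT ON ratings TO "{}"'.format(iam_db_user))
```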
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
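A small follow-on sketch for the notebook above: once the `pool` engine and the `ratings` table exist, query results can also be pulled straight into a pandas DataFrame for further analysis. This is not part of the original walkthrough; it assumes pandas is available (it is preinstalled in Colab) and that `pool` still points at the `movies` database.

```python
# Hypothetical follow-up: load the ratings table into pandas via the pool.
import pandas as pd

ratings_df = pd.read_sql("SELECT * FROM ratings", pool)
print(ratings_df.sort_values("rating", ascending=False).head())
```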
quantopian/research_public
notebooks/data/psychsignal.stocktwits/notebook.ipynb
apache-2.0
[ "PsychSignal: StockTwits Trader Mood (All Fields)\nIn this notebook, we'll take a look at PsychSignal's StockTwits Trader Mood (All Fields) dataset, available on the Quantopian Store. This dataset spans 2009 through the current day, and documents the mood of traders based on their messages.\nNotebook Contents\nThere are two ways to access the data and you'll find both of them listed below. Just click on the section you'd like to read through.\n\n<a href='#interactive'><strong>Interactive overview</strong></a>: This is only available on Research and uses blaze to give you access to large amounts of data. Recommended for exploration and plotting.\n<a href='#pipeline'><strong>Pipeline overview</strong></a>: Data is made available through pipeline which is available on both the Research & Backtesting environment. Recommended for custom factor development and moving back & forth between research/backtesting.\n\nFree samples and limits\nOne key caveat: we limit the number of results returned from any given expression to 10,000 to protect against runaway memory usage. To be clear, you have access to all the data server side. We are limiting the size of the responses back from Blaze.\nThere is a free version of this dataset as well as a paid one. The free sample includes data until 2 months prior to the current date.\nTo access the most up-to-date values for this data set for trading a live algorithm (as with other partner sets), you need to purchase acess to the full set.\nWith preamble in place, let's get started:\n<a id='interactive'></a>\nInteractive Overview\nAccessing the data with Blaze and Interactive on Research\nPartner datasets are available on Quantopian Research through an API service known as Blaze. Blaze provides the Quantopian user with a convenient interface to access very large datasets, in an interactive, generic manner.\nBlaze provides an important function for accessing these datasets. Some of these sets are many millions of records. Bringing that data directly into Quantopian Research directly just is not viable. So Blaze allows us to provide a simple querying interface and shift the burden over to the server side.\nIt is common to use Blaze to reduce your dataset in size, convert it over to Pandas and then to use Pandas for further computation, manipulation and visualization.\nHelpful links:\n* Query building for Blaze\n* Pandas-to-Blaze dictionary\n* SQL-to-Blaze dictionary.\nOnce you've limited the size of your Blaze object, you can convert it to a Pandas DataFrames using:\n\nfrom odo import odo\nodo(expr, pandas.DataFrame)\n\nTo see how this data can be used in your algorithm, search for the Pipeline Overview section of this notebook or head straight to <a href='#pipeline'>Pipeline Overview</a>", "# import the free sample of the dataset\nfrom quantopian.interactive.data.psychsignal import stocktwits_free as dataset\n\n# or if you want to import the full dataset, use:\n# from quantopian.interactive.data.psychsignal import stocktwits\n\n# import data operations\nfrom odo import odo\n# import other libraries we will use\nimport pandas as pd\nimport matplotlib.pyplot as plt\n\n# Let's use blaze to understand the data a bit using Blaze dshape()\ndataset.dshape\n\n# And how many rows are there?\n# N.B. we're using a Blaze function to do this, not len()\ndataset.count()\n\n# Let's see what the data looks like. We'll grab the first three rows.\ndataset[:3]", "There are two versions of each data set from PsychSignal. 
A simple version with fewer fields and a full version with more fields. This is a basic data set with fewer fields.\nLet's go over the columns:\n- asof_date: The date to which this data applies.\n- symbol: stock ticker symbol of the affected company.\n- source: the same value for all records in this data set.\n- bull_scored_messages: total count of bullish sentiment messages scored by PsychSignal's algorithm.\n- bear_scored_messages: total count of bearish sentiment messages scored by PsychSignal's algorithm.\n- bullish_intensity: score for each message's language for the strength of the bullishness present in the messages on a 0-4 scale. 0 indicates no bullish sentiment measured, 4 indicates strongest bullish sentiment measured. 4 is rare.\n- bearish_intensity: score for each message's language for the strength of the bearishness present in the messages on a 0-4 scale. 0 indicates no bearish sentiment measured, 4 indicates strongest bearish sentiment measured. 4 is rare.\n- total_scanned_messages: number of messages coming through PsychSignal's feeds and attributable to a symbol regardless of whether the PsychSignal sentiment engine can score them for bullish or bearish intensity.\n- timestamp: this is our timestamp on when we registered the data.\n- bull_minus_bear: subtracts the bearish intensity from the bullish intensity [BULL - BEAR] to provide an immediate net score.\n- bull_bear_msg_ratio: the ratio between bull scored messages and bear scored messages.\n- sid: the equity's unique identifier. Use this instead of the symbol.\nWe've done much of the data processing for you. Fields like timestamp and sid are standardized across all our Store Datasets, so the datasets are easy to combine. We have standardized the sid across all our equity databases.\nWe can select columns and rows with ease. Below, we'll fetch all rows for Apple (sid 24) and explore the scores a bit with a chart.", "# Filtering for AAPL\naapl = dataset[dataset.sid == 24]\naapl_df = odo(aapl.sort('asof_date'), pd.DataFrame)\nplt.plot(aapl_df.asof_date, aapl_df.bull_scored_messages, marker='.', linestyle='None', color='r')\nplt.plot(aapl_df.asof_date, pd.rolling_mean(aapl_df.bull_scored_messages, 30))\nplt.xlabel(\"As Of Date (asof_date)\")\nplt.ylabel(\"Count of Bull Messages\")\nplt.title(\"Count of Bullish Messages for AAPL\")\nplt.legend([\"Bull Messages - Single Day\", \"30 Day Rolling Average\"], loc=2)", "<a id='pipeline'></a>\nPipeline Overview\nAccessing the data in your algorithms & research\nThe only method for accessing partner data within algorithms running on Quantopian is via the pipeline API.
Different data sets work differently, but in the case of this data, you can add it to your pipeline as follows:\nImport the data set here\n\nfrom quantopian.pipeline.data.psychsignal import (\nstocktwits_free\n)\n\nThen in initialize() you could do something simple like adding the raw value of one of the fields to your pipeline:\n\npipe.add(stocktwits_free.total_scanned_messages.latest, 'total_scanned_messages')", "# Import necessary Pipeline modules\nfrom quantopian.pipeline import Pipeline\nfrom quantopian.research import run_pipeline\nfrom quantopian.pipeline.factors import AverageDollarVolume\n\n# For use in your algorithms\n# Using the full paid dataset in your pipeline algo\n# from quantopian.pipeline.data.psychsignal import stocktwits\n\n# Using the free sample in your pipeline algo\nfrom quantopian.pipeline.data.psychsignal import stocktwits_free ", "Now that we've imported the data, let's take a look at which fields are available for each dataset.\nYou'll find the dataset, the available fields, and the datatypes for each of those fields.", "print \"Here is the list of available fields per dataset:\"\nprint \"---------------------------------------------------\\n\"\n\ndef _print_fields(dataset):\n print \"Dataset: %s\\n\" % dataset.__name__\n print \"Fields:\"\n for field in list(dataset.columns):\n print \"%s - %s\" % (field.name, field.dtype)\n print \"\\n\"\n\nfor data in (stocktwits_free ,):\n _print_fields(data)\n\n\nprint \"---------------------------------------------------\\n\"", "Now that we know what fields we have access to, let's see what this data looks like when we run it through Pipeline.\nThis is constructed the same way as you would in the backtester. For more information on using Pipeline in Research view this thread:\nhttps://www.quantopian.com/posts/pipeline-in-research-build-test-and-visualize-your-factors-and-filters", "# Let's see what this data looks like when we run it through Pipeline\n# This is constructed the same way as you would in the backtester.
For more information\n# on using Pipeline in Research view this thread:\n# https://www.quantopian.com/posts/pipeline-in-research-build-test-and-visualize-your-factors-and-filters\npipe = Pipeline()\n \npipe.add(stocktwits_free.total_scanned_messages.latest,\n 'total_scanned_messages')\npipe.add(stocktwits_free.bear_scored_messages .latest,\n 'bear_scored_messages ')\npipe.add(stocktwits_free.bull_scored_messages .latest,\n 'bull_scored_messages ')\npipe.add(stocktwits_free.bull_bear_msg_ratio .latest,\n 'bull_bear_msg_ratio ')\n\n# Setting some basic liquidity strings (just for good habit)\ndollar_volume = AverageDollarVolume(window_length=20)\ntop_1000_most_liquid = dollar_volume.rank(ascending=False) < 1000\n\npipe.set_screen(top_1000_most_liquid &\n (stocktwits_free.total_scanned_messages.latest>20))\n\n# The show_graph() method of pipeline objects produces a graph to show how it is being calculated.\npipe.show_graph(format='png')\n\n# run_pipeline will show the output of your pipeline\npipe_output = run_pipeline(pipe, start_date='2013-11-01', end_date='2013-11-25')\npipe_output", "Taking what we've seen from above, let's see how we'd move that into the backtester.", "# This section is only importable in the backtester\nfrom quantopian.algorithm import attach_pipeline, pipeline_output\n\n# General pipeline imports\nfrom quantopian.pipeline import Pipeline\nfrom quantopian.pipeline.factors import AverageDollarVolume\n\n# Import the datasets available\n# For use in your algorithms\n# Using the full paid dataset in your pipeline algo\n# from quantopian.pipeline.data.psychsignal import stocktwits\n\n# Using the free sample in your pipeline algo\nfrom quantopian.pipeline.data.psychsignal import stocktwits_free\n\ndef make_pipeline():\n # Create our pipeline\n pipe = Pipeline()\n \n # Screen out penny stocks and low liquidity securities.\n dollar_volume = AverageDollarVolume(window_length=20)\n is_liquid = dollar_volume.rank(ascending=False) < 1000\n \n # Create the mask that we will use for our percentile methods.\n base_universe = (is_liquid)\n\n # Add pipeline factors\n pipe.add(stocktwits_free.total_scanned_messages.latest,\n 'total_scanned_messages')\n pipe.add(stocktwits_free.bear_scored_messages .latest,\n 'bear_scored_messages ')\n pipe.add(stocktwits_free.bull_scored_messages .latest,\n 'bull_scored_messages ')\n pipe.add(stocktwits_free.bull_bear_msg_ratio .latest,\n 'bull_bear_msg_ratio ')\n\n # Set our pipeline screens\n pipe.set_screen(is_liquid)\n return pipe\n\ndef initialize(context):\n attach_pipeline(make_pipeline(), \"pipeline\")\n \ndef before_trading_start(context, data):\n results = pipeline_output('pipeline')", "Now you can take that and begin to use it as a building block for your algorithms, for more examples on how to do that you can visit our <a href='https://www.quantopian.com/posts/pipeline-factor-library-for-data'>data pipeline factor library</a>" ]
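Before wiring the pipeline into an algorithm, one way to sanity-check the factors is to aggregate the `run_pipeline` output per day with pandas. The sketch below is an illustrative addition, assuming the `pipe_output` DataFrame computed above is available; note that the column labels keep the trailing spaces that were used when the factors were added to the pipeline.

```python
# Hypothetical sanity check on the pipeline output computed above.
# pipe_output is indexed by (date, security), so aggregate over securities.
daily = pipe_output.groupby(level=0).sum()

# Net count of bullish minus bearish messages across the screened universe.
daily['net_bull_msgs'] = (daily['bull_scored_messages '] -
                          daily['bear_scored_messages '])

print(daily[['total_scanned_messages', 'net_bull_msgs']].head())
```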
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
ES-DOC/esdoc-jupyterhub
notebooks/test-institute-1/cmip6/models/sandbox-3/ocnbgchem.ipynb
gpl-3.0
[ "ES-DOC CMIP6 Model Properties - Ocnbgchem\nMIP Era: CMIP6\nInstitute: TEST-INSTITUTE-1\nSource ID: SANDBOX-3\nTopic: Ocnbgchem\nSub-Topics: Tracers. \nProperties: 65 (37 required)\nModel descriptions: Model description details\nInitialized From: -- \nNotebook Help: Goto notebook help page\nNotebook Initialised: 2018-02-15 16:54:43\nDocument Setup\nIMPORTANT: to be executed each time you run the notebook", "# DO NOT EDIT ! \nfrom pyesdoc.ipython.model_topic import NotebookOutput \n\n# DO NOT EDIT ! \nDOC = NotebookOutput('cmip6', 'test-institute-1', 'sandbox-3', 'ocnbgchem')", "Document Authors\nSet document authors", "# Set as follows: DOC.set_author(\"name\", \"email\") \n# TODO - please enter value(s)", "Document Contributors\nSpecify document contributors", "# Set as follows: DOC.set_contributor(\"name\", \"email\") \n# TODO - please enter value(s)", "Document Publication\nSpecify document publication status", "# Set publication status: \n# 0=do not publish, 1=publish. \nDOC.set_publication_status(0)", "Document Table of Contents\n1. Key Properties\n2. Key Properties --&gt; Time Stepping Framework --&gt; Passive Tracers Transport\n3. Key Properties --&gt; Time Stepping Framework --&gt; Biology Sources Sinks\n4. Key Properties --&gt; Transport Scheme\n5. Key Properties --&gt; Boundary Forcing\n6. Key Properties --&gt; Gas Exchange\n7. Key Properties --&gt; Carbon Chemistry\n8. Tracers\n9. Tracers --&gt; Ecosystem\n10. Tracers --&gt; Ecosystem --&gt; Phytoplankton\n11. Tracers --&gt; Ecosystem --&gt; Zooplankton\n12. Tracers --&gt; Disolved Organic Matter\n13. Tracers --&gt; Particules\n14. Tracers --&gt; Dic Alkalinity \n1. Key Properties\nOcean Biogeochemistry key properties\n1.1. Model Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of ocean biogeochemistry model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.model_overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.2. Model Name\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nName of ocean biogeochemistry model code (PISCES 2.0,...)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.model_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.3. Model Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nType of ocean biogeochemistry model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.model_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Geochemical\" \n# \"NPZD\" \n# \"PFT\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "1.4. Elemental Stoichiometry\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe elemental stoichiometry (fixed, variable, mix of the two)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.elemental_stoichiometry') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Fixed\" \n# \"Variable\" \n# \"Mix of both\" \n# TODO - please enter value(s)\n", "1.5. 
Elemental Stoichiometry Details\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe which elements have fixed/variable stoichiometry", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.elemental_stoichiometry_details') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.6. Prognostic Variables\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nList of all prognostic tracer variables in the ocean biogeochemistry component", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.prognostic_variables') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.7. Diagnostic Variables\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nList of all diagnotic tracer variables in the ocean biogeochemistry component", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.diagnostic_variables') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.8. Damping\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe any tracer damping used (such as artificial correction or relaxation to climatology,...)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.damping') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "2. Key Properties --&gt; Time Stepping Framework --&gt; Passive Tracers Transport\nTime stepping method for passive tracers transport in ocean biogeochemistry\n2.1. Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTime stepping framework for passive tracers", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.passive_tracers_transport.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"use ocean model transport time step\" \n# \"use specific time step\" \n# TODO - please enter value(s)\n", "2.2. Timestep If Not From Ocean\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nTime step for passive tracers (if different from ocean)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.passive_tracers_transport.timestep_if_not_from_ocean') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "3. Key Properties --&gt; Time Stepping Framework --&gt; Biology Sources Sinks\nTime stepping framework for biology sources and sinks in ocean biogeochemistry\n3.1. Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTime stepping framework for biology sources and sinks", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.biology_sources_sinks.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"use ocean model transport time step\" \n# \"use specific time step\" \n# TODO - please enter value(s)\n", "3.2. 
Timestep If Not From Ocean\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nTime step for biology sources and sinks (if different from ocean)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.biology_sources_sinks.timestep_if_not_from_ocean') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "4. Key Properties --&gt; Transport Scheme\nTransport scheme in ocean biogeochemistry\n4.1. Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nType of transport scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Offline\" \n# \"Online\" \n# TODO - please enter value(s)\n", "4.2. Scheme\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTransport scheme used", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Use that of ocean model\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "4.3. Use Different Scheme\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDecribe transport scheme if different than that of ocean model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.use_different_scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "5. Key Properties --&gt; Boundary Forcing\nProperties of biogeochemistry boundary forcing\n5.1. Atmospheric Deposition\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe how atmospheric deposition is modeled", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.atmospheric_deposition') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"from file (climatology)\" \n# \"from file (interannual variations)\" \n# \"from Atmospheric Chemistry model\" \n# TODO - please enter value(s)\n", "5.2. River Input\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe how river input is modeled", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.river_input') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"from file (climatology)\" \n# \"from file (interannual variations)\" \n# \"from Land Surface model\" \n# TODO - please enter value(s)\n", "5.3. Sediments From Boundary Conditions\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList which sediments are speficied from boundary condition", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.sediments_from_boundary_conditions') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "5.4. Sediments From Explicit Model\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList which sediments are speficied from explicit sediment model", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.sediments_from_explicit_model') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6. Key Properties --&gt; Gas Exchange\n*Properties of gas exchange in ocean biogeochemistry *\n6.1. CO2 Exchange Present\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs CO2 gas exchange modeled ?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CO2_exchange_present') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "6.2. CO2 Exchange Type\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe CO2 gas exchange", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CO2_exchange_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"OMIP protocol\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "6.3. O2 Exchange Present\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs O2 gas exchange modeled ?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.O2_exchange_present') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "6.4. O2 Exchange Type\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe O2 gas exchange", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.O2_exchange_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"OMIP protocol\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "6.5. DMS Exchange Present\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs DMS gas exchange modeled ?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.DMS_exchange_present') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "6.6. DMS Exchange Type\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nSpecify DMS gas exchange scheme type", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.DMS_exchange_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6.7. N2 Exchange Present\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs N2 gas exchange modeled ?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2_exchange_present') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "6.8. N2 Exchange Type\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nSpecify N2 gas exchange scheme type", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2_exchange_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6.9. 
N2O Exchange Present\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs N2O gas exchange modeled ?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2O_exchange_present') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "6.10. N2O Exchange Type\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nSpecify N2O gas exchange scheme type", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2O_exchange_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6.11. CFC11 Exchange Present\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs CFC11 gas exchange modeled ?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC11_exchange_present') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "6.12. CFC11 Exchange Type\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nSpecify CFC11 gas exchange scheme type", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC11_exchange_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6.13. CFC12 Exchange Present\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs CFC12 gas exchange modeled ?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC12_exchange_present') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "6.14. CFC12 Exchange Type\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nSpecify CFC12 gas exchange scheme type", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC12_exchange_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6.15. SF6 Exchange Present\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs SF6 gas exchange modeled ?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.SF6_exchange_present') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "6.16. SF6 Exchange Type\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nSpecify SF6 gas exchange scheme type", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.SF6_exchange_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6.17. 13CO2 Exchange Present\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs 13CO2 gas exchange modeled ?", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.13CO2_exchange_present') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "6.18. 13CO2 Exchange Type\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nSpecify 13CO2 gas exchange scheme type", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.13CO2_exchange_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6.19. 14CO2 Exchange Present\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs 14CO2 gas exchange modeled ?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.14CO2_exchange_present') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "6.20. 14CO2 Exchange Type\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nSpecify 14CO2 gas exchange scheme type", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.14CO2_exchange_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6.21. Other Gases\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nSpecify any other gas exchange", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.other_gases') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "7. Key Properties --&gt; Carbon Chemistry\nProperties of carbon chemistry biogeochemistry\n7.1. Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe how carbon chemistry is modeled", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"OMIP protocol\" \n# \"Other protocol\" \n# TODO - please enter value(s)\n", "7.2. PH Scale\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf NOT OMIP protocol, describe pH scale.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.pH_scale') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Sea water\" \n# \"Free\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "7.3. Constants If Not OMIP\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf NOT OMIP protocol, list carbon chemistry constants.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.constants_if_not_OMIP') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8. Tracers\nOcean biogeochemistry tracers\n8.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of tracers in ocean biogeochemistry", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.2. 
Sulfur Cycle Present\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs sulfur cycle modeled ?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.sulfur_cycle_present') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "8.3. Nutrients Present\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nList nutrient species present in ocean biogeochemistry model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.nutrients_present') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Nitrogen (N)\" \n# \"Phosphorous (P)\" \n# \"Silicium (S)\" \n# \"Iron (Fe)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "8.4. Nitrous Species If N\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nIf nitrogen present, list nitrous species.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.nitrous_species_if_N') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Nitrates (NO3)\" \n# \"Amonium (NH4)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "8.5. Nitrous Processes If N\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nIf nitrogen present, list nitrous processes.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.nitrous_processes_if_N') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Dentrification\" \n# \"N fixation\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "9. Tracers --&gt; Ecosystem\nEcosystem properties in ocean biogeochemistry\n9.1. Upper Trophic Levels Definition\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDefinition of upper trophic level (e.g. based on size) ?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.upper_trophic_levels_definition') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "9.2. Upper Trophic Levels Treatment\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDefine how upper trophic level are treated", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.upper_trophic_levels_treatment') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "10. Tracers --&gt; Ecosystem --&gt; Phytoplankton\nPhytoplankton properties in ocean biogeochemistry\n10.1. Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nType of phytoplankton", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"None\" \n# \"Generic\" \n# \"PFT including size based (specify both below)\" \n# \"Size based only (specify below)\" \n# \"PFT only (specify below)\" \n# TODO - please enter value(s)\n", "10.2. Pft\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nPhytoplankton functional types (PFT) (if applicable)", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.pft') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Diatoms\" \n# \"Nfixers\" \n# \"Calcifiers\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "10.3. Size Classes\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nPhytoplankton size classes (if applicable)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.size_classes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Microphytoplankton\" \n# \"Nanophytoplankton\" \n# \"Picophytoplankton\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "11. Tracers --&gt; Ecosystem --&gt; Zooplankton\nZooplankton properties in ocean biogeochemistry\n11.1. Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nType of zooplankton", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.zooplankton.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"None\" \n# \"Generic\" \n# \"Size based (specify below)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "11.2. Size Classes\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nZooplankton size classes (if applicable)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.zooplankton.size_classes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Microzooplankton\" \n# \"Mesozooplankton\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "12. Tracers --&gt; Disolved Organic Matter\nDisolved organic matter properties in ocean biogeochemistry\n12.1. Bacteria Present\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs there bacteria representation ?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.disolved_organic_matter.bacteria_present') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "12.2. Lability\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe treatment of lability in dissolved organic matter", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.disolved_organic_matter.lability') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"None\" \n# \"Labile\" \n# \"Semi-labile\" \n# \"Refractory\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "13. Tracers --&gt; Particules\nParticulate carbon properties in ocean biogeochemistry\n13.1. Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nHow is particulate carbon represented in ocean biogeochemistry?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.particules.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Diagnostic\" \n# \"Diagnostic (Martin profile)\" \n# \"Diagnostic (Balast)\" \n# \"Prognostic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "13.2. 
Types If Prognostic\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nIf prognostic, type(s) of particulate matter taken into account", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.particules.types_if_prognostic') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"POC\" \n# \"PIC (calcite)\" \n# \"PIC (aragonite\" \n# \"BSi\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "13.3. Size If Prognostic\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf prognostic, describe if a particule size spectrum is used to represent distribution of particules in water volume", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.particules.size_if_prognostic') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"No size spectrum used\" \n# \"Full size spectrum\" \n# \"Discrete size classes (specify which below)\" \n# TODO - please enter value(s)\n", "13.4. Size If Discrete\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf prognostic and discrete size, describe which size classes are used", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.particules.size_if_discrete') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "13.5. Sinking Speed If Prognostic\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf prognostic, method for calculation of sinking speed of particules", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.particules.sinking_speed_if_prognostic') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Constant\" \n# \"Function of particule size\" \n# \"Function of particule type (balast)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "14. Tracers --&gt; Dic Alkalinity\nDIC and alkalinity properties in ocean biogeochemistry\n14.1. Carbon Isotopes\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nWhich carbon isotopes are modelled (C13, C14)?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.carbon_isotopes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"C13\" \n# \"C14)\" \n# TODO - please enter value(s)\n", "14.2. Abiotic Carbon\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs abiotic carbon modelled ?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.abiotic_carbon') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "14.3. Alkalinity\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nHow is alkalinity modelled ?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.alkalinity') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Prognostic\" \n# \"Diagnostic)\" \n# TODO - please enter value(s)\n", "©2017 ES-DOC" ]
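Every property cell in this document follows the same pattern: a fixed `DOC.set_id(...)` call followed by one or more `DOC.set_value(...)` calls, with ENUM properties restricted to the listed valid choices and STRING properties taking free text. As a purely illustrative sketch (the values below are invented and do not describe any real model), filling in a few of the key properties might look like this:

```python
# Illustrative only -- invented values, not a real CMIP6 model description.
DOC.set_id('cmip6.ocnbgchem.key_properties.model_name')
DOC.set_value("Example ocean biogeochemistry model v1.0")

DOC.set_id('cmip6.ocnbgchem.key_properties.model_type')
DOC.set_value("NPZD")  # must be one of the listed valid choices

DOC.set_id('cmip6.ocnbgchem.key_properties.elemental_stoichiometry')
DOC.set_value("Fixed")
```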
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
mne-tools/mne-tools.github.io
0.14/_downloads/plot_cwt_sensor_connectivity.ipynb
bsd-3-clause
[ "%matplotlib inline", "Compute seed based time-frequency connectivity in sensor space\nComputes the connectivity between a seed-gradiometer close to the visual cortex\nand all other gradiometers. The connectivity is computed in the time-frequency\ndomain using Morlet wavelets and the debiased Squared Weighted Phase Lag Index\n[1]_ is used as connectivity metric.\n.. [1] Vinck et al. \"An improved index of phase-synchronization for electro-\n physiological data in the presence of volume-conduction, noise and\n sample-size bias\" NeuroImage, vol. 55, no. 4, pp. 1548-1565, Apr. 2011.", "# Author: Martin Luessi <mluessi@nmr.mgh.harvard.edu>\n#\n# License: BSD (3-clause)\n\nimport numpy as np\n\nimport mne\nfrom mne import io\nfrom mne.connectivity import spectral_connectivity, seed_target_indices\nfrom mne.datasets import sample\nfrom mne.time_frequency import AverageTFR\n\nprint(__doc__)", "Set parameters", "data_path = sample.data_path()\nraw_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw.fif'\nevent_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw-eve.fif'\n\n# Setup for reading the raw data\nraw = io.read_raw_fif(raw_fname)\nevents = mne.read_events(event_fname)\n\n# Add a bad channel\nraw.info['bads'] += ['MEG 2443']\n\n# Pick MEG gradiometers\npicks = mne.pick_types(raw.info, meg='grad', eeg=False, stim=False, eog=True,\n exclude='bads')\n\n# Create epochs for left-visual condition\nevent_id, tmin, tmax = 3, -0.2, 0.5\nepochs = mne.Epochs(raw, events, event_id, tmin, tmax, picks=picks,\n baseline=(None, 0), reject=dict(grad=4000e-13, eog=150e-6),\n preload=True)\n\n# Use 'MEG 2343' as seed\nseed_ch = 'MEG 2343'\npicks_ch_names = [raw.ch_names[i] for i in picks]\n\n# Create seed-target indices for connectivity computation\nseed = picks_ch_names.index(seed_ch)\ntargets = np.arange(len(picks))\nindices = seed_target_indices(seed, targets)\n\n# Define wavelet frequencies and number of cycles\ncwt_frequencies = np.arange(7, 30, 2)\ncwt_n_cycles = cwt_frequencies / 7.\n\n# Run the connectivity analysis using 2 parallel jobs\nsfreq = raw.info['sfreq'] # the sampling frequency\ncon, freqs, times, _, _ = spectral_connectivity(\n epochs, indices=indices,\n method='wpli2_debiased', mode='cwt_morlet', sfreq=sfreq,\n cwt_frequencies=cwt_frequencies, cwt_n_cycles=cwt_n_cycles, n_jobs=1)\n\n# Mark the seed channel with a value of 1.0, so we can see it in the plot\ncon[np.where(indices[1] == seed)] = 1.0\n\n# Show topography of connectivity from seed\ntitle = 'WPLI2 - Visual - Seed %s' % seed_ch\n\nlayout = mne.find_layout(epochs.info, 'meg') # use full layout\n\ntfr = AverageTFR(epochs.info, con, times, freqs, len(epochs))\ntfr.plot_topo(fig_facecolor='w', font_color='k', border='k')" ]
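As a complement to the topography plot, the connectivity between the seed and a single target channel can also be inspected directly, since `con` has shape `(n_targets, n_freqs, n_times)` in the `cwt_morlet` mode. The cell below is an added sketch, assuming the variables defined above (`con`, `freqs`, `times`, `picks_ch_names`, `seed_ch`) are still in scope; the target channel is simply the first picked channel, chosen only for illustration.

```python
import matplotlib.pyplot as plt

# Pick an arbitrary target channel for illustration (first entry of the picks).
target_idx = 0
target_ch = picks_ch_names[target_idx]

plt.figure()
plt.imshow(con[target_idx], aspect='auto', origin='lower',
           extent=[times[0], times[-1], freqs[0], freqs[-1]])
plt.xlabel('Time (s)')
plt.ylabel('Frequency (Hz)')
plt.title('Debiased WPLI: %s - %s' % (seed_ch, target_ch))
plt.colorbar()
```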
[ "code", "markdown", "code", "markdown", "code" ]
fujii-team/GPinv
notebooks/Abel_inversion.ipynb
apache-2.0
[ "An example of the Nonlinear inference.\nThis notebook briefly shows an inference examples for non-linear model with GPinv\nKeisuke Fujii 3rd Oct. 2016\nSynthetic observation\nConsider we observe a cylindrical transparent mediam with multiple ($N$) lines-of-sight, as shown below.\n<img src=figs/abel_inversion.png width=240pt>\nThe local emission intensity $g$ is a function of the radius $r$.\nThe observed emission intensity $\\mathbf{Y}$ is a result of the integration along the line-of-sight as\n$$\n\\mathbf{Y} = \\int_{x} g(r) dx + \\mathbf{e}\n$$\nwhere $\\mathbf{e}$ is a i.i.d. Gaussian noise.\nWe divided $g$ into $n$ discrete points $\\mathbf{g}$, then the above integration can be approximated as follows\n$$\n\\mathbf{Y} = \\mathrm{A} \\mathbf{g} + \\mathbf{e}\n$$\nNon-linear model and transform\nTo make sure $g(r)$ is positive, we define new function $f$,\n$$\ng(r) = \\exp(f(r))\n$$\nHere, we assume $f(r)$ follows the Gaussian Process with kernel $\\mathrm{K}$.\nThis transformation makes the problem non-linear.\nIn this notebook, we infer $\\mathbf{g}$ by \n1. Stochastic approximation of the variational Gaussian process.\n2. Markov Chain Monte-Carlo (MCMC) method.\nImport several libraries including GPinv", "import numpy as np\n%matplotlib inline\nimport matplotlib.pyplot as plt\n\nimport tensorflow as tf\nimport sys\n# In ../testing/ dir, we prepared a small script for generating the above matrix A\nsys.path.append('../testing/')\nimport make_LosMatrix\n# Import GPinv\nimport GPinv", "Synthetic signals\nHere, we make a synthetic measurement.\nThe synthetic signal $\\mathrm{y}$ is simulated from the grand truth solution $g_true$ and random gaussian noise.", "n = 30\nN = 40\n# radial coordinate\nr = np.linspace(0, 1., n)\n# synthetic latent function\nf = np.exp(-(r-0.3)*(r-0.3)/0.1) + np.exp(-(r+0.3)*(r+0.3)/0.1)\n\n# plotting the latent function\nplt.figure(figsize=(5,3))\nplt.plot(r, f)\nplt.xlabel('$r$: Radial coordinate')\nplt.ylabel('$f$: Function value')", "Prepare the synthetic signal.", "# los height\nz = np.linspace(-0.9,0.9, N)\n# Los-matrix\nA = make_LosMatrix.make_LosMatrix(r, z)\n\n# noise amplitude \ne_amp = 0.1\n# synthetic observation\ny = np.dot(A, f) + e_amp * np.random.randn(N)\n\nplt.figure(figsize=(5,3))\nplt.plot(z, y, 'o', [-1,1],[0,0], '--k', ms=5)\nplt.xlabel('$z$: Los-height')\nplt.ylabel('$y$: Observation')", "Inference\nIn order to carry out an inference, a custom likelihood, which calculates $p(\\mathbf{Y}|\\mathbf{f})$ with given $\\mathbf{f}$, must be prepared according to the problem.\nThe method to be implemented is logp(f,Y) method, that calculates log-likelihood for data Y with given f", "class AbelLikelihood(GPinv.likelihoods.Likelihood):\n def __init__(self, Amat):\n GPinv.likelihoods.Likelihood.__init__(self)\n self.Amat = GPinv.param.DataHolder(Amat)\n self.variance = GPinv.param.Param(np.ones(1), GPinv.transforms.positive)\n\n def logp(self, F, Y):\n Af = self.sample_F(F)\n Y = tf.tile(tf.expand_dims(Y, 0), [tf.shape(F)[0],1,1])\n return GPinv.densities.gaussian(Af, Y, self.variance)\n \n def sample_F(self, F):\n N = tf.shape(F)[0]\n Amat = tf.tile(tf.expand_dims(self.Amat,0), [N, 1,1])\n Af = tf.batch_matmul(Amat, tf.exp(F))\n return Af\n \n def sample_Y(self, F):\n f_sample = self.sample_F(F)\n return f_sample + tf.random_normal(tf.shape(f_sample)) * tf.sqrt(self.variance)", "Variational inference by StVGP\nIn StVGP, we evaluate the posterior $p(\\mathbf{f}|\\mathbf{y},\\theta)$ by approximating as a multivariate Gaussian distribution.\nThe 
hyperparameters are obtained at the maximum of the evidence lower bound (ELBO) $p(\\mathbf{y}|\\theta)$.\nKernel\nThe statistical property is interpreted in Gaussian Process kernel.\nIn our example, since $f$ is a cylindrically symmetric function, we adopt RBF_csym kernel.\nMeanFunction\nTo make $f$ scale invariant, we added the constant mean_function to $f$.", "model_stvgp = GPinv.stvgp.StVGP(r.reshape(-1,1), y.reshape(-1,1), \n kern = GPinv.kernels.RBF_csym(1,1),\n mean_function = GPinv.mean_functions.Constant(1),\n likelihood=AbelLikelihood(A),\n num_samples=10)", "Check the initial estimate", "# Data Y should scatter around the transform F of the GP function f.\nsample_F = model_stvgp.sample_F(100)\n\nplt.figure(figsize=(5,3))\nplt.plot(z, y, 'o', [-1,1],[0,0], '--k', ms=5)\nfor s in sample_F:\n plt.plot(z, s, '-k', alpha=0.1, lw=1)\nplt.xlabel('$z$: Los-height')\nplt.ylabel('$y$: Observation')", "Iteration\nAlthough the initial estimate is not very good, we start iteration.", "# This function is just for the visualization of the iteration\nfrom IPython import display\n\nlogf = []\ndef logger(x):\n if (logger.i % 10) == 0:\n obj = -model_stvgp._objective(x)[0]\n logf.append(obj)\n # display\n if (logger.i % 100) ==0:\n plt.clf()\n plt.plot(logf, '--ko', markersize=3, linewidth=1)\n plt.ylabel('ELBO')\n plt.xlabel('iteration')\n display.display(plt.gcf())\n display.clear_output(wait=True)\n logger.i+=1\nlogger.i = 1\n\nplt.figure(figsize=(5,3))\n# Rough optimization by scipy.minimize\nmodel_stvgp.optimize()\n# Final optimization by tf.train\ntrainer = tf.train.AdamOptimizer(learning_rate=0.002)\n_= model_stvgp.optimize(trainer, maxiter=5000, callback=logger)\n\ndisplay.clear_output(wait=True)", "Plot results", "# Predict the latent function f, which follows Gaussian Process\nr_new = np.linspace(0.,1.2, 40)\nf_pred, f_var = model_stvgp.predict_f(r_new.reshape(-1,1))\n\n# Data Y should scatter around the transform F of the GP function f.\nsample_F = model_stvgp.sample_F(100)\n\nplt.figure(figsize=(8,3))\nplt.subplot(1,2,1)\nf_plus = np.exp(f_pred.flatten() + 2.*np.sqrt(f_var.flatten()))\nf_minus = np.exp(f_pred.flatten() - 2.*np.sqrt(f_var.flatten()))\nplt.fill_between(r_new, f_plus, f_minus, alpha=0.2)\nplt.plot(r_new, np.exp(f_pred.flatten()), label='StVGP',lw=1.5)\nplt.plot(r, f, '-r', label='true',lw=1.5)# ground truth\nplt.xlabel('$r$: Radial coordinate')\nplt.ylabel('$g$: Latent function')\nplt.legend(loc='best')\n\nplt.subplot(1,2,2)\nfor s in sample_F:\n plt.plot(z, s, '-k', alpha=0.05, lw=1)\nplt.plot(z, y, 'o', ms=5)\nplt.plot(z, np.dot(A, f), 'r', label='true',lw=1.5)\nplt.xlabel('$z$: Los-height')\nplt.ylabel('$y$: Observation')\nplt.legend(loc='best')\n\nplt.tight_layout()", "MCMC\nMCMC is fully Bayesian inference.\nThe hyperparameters are numerically marginalized out.", "model_gpmc = GPinv.gpmc.GPMC(r.reshape(-1,1), y.reshape(-1,1), \n kern = GPinv.kernels.RBF_csym(1,1),\n mean_function = GPinv.mean_functions.Constant(1),\n likelihood=AbelLikelihood(A))", "Sample from posterior", "samples = model_gpmc.sample(300, thin=3, burn=500, verbose=True, epsilon=0.01, Lmax=15)", "Plot result", "r_new = np.linspace(0.,1.2, 40)\n\nplt.figure(figsize=(8,3))\n# Latent function\nplt.subplot(1,2,1)\nfor i in range(0,len(samples),3):\n s = samples[i]\n model_gpmc.set_state(s)\n f_pred, f_var = model_gpmc.predict_f(r_new.reshape(-1,1))\n plt.plot(r_new, np.exp(f_pred.flatten()), 'k',lw=1, alpha=0.1)\nplt.plot(r, f, '-r', label='true',lw=1.5)# ground truth\nplt.xlabel('$r$: Radial 
coordinate')\nplt.ylabel('$g$: Latent function')\nplt.legend(loc='best')\n\n# \nplt.subplot(1,2,2)\nfor i in range(0,len(samples),3):\n s = samples[i]\n model_gpmc.set_state(s)\n f_sample = model_gpmc.sample_F()\n plt.plot(z, f_sample[0], 'k',lw=1, alpha=0.1)\nplt.plot(z, y, 'o', ms=5)\nplt.plot(z, np.dot(A, f), 'r', label='true',lw=1.5)\nplt.xlabel('$z$: Los-height')\nplt.ylabel('$y$: Observation')\nplt.legend(loc='best')\n\nplt.tight_layout()", "Comparison between StVGP and GPMC\nThe StVGP makes a point estimate for the hyperparameter (variance and length-scale of the kernel, mean function value, and variance at the likelihood), \nwhile the GPMC integrate them out.\nTherefore, there is some difference between them.\nDifference in the hyperparameter estimation", "# make a histogram (posterior) for these hyperparameter estimated by GPMC\ngpmc_hyp_samples = {\n 'k_variance' : [], # variance \n 'k_lengthscale': [], # kernel lengthscale\n 'mean' : [], # mean function values\n 'lik_variance' : [], # variance for the likelihood\n}\nfor s in samples:\n model_gpmc.set_state(s)\n gpmc_hyp_samples['k_variance' ].append(model_gpmc.kern.variance.value[0])\n gpmc_hyp_samples['k_lengthscale'].append(model_gpmc.kern.lengthscales.value[0])\n gpmc_hyp_samples['mean'].append(model_gpmc.mean_function.c.value[0])\n gpmc_hyp_samples['lik_variance'].append(model_gpmc.likelihood.variance.value[0])\n\nplt.figure(figsize=(10,2))\n# kernel variance\nplt.subplot(1,4,1)\nplt.title('k_variance')\n_= plt.hist(gpmc_hyp_samples['k_variance'])\nplt.plot([model_stvgp.kern.variance.value]*2, [0,100], '-r')\nplt.subplot(1,4,2)\nplt.title('k_lengthscale')\n_= plt.hist(gpmc_hyp_samples['k_lengthscale'])\nplt.plot([model_stvgp.kern.lengthscales.value]*2, [0,100], '-r')\nplt.subplot(1,4,3)\nplt.title('mean')\n_= plt.hist(gpmc_hyp_samples['mean'])\nplt.plot([model_stvgp.mean_function.c.value]*2, [0,100], '-r')\nplt.subplot(1,4,4)\nplt.title('lik_variance')\n_= plt.hist(gpmc_hyp_samples['lik_variance'])\nplt.plot([model_stvgp.likelihood.variance.value]*2, [0,100], '-r')\n\nplt.tight_layout()\n\nprint('Here the red line shows the MAP estimate by StVGP')", "Difference in the prediction.", "r_new = np.linspace(0.,1.2, 40)\n\nplt.figure(figsize=(4,3))\n# StVGP\nf_pred, f_var = model_stvgp.predict_f(r_new.reshape(-1,1))\nf_plus = np.exp(f_pred.flatten() + 2.*np.sqrt(f_var.flatten()))\nf_minus = np.exp(f_pred.flatten() - 2.*np.sqrt(f_var.flatten()))\nplt.plot(r_new, np.exp(f_pred.flatten()), 'b', label='StVGP',lw=1.5)\nplt.plot(r_new, f_plus, '--b', r_new, f_minus, '--b', lw=1.5)\n\n# GPMC\nfor i in range(0,len(samples),3):\n s = samples[i]\n model_gpmc.set_state(s)\n f_pred, f_var = model_gpmc.predict_f(r_new.reshape(-1,1))\n plt.plot(r_new, np.exp(f_pred.flatten()), 'k',lw=1, alpha=0.1)\nplt.plot(r_new, np.exp(f_pred.flatten()), 'k',lw=1, alpha=0.1, label='GPMC')\n\nplt.xlabel('$r$: Radial coordinate')\nplt.ylabel('$g$: Latent function')\nplt.legend(loc='best')" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
Z0m6ie/Zombie_Code
Data_Science_Course/Michigan Data Analysis Course/0 Introduction to Data Science in Python/Week1/Week+1.ipynb
mit
[ "You are currently looking at version 1.1 of this notebook. To download notebooks and datafiles, as well as get help on Jupyter notebooks in the Coursera platform, visit the Jupyter Notebook FAQ course resource.\n\nThe Python Programming Language: Functions", "x = 1\ny = 2\nx + y\n\nx", "<br>\nadd_numbers is a function that takes two numbers and adds them together.", "def add_numbers(x, y):\n return x + y\n\nadd_numbers(x, y)", "<br>\nadd_numbers updated to take an optional 3rd parameter. Using print allows printing of multiple expressions within a single cell.", "def add_numbers(x,y,z=None):\n if (z==None):\n return x+y\n else:\n return x+y+z\n\nprint(add_numbers(1, 2))\nprint(add_numbers(1, 2, 3))", "<br>\nadd_numbers updated to take an optional flag parameter.", "def add_numbers(x, y, z=None, flag=False):\n if (flag):\n print('Flag is true!')\n if (z==None):\n return x + y\n else:\n return x + y + z\n \nprint(add_numbers(1, 2, flag=True))", "<br>\nAssign function add_numbers to variable a.", "def add_numbers(x,y):\n return x+y\n\na = add_numbers\na(1,2)", "<br>\nThe Python Programming Language: Types and Sequences\n<br>\nUse type to return the object's type.", "type('This is a string')\n\ntype(None)\n\ntype(1)\n\ntype(1.0)\n\ntype(add_numbers)", "<br>\nTuples are an immutable data structure (cannot be altered).", "x = (1, 'a', 2, 'b')\ntype(x)", "<br>\nLists are a mutable data structure.", "x = [1, 'a', 2, 'b']\ntype(x)", "<br>\nUse append to append an object to a list.", "x.append(3.3)\nprint(x)", "<br>\nThis is an example of how to loop through each item in the list.", "for item in x:\n print(item)", "<br>\nOr using the indexing operator:", "i=0\nwhile( i != len(x) ):\n print(x[i])\n i = i + 1", "<br>\nUse + to concatenate lists.", "[1,2] + [3,4]", "<br>\nUse * to repeat lists.", "[1]*3", "<br>\nUse the in operator to check if something is inside a list.", "1 in [1, 2, 3]", "<br>\nNow let's look at strings. 
Use bracket notation to slice a string.", "x = 'This is a string'\nprint(x[0]) #first character\nprint(x[0:1]) #first character, but we have explicitly set the end character\nprint(x[0:2]) #first two characters\nprint(x[::-1])", "<br>\nThis will return the last element of the string.", "x[-1]", "<br>\nThis will return the slice starting from the 4th element from the end and stopping before the 2nd element from the end.", "x[-4:-2]", "<br>\nThis is a slice from the beginning of the string and stopping before the 3rd element.", "x[:3]", "<br>\nAnd this is a slice starting from the 3rd element of the string and going all the way to the end.", "x[3:]\n\nfirstname = 'Christopher'\nlastname = 'Brooks'\n\nprint(firstname + ' ' + lastname)\nprint(firstname*3)\nprint('Chris' in firstname)\n", "<br>\nsplit returns a list of all the words in a string, or a list split on a specific character.", "firstname = 'Christopher Arthur Hansen Brooks'.split(' ')[0] # [0] selects the first element of the list\nlastname = 'Christopher Arthur Hansen Brooks'.split(' ')[-1] # [-1] selects the last element of the list\nprint(firstname)\nprint(lastname)", "<br>\nMake sure you convert objects to strings before concatenating.", "'Chris' + 2\n\n'Chris' + str(2)", "<br>\nDictionaries associate keys with values.", "x = {'Christopher Brooks': 'brooksch@umich.edu', 'Bill Gates': 'billg@microsoft.com'}\nx['Christopher Brooks'] # Retrieve a value by using the indexing operator\n\n\nx['Kevyn Collins-Thompson'] = \"Test Test\"\nx['Kevyn Collins-Thompson']", "<br>\nIterate over all of the keys:", "for name in x:\n print(x[name])", "<br>\nIterate over all of the values:", "for email in x.values():\n print(email)", "<br>\nIterate over all of the items in the list:", "for name, email in x.items():\n print(name)\n print(email)", "<br>\nYou can unpack a sequence into different variables:", "x = ('Christopher', 'Brooks', 'brooksch@umich.edu')\nfname, lname, email = x\n\nfname\n\nlname", "<br>\nMake sure the number of values you are unpacking matches the number of variables being assigned.", "x = ('Christopher', 'Brooks', 'brooksch@umich.edu', 'Ann Arbor')\nfname, lname, email, location = x", "<br>\nThe Python Programming Language: More on Strings", "print(\"Chris\" + 2)\n\nprint('Chris' + str(2))", "<br>\nPython has a built in method for convenient string formatting.", "sales_record = {\n'price': 3.24,\n'num_items': 4,\n'person': 'Chris'}\n\nsales_statement = '{} bought {} item(s) at a price of {} each for a total of {}'\n\nprint(sales_statement.format(sales_record['person'],\n sales_record['num_items'],\n sales_record['price'],\n sales_record['num_items']*sales_record['price']))\n", "<br>\nReading and Writing CSV files\n<br>\nLet's import our datafile mpg.csv, which contains fuel economy data for 234 cars.\n\nmpg : miles per gallon\nclass : car classification\ncty : city mpg\ncyl : # of cylinders\ndispl : engine displacement in liters\ndrv : f = front-wheel drive, r = rear wheel drive, 4 = 4wd\nfl : fuel (e = ethanol E85, d = diesel, r = regular, p = premium, c = CNG)\nhwy : highway mpg\nmanufacturer : automobile manufacturer\nmodel : model of car\ntrans : type of transmission\nyear : model year", "import csv\nimport pandas as pd\n\n# Nice, sets decimple point\n%precision 2\n\nwith open('mpg.csv') as csvfile:\n mpg = list(csv.DictReader(csvfile))\n\ndf = pd.read_csv('mpg.csv')\n \nmpg[:3] # The first three dictionaries in our list.\ndf", "<br>\ncsv.Dictreader has read in each row of our csv file as a dictionary. 
len shows that our list is comprised of 234 dictionaries.", "len(mpg)", "<br>\nkeys gives us the column names of our csv.", "mpg[0].keys()", "<br>\nThis is how to find the average cty fuel economy across all cars. All values in the dictionaries are strings, so we need to convert to float.", "sum(float(d['cty']) for d in mpg) / len(mpg)", "<br>\nSimilarly this is how to find the average hwy fuel economy across all cars.", "sum(float(d['hwy']) for d in mpg) / len(mpg)", "<br>\nUse set to return the unique values for the number of cylinders the cars in our dataset have.", "# set returns unique values\ncylinders = set(d['cyl'] for d in mpg)\ncylinders", "<br>\nHere's a more complex example where we are grouping the cars by number of cylinder, and finding the average cty mpg for each group.", "CtyMpgByCyl = []\n\nfor c in cylinders: # iterate over all the cylinder levels\n summpg = 0\n cyltypecount = 0\n for d in mpg: # iterate over all dictionaries\n if d['cyl'] == c: # if the cylinder level type matches,\n summpg += float(d['cty']) # add the cty mpg\n cyltypecount += 1 # increment the count\n CtyMpgByCyl.append((c, summpg / cyltypecount)) # append the tuple ('cylinder', 'avg mpg')\n\nCtyMpgByCyl.sort(key=lambda x: x[0])\nCtyMpgByCyl", "<br>\nUse set to return the unique values for the class types in our dataset.", "vehicleclass = set(d['class'] for d in mpg) # what are the class types\nvehicleclass", "<br>\nAnd here's an example of how to find the average hwy mpg for each class of vehicle in our dataset.", "HwyMpgByClass = []\n\nfor t in vehicleclass: # iterate over all the vehicle classes\n summpg = 0\n vclasscount = 0\n for d in mpg: # iterate over all dictionaries\n if d['class'] == t: # if the cylinder amount type matches,\n summpg += float(d['hwy']) # add the hwy mpg\n vclasscount += 1 # increment the count\n HwyMpgByClass.append((t, summpg / vclasscount)) # append the tuple ('class', 'avg mpg')\n\nHwyMpgByClass.sort(key=lambda x: x[1])\nHwyMpgByClass", "<br>\nThe Python Programming Language: Dates and Times", "import datetime as dt\nimport time as tm", "<br>\ntime returns the current time in seconds since the Epoch. 
(January 1st, 1970)", "tm.time()", "<br>\nConvert the timestamp to datetime.", "dtnow = dt.datetime.fromtimestamp(tm.time())\ndtnow", "<br>\nHandy datetime attributes:", "dtnow.year, dtnow.month, dtnow.day, dtnow.hour, dtnow.minute, dtnow.second # get year, month, day, etc.from a datetime", "<br>\ntimedelta is a duration expressing the difference between two dates.", "delta = dt.timedelta(days = 100) # create a timedelta of 100 days\ndelta\n\ndt.date.today()", "<br>\ndate.today returns the current local date.", "today = dt.date.today()\n\ntoday - delta # the date 100 days ago\n\ntoday > today-delta # compare dates", "<br>\nThe Python Programming Language: Objects and map()\n<br>\nAn example of a class in python:", "class Person:\n department = 'School of Information' #a class variable\n\n def set_name(self, new_name): #a method\n self.name = new_name\n def set_location(self, new_location):\n self.location = new_location\n\nperson = Person()\nperson.set_name('Christopher Brooks')\nperson.set_location('Ann Arbor, MI, USA')\nprint('{} live in {} and works in the department {}'.format(person.name, person.location, person.department))", "<br>\nHere's an example of mapping the min function between two lists.", "store1 = [10.00, 11.00, 12.34, 2.34]\nstore2 = [9.00, 11.10, 12.34, 2.01]\ncheapest = map(min, store1, store2)\ncheapest", "<br>\nNow let's iterate through the map object to see the values.", "for item in cheapest:\n print (item)\n\npeople = ['Dr. Christopher Brooks', 'Dr. Kevyn Collins-Thompson', 'Dr. VG Vinod Vydiswaran', 'Dr. Daniel Romero']\n\ndef split_title_and_name(person):\n title = person.split(' ')[0]\n lname = person.split(' ')[-1]\n return title +\" \"+ lname\n\nlist(map(split_title_and_name, people))", "<br>\nThe Python Programming Language: Lambda and List Comprehensions\n<br>\nHere's an example of lambda that takes in three parameters and adds the first two.", "# Single function only\nmy_function = lambda a, b, c : a + b + c\n\nmy_function(1, 2, 3)\n\npeople = ['Dr. Christopher Brooks', 'Dr. Kevyn Collins-Thompson', 'Dr. VG Vinod Vydiswaran', 'Dr. 
Daniel Romero']\n\ndef split_title_and_name(person):\n return person.split()[0] + ' ' + person.split()[-1]\n\n#option 1\nfor person in people:\n print(split_title_and_name(person) == (lambda x: x.split()[0] + ' ' + x.split()[-1])(person))\n\n#option 2\nlist(map(split_title_and_name, people)) == list(map(lambda person: person.split()[0] + ' ' + person.split()[-1], people))", "<br>\nLet's iterate from 0 to 999 and return the even numbers.", "my_list = []\nfor number in range(0, 1000):\n if number % 2 == 0:\n my_list.append(number)\nmy_list", "<br>\nNow the same thing but with list comprehension.", "my_list = [number for number in range(0,1000) if number % 2 == 0]\nmy_list\n\ndef times_tables():\n lst = []\n for i in range(10):\n for j in range (10):\n lst.append(i*j)\n return lst\n\ntimes_tables() == [j*i for i in range(10) for j in range(10)]\n\nlowercase = 'abcdefghijklmnopqrstuvwxyz'\ndigits = '0123456789'\n \ncorrect_answer = [a+b+c+d for a in lowercase for b in lowercase for c in digits for d in digits]\n\ncorrect_answer[0:100]", "<br>\nThe Python Programming Language: Numerical Python (NumPy)", "import numpy as np", "<br>\nCreating Arrays\nCreate a list and convert it to a numpy array", "mylist = [1, 2, 3]\nx = np.array(mylist)\nx", "<br>\nOr just pass in a list directly", "y = np.array([4, 5, 6])\ny", "<br>\nPass in a list of lists to create a multidimensional array.", "m = np.array([[7, 8, 9], [10, 11, 12]])\nm", "<br>\nUse the shape method to find the dimensions of the array. (rows, columns)", "m.shape", "<br>\narange returns evenly spaced values within a given interval.", "n = np.arange(0, 30, 2) # start at 0 count up by 2, stop before 30\nn", "<br>\nreshape returns an array with the same data with a new shape.", "n = n.reshape(3, 5) # reshape array to be 3x5\nn", "<br>\nlinspace returns evenly spaced numbers over a specified interval.", "o = np.linspace(0, 4, 9) # return 9 evenly spaced values from 0 to 4\no", "<br>\nresize changes the shape and size of array in-place.", "o.resize(3, 3)\no", "<br>\nones returns a new array of given shape and type, filled with ones.", "np.ones((3, 2))", "<br>\nzeros returns a new array of given shape and type, filled with zeros.", "np.zeros((2, 3))", "<br>\neye returns a 2-D array with ones on the diagonal and zeros elsewhere.", "np.eye(3)", "<br>\ndiag extracts a diagonal or constructs a diagonal array.", "np.diag(y)", "<br>\nCreate an array using repeating list (or see np.tile)", "np.array([1, 2, 3] * 3)", "<br>\nRepeat elements of an array using repeat.", "np.repeat([1, 2, 3], 3)", "<br>\nCombining Arrays", "p = np.ones([2, 3], int)\np", "<br>\nUse vstack to stack arrays in sequence vertically (row wise).", "np.vstack([p, 2*p])", "<br>\nUse hstack to stack arrays in sequence horizontally (column wise).", "np.hstack([p, 2*p])", "<br>\nOperations\nUse +, -, *, / and ** to perform element wise addition, subtraction, multiplication, division and power.", "print(x + y) # elementwise addition [1 2 3] + [4 5 6] = [5 7 9]\nprint(x - y) # elementwise subtraction [1 2 3] - [4 5 6] = [-3 -3 -3]\n\nprint(x * y) # elementwise multiplication [1 2 3] * [4 5 6] = [4 10 18]\nprint(x / y) # elementwise divison [1 2 3] / [4 5 6] = [0.25 0.4 0.5]\n\nprint(x**2) # elementwise power [1 2 3] ^2 = [1 4 9]", "<br>\nDot Product: \n$ \\begin{bmatrix}x_1 \\ x_2 \\ x_3\\end{bmatrix}\n\\cdot\n\\begin{bmatrix}y_1 \\ y_2 \\ y_3\\end{bmatrix}\n= x_1 y_1 + x_2 y_2 + x_3 y_3$", "x.dot(y) # dot product 1*4 + 2*5 + 3*6\n\nz = np.array([y, y**2])\nprint(len(z)) # number of rows of 
array", "<br>\nLet's look at transposing arrays. Transposing permutes the dimensions of the array.", "z = np.array([y, y**2])\nz", "<br>\nThe shape of array z is (2,3) before transposing.", "z.shape", "<br>\nUse .T to get the transpose.", "z.T", "<br>\nThe number of rows has swapped with the number of columns.", "z.T.shape", "<br>\nUse .dtype to see the data type of the elements in the array.", "z.dtype", "<br>\nUse .astype to cast to a specific type.", "z = z.astype('f')\nz.dtype", "<br>\nMath Functions\nNumpy has many built in math functions that can be performed on arrays.", "a = np.array([-4, -2, 1, 3, 5])\n\na.sum()\n\na.max()\n\na.min()\n\na.mean()\n\na.std()", "<br>\nargmax and argmin return the index of the maximum and minimum values in the array.", "a.argmax()\n\na.argmin()", "<br>\nIndexing / Slicing", "s = np.arange(13)**2\ns", "<br>\nUse bracket notation to get the value at a specific index. Remember that indexing starts at 0.", "s[0], s[4], s[-1]", "<br>\nUse : to indicate a range. array[start:stop]\nLeaving start or stop empty will default to the beginning/end of the array.", "s[1:5]", "<br>\nUse negatives to count from the back.", "s[-4:]", "<br>\nA second : can be used to indicate step-size. array[start:stop:stepsize]\nHere we are starting 5th element from the end, and counting backwards by 2 until the beginning of the array is reached.", "s[-5::-2]", "<br>\nLet's look at a multidimensional array.", "r = np.arange(36)\nr.resize((6, 6))\nr", "<br>\nUse bracket notation to slice: array[row, column]", "r[2, 2]", "<br>\nAnd use : to select a range of rows or columns", "r[3, 3:6]", "<br>\nHere we are selecting all the rows up to (and not including) row 2, and all the columns up to (and not including) the last column.", "r[:2, :-1]", "<br>\nThis is a slice of the last row, and only every other element.", "r[-1, ::2]", "<br>\nWe can also perform conditional indexing. Here we are selecting values from the array that are greater than 30. (Also see np.where)", "r[r > 30]", "<br>\nHere we are assigning all values in the array that are greater than 30 to the value of 30.", "r[r > 30] = 30\nr", "<br>\nCopying Data\nBe careful with copying and modifying arrays in NumPy!\nr2 is a slice of r", "r2 = r[:3,:3]\nr2", "<br>\nSet this slice's values to zero ([:] selects the entire array)", "r2[:] = 0\nr2", "<br>\nr has also been changed!", "r", "<br>\nTo avoid this, use r.copy to create a copy that will not affect the original array", "r_copy = r.copy()\nr_copy", "<br>\nNow when r_copy is modified, r will not be changed.", "r_copy[:] = 10\nprint(r_copy, '\\n')\nprint(r)", "<br>\nIterating Over Arrays\nLet's create a new 4 by 3 array of random numbers 0-9.", "test = np.random.randint(0, 10, (4,3))\ntest", "<br>\nIterate by row:", "for row in test:\n print(row)", "<br>\nIterate by index:", "for i in range(len(test)):\n print(test[i])", "<br>\nIterate by row and index:", "for i, row in enumerate(test):\n print('row', i, 'is', row)", "<br>\nUse zip to iterate over multiple iterables.", "test2 = test**2\ntest2\n\nfor i, j in zip(test, test2):\n print(i,'+',j,'=',i+j)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
tjhunter/karps
python/notebooks/Demo 2-details.ipynb
apache-2.0
[ "Bringing modularity and code reuse to Spark\nSpark does not let one define arbitrary functions and reuse them at will. In this example, we show how to decompose a problem into a set of simpler primitive functions, that nevertheless perform arbitrary operations that would not be allowed in Spark.\nWe are going to build a function that exemplifies the birthday paradox: given a set of birthdates, it will returns the number of people who happen to share a birthdate with someone else. This is easy to express using joins and groups. This function takes a dataset or a column as input (the birth dates) and returns a single number (the number of people who share the same birth day). This is an aggregation function! Our urge is of course to use it then in a different setting such as in a group, etc. As we will see, Karps allows us to write code that works for both Pandas and Spark, and that allows to plug any aggregation function in a very natural way.", "import pandas as pd\nimport karps as ks\nimport karps.functions as f\nfrom karps.display import show_phase\n\n# Make a session at the top, although it is not required immediately.\ns = ks.session(\"demo2\")", "This is an extremely small dataset:", "employees = ks.dataframe([\n (\"ACME\", \"John\", \"12/01\"),\n (\"ACME\", \"Kate\", \"09/04\"),\n (\"ACME\", \"Albert\", \"09/04\"),\n (\"Databricks\", \"Ali\", \"09/04\"),\n], schema=[\"company_name\", \"employee_name\", \"dob\"],\n name=\"employees\")\nemployees", "Now, here is the definition of the birthday paradox function. It is pretty simple code:", "# The number of people who share a birthday date with someone else.\n# Takes a column of data containing birthdates.\ndef paradoxal_count(c):\n with ks.scope(\"p_count\"): # Make it pretty:\n g = c.groupby(c).agg({'num_employees': f.count}, name=\"agg_count\")\n s = f.sum(g.num_employees[g.num_employees>=2], name=\"paradoxical_employees\")\n return s", "This is a simple function. If we wanted to try it, or write tests for it, we would prefer not to have to launch a Spark instance, which comes with some overhead. Let's write a simple test case using Pandas to be confident it is working as expected, and then use it in Spark.\nIt correctly found that 2 people share the same January 1st birth date.", "# A series of birth dates.\ntest_df = pd.Series([\"1/1\", \"3/5\", \"1/1\"])\nparadoxal_count(test_df)", "Now that we have this nice function, let's use against each of the companies in our dataset, with Spark.\nNotice that you can directly plug the function, no need to do translation, etc. This is impossible to do in Spark for complex functions like this one.\nWe get at the end a daframe with the name of the company and the number of employees that share the same birthdate:", "# Now use this to group by companies:\nres = (employees.dob\n .groupby(employees.company_name)\n .agg({\n \"paradoxical_employees\": paradoxal_count\n }))\nres", "This is still a dataframe. 
Now is the time to collect and see the content:", "o = f.collect(res)\no", "We run it using the session we opened before, and we use compute to inspect how Karps and Spark are evaluating the computations.", "comp = s.compute(o)\ncomp", "Let's look under the hood to see how this gets translated.\nThe transformation is defined using two nested first-order functions, which get collected using the FunctionalShuffle operation called shuffle9.", "show_phase(comp, \"initial\")\n\nshow_phase(comp, \"final\")", "After optimization and flattening, the graph actually turns out to be a linear graph with a first shuffle, a filter, a second shuffle and then a final aggregate. You can click around to see how computations are being done.", "show_phase(comp, \"final\")", "And finally the value:", "comp.values()", "As a conclusion, with Karps, you can take any reasonable function and reuse it in arbitrary ways in a functional, type-safe manner. Karps will write for you the complex SQL queries that you would have to write by hand. All errors are detected well before the actual runtime, which greatly simplifies debugging.\nLaziness and structured transforms bring to Spark some fundamental characteristics such as modularity, reusability, better testing and fast-fail comprehensive error checking, on top of automatic performance optimizations.", "show_phase(comp, \"parsed\")\n\nshow_phase(comp, \"physical\")\n\nshow_phase(comp, \"rdd\")\n\ncomp.dump_profile(\"karps-trace-2.json\")" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
cuttlefishh/papers
red-sea-single-cell-genomes/code/singlecell_tara_stats.ipynb
mit
[ "Statistics on counts of OGs (columns) in Tara surface samples (rows)\n\nFinding significant OGs across groups of Tara samples using ANCOM\nDetermining whether distribution of percent Tara samples found in differs for subgroups (z-test)\n\nDoes everything separately for pelag and proch data.\nImport libraries", "import pandas as pd\nimport numpy as np\nimport seaborn as sns\nimport re\nfrom skbio.stats.composition import ancom\nfrom sys import argv\nimport matplotlib.pyplot as plt\n%matplotlib inline", "Assign variables", "# species = argv[1]\n# evalue = argv[2]\n# clusters_path = argv[3]\n\n# Prochlorococcus results\nspecies = 'proch'\nevalue = '1e-5'\nmyaxis = [0, 64, 0, 0.36]\nclusters_path = '~/singlecell/clusters/orthomcl-pro4/groups.all_pro.list'\n\n# Pelagibacter results\nspecies = 'pelag'\nevalue = '1e-5'\nmyaxis = [0, 64, 0, 0.36]\nclusters_path = '~/singlecell/clusters/orthomcl-sar4/groups.all_sar.list'", "Format and save Tara metadata", "# Tara metadata\ndf_tara_names = pd.read_csv('/Users/luke/singlecell/tara/Tara_Prok139_PANGAEA_Sample.csv')\ndf_tara_metadata = pd.read_csv('/Users/luke/singlecell/tara/Tara_Table_W8.csv')\ndf_tara_metadata = df_tara_names.merge(df_tara_metadata)\n\n# SRF metadata\ndf_tara_metadata.index = df_tara_metadata['Sample label [TARA_station#_environmental-feature_size-fraction]']\nindex_SRF = [index for index in list(df_tara_metadata.index) if 'SRF' in index]\ndf_tara_metadata_SRF = df_tara_metadata.loc[index_SRF]\ndf_tara_metadata_SRF.index = df_tara_metadata_SRF.index\n\n# Latitude column\ndf_tara_metadata_SRF['category_latitude'] = pd.Series(0, index=np.arange(len(df_tara_metadata_SRF.columns)), dtype='object')\nfor index, lat in abs(df_tara_metadata_SRF['Mean_Lat*']).iteritems():\n if lat < 23.5:\n df_tara_metadata_SRF.loc[index, 'category_latitude'] = 'tropical'\n elif lat > 40:\n df_tara_metadata_SRF.loc[index, 'category_latitude'] = 'temperate'\n else:\n df_tara_metadata_SRF.loc[index, 'category_latitude'] = 'subtropical'\n\n# Temperature column\ndf_tara_metadata_SRF['category_temperature'] = pd.Series(0, index=np.arange(len(df_tara_metadata_SRF.columns)), dtype='object')\nfor index, temp in df_tara_metadata_SRF['Mean_Temperature [deg C]*'].iteritems():\n if temp < 10:\n df_tara_metadata_SRF.loc[index, 'category_temperature'] = 'polar'\n elif temp > 20:\n df_tara_metadata_SRF.loc[index, 'category_temperature'] = 'tropical'\n else:\n df_tara_metadata_SRF.loc[index, 'category_temperature'] = 'temperate'\n\n# Red Sea column\ndf_tara_metadata_SRF['category_redsea'] = pd.Series(0, index=np.arange(len(df_tara_metadata_SRF.columns)), dtype='bool')\nfor index in df_tara_metadata_SRF.index:\n if index in ['TARA_031_SRF_0.22-1.6', 'TARA_031_SRF_<-0.22', 'TARA_032_SRF_0.22-1.6', 'TARA_032_SRF_<-0.22', 'TARA_033_SRF_0.22-1.6', 'TARA_034_SRF_0.1-0.22', 'TARA_034_SRF_0.22-1.6', 'TARA_034_SRF_<-0.22']:\n df_tara_metadata_SRF.loc[index, 'category_redsea'] = True\n else:\n df_tara_metadata_SRF.loc[index, 'category_redsea'] = False\n\n# export mapping file\ndf_tara_metadata_SRF.to_csv('tara_metadata_SRF.tsv', sep='\\t')", "Format and save count data", "# Paths of input files, containing cluster counts in Tara samples\npaths = pd.Series.from_csv('/Users/luke/singlecell/tara/paths_%s_%s.list' % (species, evalue), header=-1, sep='\\t', index_col=None)\n\n# Data frame of non-zero cluster counts in Tara samples (NaN if missing in sample but found in others)\npieces = []\nfor path in paths:\n fullpath = \"/Users/luke/singlecell/tara/PROK-139/%s\" % path\n counts = 
pd.DataFrame.from_csv(fullpath, header=-1, sep='\\t', index_col=0)\n pieces.append(counts)\ndf_nonzero = pd.concat(pieces, axis=1)\nheadings = paths.tolist()\ndf_nonzero.columns = headings\n\n# SRF dataframe, transposed, zeros, plus 1, renamed indexes\ncol_SRF = [col for col in list(df_nonzero.columns) if 'SRF' in col]\ndf_nonzero_SRF = df_nonzero[col_SRF]\ndf_nonzero_SRF_T = df_nonzero_SRF.transpose()\ndf_nonzero_SRF_T.fillna(0, inplace=True)\ndf_nonzero_SRF_T_plusOne = df_nonzero_SRF_T + 1\ndf_nonzero_SRF_T_plusOne.index = [re.sub(species, 'TARA', x) for x in df_nonzero_SRF_T_plusOne.index]\ndf_nonzero_SRF_T_plusOne.index = [re.sub('_1e-5', '', x) for x in df_nonzero_SRF_T_plusOne.index]\n\n# Dataframe of all clusters (includes clusters missing from Tara)\nclusters = pd.Series.from_csv(clusters_path, header=-1, sep='\\t', index_col=None)\ndf_all = df_nonzero.loc[clusters]\ndf_all_SRF = df_all[col_SRF]\ndf_all_SRF_T = df_all_SRF.transpose()\ndf_all_SRF_T.fillna(0, inplace=True)\n\n# remove '1e-5' from count indexes\ndf_nonzero_SRF_T.index = [re.sub('_1e-5', '', x) for x in df_nonzero_SRF_T.index]\ndf_all_SRF_T.index = [re.sub('_1e-5', '', x) for x in df_all_SRF_T.index]\n\n# export counts to file\ndf_nonzero_SRF_T.to_csv('tara_%s_nonzero_SRF.csv' % species)\ndf_all_SRF_T.to_csv('tara_%s_all_SRF.csv' % species)", "ANCOM", "# ANCOM with defaults alpha=0.05, tau=0.02, theta=0.1\n# for grouping in ['category_latitude', 'category_temperature', 'category_redsea']:\n# results = ancom(df_nonzero_SRF_T_plusOne, df_tara_metadata_SRF[grouping], multiple_comparisons_correction='holm-bonferroni')\n# results.to_csv('ancom.%s_nonzero_SRF_T_plusOne.%s.csv' % (species, grouping))", "Z-test", "# lookup dict for genus name\ndg = {\n 'pelag': 'Pelagibacter',\n 'proch': 'Prochlorococcus'\n}\n\n# load OG metadata to determine RS-only OGs\ndf_og_metadata = pd.read_csv('/Users/luke/singlecell/notebooks/og_metadata.tsv', sep='\\t', index_col=0)\n\nog_rs = df_og_metadata.index[(df_og_metadata['Red_Sea_only'] == True) & (df_og_metadata['genus'] == dg[species])]\nog_other = df_og_metadata.index[(df_og_metadata['Red_Sea_only'] == False) & (df_og_metadata['genus'] == dg[species])]\n\ndf_all_SRF_T_rs = df_all_SRF_T[og_rs]\ndf_all_SRF_T_other = df_all_SRF_T[og_other]\n\ncount = (df_all_SRF_T > 0).sum()\ncount_rs = (df_all_SRF_T_rs > 0).sum()\ncount_other = (df_all_SRF_T_other > 0).sum()\n\n# save count data\ncount.to_csv('hist_counts_%s_ALL_og_presence_absence_in_63_tara_srf.csv' % species)\ncount_rs.to_csv('hist_counts_%s_RSassoc_og_presence_absence_in_63_tara_srf.csv' % species)\n\nnum_samples = df_all_SRF_T.shape[0]\nnum_ogs = max_bin = df_all_SRF_T.shape[1]\nnum_ogs_rsonly = count_rs.shape[0]\nnum_ogs_other = count_other.shape[0]\n\n# all OGs AND RS-assoc OGs\nplt.figure(figsize=(10,10))\nsns.distplot(count_rs, bins=np.arange(num_samples+2), color=sns.xkcd_rgb['orange'], label='Red Sea-associated ortholog groups (%s)' % num_ogs_rsonly)\nsns.distplot(count, bins=np.arange(num_samples+2), color=sns.xkcd_rgb['blue'], label='All %s ortholog groups (%s)' % (dg[species], num_ogs))\nplt.xlabel('Number of %s Tara surface samples found in' % num_samples, fontsize=18)\nplt.ylabel('Proportion of ortholog groups', fontsize=18)\nplt.xticks(np.arange(0,num_samples+1,10)+0.5, ('0', '10', '20', '30', '40', '50', '60'), fontsize=14)\nplt.yticks(fontsize=14)\nplt.legend(fontsize=16, loc='upper left')\nplt.axis(myaxis)\nplt.savefig('hist_%s_paper_og_presence_absence_in_63_tara_srf.pdf' % species)\n\n# all 
OGs\nplt.figure(figsize=(8,6))\nsns.distplot(count, bins=num_samples+1)\nplt.axis([-0, num_samples, 0, .35])\nplt.xlabel('Number of %s Tara surface samples found in' % num_samples)\nplt.ylabel('Proportion of %s OGs' % num_ogs)\nplt.title('Presence/absence of all %s %s OGs in %s Tara surface samples' % (num_ogs, species, num_samples))\nplt.axis(myaxis)\nplt.savefig('hist_%s_all_og_presence_absence_in_63_tara_srf.pdf' % species)\n\n# RS-assoc OGs\nplt.figure(figsize=(8,6))\nsns.distplot(count_rs, bins=num_samples+1)\nplt.axis([-0, num_samples, 0, .25])\nplt.xlabel('Number of %s Tara surface samples found in' % num_samples)\nplt.ylabel('Proportion of %s OGs' % num_ogs_rsonly)\nplt.title('Presence/absence of %s RS-assoc. %s OGs in %s Tara surface samples' % (num_ogs_rsonly, species, num_samples))\nplt.axis(myaxis)\nplt.savefig('hist_%s_RSassoc_og_presence_absence_in_63_tara_srf.pdf' % species)\n\n# other (non-RS-assoc) OGs\nplt.figure(figsize=(8,6))\nsns.distplot(count_other, bins=num_samples+1)\nplt.axis([0, num_samples, 0, .4])\nplt.xlabel('Number of %s Tara surface samples found in' % num_samples)\nplt.ylabel('Proportion of %s OGs' % num_ogs_other)\nplt.title('Presence/absence of %s non-RS-assoc. %s OGs in %s Tara surface samples' % (num_ogs_other, species, num_samples))\nplt.axis(myaxis)\nplt.savefig('hist_%s_nonRSassoc_og_presence_absence_in_63_tara_srf.pdf' % species)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
otavio-r-filho/AIND-Deep_Learning_Notebooks
embeddings/Skip-Grams-Solution.ipynb
mit
[ "Skip-gram word2vec\nIn this notebook, I'll lead you through using TensorFlow to implement the word2vec algorithm using the skip-gram architecture. By implementing this, you'll learn about embedding words for use in natural language processing. This will come in handy when dealing with things like machine translation.\nReadings\nHere are the resources I used to build this notebook. I suggest reading these either beforehand or while you're working on this material.\n\nA really good conceptual overview of word2vec from Chris McCormick \nFirst word2vec paper from Mikolov et al.\nNIPS paper with improvements for word2vec also from Mikolov et al.\nAn implementation of word2vec from Thushan Ganegedara\nTensorFlow word2vec tutorial\n\nWord embeddings\nWhen you're dealing with words in text, you end up with tens of thousands of classes to predict, one for each word. Trying to one-hot encode these words is massively inefficient, you'll have one element set to 1 and the other 50,000 set to 0. The matrix multiplication going into the first hidden layer will have almost all of the resulting values be zero. This a huge waste of computation. \n\nTo solve this problem and greatly increase the efficiency of our networks, we use what are called embeddings. Embeddings are just a fully connected layer like you've seen before. We call this layer the embedding layer and the weights are embedding weights. We skip the multiplication into the embedding layer by instead directly grabbing the hidden layer values from the weight matrix. We can do this because the multiplication of a one-hot encoded vector with a matrix returns the row of the matrix corresponding the index of the \"on\" input unit.\n\nInstead of doing the matrix multiplication, we use the weight matrix as a lookup table. We encode the words as integers, for example \"heart\" is encoded as 958, \"mind\" as 18094. Then to get hidden layer values for \"heart\", you just take the 958th row of the embedding matrix. This process is called an embedding lookup and the number of hidden units is the embedding dimension.\n<img src='assets/tokenize_lookup.png' width=500>\nThere is nothing magical going on here. The embedding lookup table is just a weight matrix. The embedding layer is just a hidden layer. The lookup is just a shortcut for the matrix multiplication. The lookup table is trained just like any weight matrix as well.\nEmbeddings aren't only used for words of course. You can use them for any model where you have a massive number of classes. A particular type of model called Word2Vec uses the embedding layer to find vector representations of words that contain semantic meaning.\nWord2Vec\nThe word2vec algorithm finds much more efficient representations by finding vectors that represent the words. These vectors also contain semantic information about the words. Words that show up in similar contexts, such as \"black\", \"white\", and \"red\" will have vectors near each other. There are two architectures for implementing word2vec, CBOW (Continuous Bag-Of-Words) and Skip-gram.\n<img src=\"assets/word2vec_architectures.png\" width=\"500\">\nIn this implementation, we'll be using the skip-gram architecture because it performs better than CBOW. Here, we pass in a word and try to predict the words surrounding it in the text. 
In this way, we can train the network to learn representations for words that show up in similar contexts.\nFirst up, importing packages.", "import time\n\nimport numpy as np\nimport tensorflow as tf\n\nimport utils", "Load the text8 dataset, a file of cleaned up Wikipedia articles from Matt Mahoney. The next cell will download the data set to the data folder. Then you can extract it and delete the archive file to save storage space.", "from urllib.request import urlretrieve\nfrom os.path import isfile, isdir\nfrom tqdm import tqdm\nimport zipfile\n\ndataset_folder_path = 'data'\ndataset_filename = 'text8.zip'\ndataset_name = 'Text8 Dataset'\n\nclass DLProgress(tqdm):\n last_block = 0\n\n def hook(self, block_num=1, block_size=1, total_size=None):\n self.total = total_size\n self.update((block_num - self.last_block) * block_size)\n self.last_block = block_num\n\nif not isfile(dataset_filename):\n with DLProgress(unit='B', unit_scale=True, miniters=1, desc=dataset_name) as pbar:\n urlretrieve(\n 'http://mattmahoney.net/dc/text8.zip',\n dataset_filename,\n pbar.hook)\n\nif not isdir(dataset_folder_path):\n with zipfile.ZipFile(dataset_filename) as zip_ref:\n zip_ref.extractall(dataset_folder_path)\n \nwith open('data/text8') as f:\n text = f.read()", "Preprocessing\nHere I'm fixing up the text to make training easier. This comes from the utils module I wrote. The preprocess function coverts any punctuation into tokens, so a period is changed to &lt;PERIOD&gt;. In this data set, there aren't any periods, but it will help in other NLP problems. I'm also removing all words that show up five or fewer times in the dataset. This will greatly reduce issues due to noise in the data and improve the quality of the vector representations. If you want to write your own functions for this stuff, go for it.", "words = utils.preprocess(text)\nprint(words[:30])\n\nprint(\"Total words: {}\".format(len(words)))\nprint(\"Unique words: {}\".format(len(set(words))))", "And here I'm creating dictionaries to covert words to integers and backwards, integers to words. The integers are assigned in descending frequency order, so the most frequent word (\"the\") is given the integer 0 and the next most frequent is 1 and so on. The words are converted to integers and stored in the list int_words.", "vocab_to_int, int_to_vocab = utils.create_lookup_tables(words)\nint_words = [vocab_to_int[word] for word in words]", "Subsampling\nWords that show up often such as \"the\", \"of\", and \"for\" don't provide much context to the nearby words. If we discard some of them, we can remove some of the noise from our data and in return get faster training and better representations. This process is called subsampling by Mikolov. For each word $w_i$ in the training set, we'll discard it with probability given by \n$$ P(w_i) = 1 - \\sqrt{\\frac{t}{f(w_i)}} $$\nwhere $t$ is a threshold parameter and $f(w_i)$ is the frequency of word $w_i$ in the total dataset.\nI'm going to leave this up to you as an exercise. Check out my solution to see how I did it.\n\nExercise: Implement subsampling for the words in int_words. That is, go through int_words and discard each word given the probablility $P(w_i)$ shown above. Note that $P(w_i)$ is that probability that a word is discarded. 
Assign the subsampled data to train_words.", "from collections import Counter\nimport random\n\nthreshold = 1e-5\nword_counts = Counter(int_words)\ntotal_count = len(int_words)\nfreqs = {word: count/total_count for word, count in word_counts.items()}\np_drop = {word: 1 - np.sqrt(threshold/freqs[word]) for word in word_counts}\ntrain_words = [word for word in int_words if random.random() < (1 - p_drop[word])]", "Making batches\nNow that our data is in good shape, we need to get it into the proper form to pass it into our network. With the skip-gram architecture, for each word in the text, we want to grab all the words in a window around that word, with size $C$. \nFrom Mikolov et al.: \n\"Since the more distant words are usually less related to the current word than those close to it, we give less weight to the distant words by sampling less from those words in our training examples... If we choose $C = 5$, for each training word we will select randomly a number $R$ in range $< 1; C >$, and then use $R$ words from history and $R$ words from the future of the current word as correct labels.\"\n\nExercise: Implement a function get_target that receives a list of words, an index, and a window size, then returns a list of words in the window around the index. Make sure to use the algorithm described above, where you chose a random number of words to from the window.", "def get_target(words, idx, window_size=5):\n ''' Get a list of words in a window around an index. '''\n \n R = np.random.randint(1, window_size+1)\n start = idx - R if (idx - R) > 0 else 0\n stop = idx + R\n target_words = set(words[start:idx] + words[idx+1:stop+1])\n \n return list(target_words)", "Here's a function that returns batches for our network. The idea is that it grabs batch_size words from a words list. Then for each of those words, it gets the target words in the window. I haven't found a way to pass in a random number of target words and get it to work with the architecture, so I make one row per input-target pair. This is a generator function by the way, helps save memory.", "def get_batches(words, batch_size, window_size=5):\n ''' Create a generator of word batches as a tuple (inputs, targets) '''\n \n n_batches = len(words)//batch_size\n \n # only full batches\n words = words[:n_batches*batch_size]\n \n for idx in range(0, len(words), batch_size):\n x, y = [], []\n batch = words[idx:idx+batch_size]\n for ii in range(len(batch)):\n batch_x = batch[ii]\n batch_y = get_target(batch, ii, window_size)\n y.extend(batch_y)\n x.extend([batch_x]*len(batch_y))\n yield x, y\n ", "Building the graph\nFrom Chris McCormick's blog, we can see the general structure of our network.\n\nThe input words are passed in as one-hot encoded vectors. This will go into a hidden layer of linear units, then into a softmax layer. We'll use the softmax layer to make a prediction like normal.\nThe idea here is to train the hidden layer weight matrix to find efficient representations for our words. We can discard the softmax layer becuase we don't really care about making predictions with this network. We just want the embedding matrix so we can use it in other networks we build from the dataset.\nI'm going to have you build the graph in stages now. First off, creating the inputs and labels placeholders like normal.\n\nExercise: Assign inputs and labels using tf.placeholder. We're going to be passing in integers, so set the data types to tf.int32. The batches we're passing in will have varying sizes, so set the batch sizes to [None]. 
To make things work later, you'll need to set the second dimension of labels to None or 1.", "train_graph = tf.Graph()\nwith train_graph.as_default():\n inputs = tf.placeholder(tf.int32, [None], name='inputs')\n labels = tf.placeholder(tf.int32, [None, None], name='labels')", "Embedding\nThe embedding matrix has a size of the number of words by the number of units in the hidden layer. So, if you have 10,000 words and 300 hidden units, the matrix will have size $10,000 \\times 300$. Remember that we're using tokenized data for our inputs, usually as integers, where the number of tokens is the number of words in our vocabulary.\n\nExercise: Tensorflow provides a convenient function tf.nn.embedding_lookup that does this lookup for us. You pass in the embedding matrix and a tensor of integers, then it returns rows in the matrix corresponding to those integers. Below, set the number of embedding features you'll use (200 is a good start), create the embedding matrix variable, and use tf.nn.embedding_lookup to get the embedding tensors. For the embedding matrix, I suggest you initialize it with a uniform random numbers between -1 and 1 using tf.random_uniform.", "n_vocab = len(int_to_vocab)\nn_embedding = 200 # Number of embedding features \nwith train_graph.as_default():\n embedding = tf.Variable(tf.random_uniform((n_vocab, n_embedding), -1, 1))\n embed = tf.nn.embedding_lookup(embedding, inputs)", "Negative sampling\nFor every example we give the network, we train it using the output from the softmax layer. That means for each input, we're making very small changes to millions of weights even though we only have one true example. This makes training the network very inefficient. We can approximate the loss from the softmax layer by only updating a small subset of all the weights at once. We'll update the weights for the correct label, but only a small number of incorrect labels. This is called \"negative sampling\". Tensorflow has a convenient function to do this, tf.nn.sampled_softmax_loss.\n\nExercise: Below, create weights and biases for the softmax layer. Then, use tf.nn.sampled_softmax_loss to calculate the loss. Be sure to read the documentation to figure out how it works.", "# Number of negative labels to sample\nn_sampled = 100\nwith train_graph.as_default():\n softmax_w = tf.Variable(tf.truncated_normal((n_vocab, n_embedding), stddev=0.1))\n softmax_b = tf.Variable(tf.zeros(n_vocab))\n \n # Calculate the loss using negative sampling\n loss = tf.nn.sampled_softmax_loss(softmax_w, softmax_b, \n labels, embed,\n n_sampled, n_vocab)\n \n cost = tf.reduce_mean(loss)\n optimizer = tf.train.AdamOptimizer().minimize(cost)", "Validation\nThis code is from Thushan Ganegedara's implementation. Here we're going to choose a few common words and few uncommon words. Then, we'll print out the closest words to them. It's a nice way to check that our embedding table is grouping together words with similar semantic meanings.", "with train_graph.as_default():\n ## From Thushan Ganegedara's implementation\n valid_size = 16 # Random set of words to evaluate similarity on.\n valid_window = 100\n # pick 8 samples from (0,100) and (1000,1100) each ranges. 
lower id implies more frequent \n valid_examples = np.array(random.sample(range(valid_window), valid_size//2))\n valid_examples = np.append(valid_examples, \n random.sample(range(1000,1000+valid_window), valid_size//2))\n\n valid_dataset = tf.constant(valid_examples, dtype=tf.int32)\n \n # We use the cosine distance:\n norm = tf.sqrt(tf.reduce_sum(tf.square(embedding), 1, keep_dims=True))\n normalized_embedding = embedding / norm\n valid_embedding = tf.nn.embedding_lookup(normalized_embedding, valid_dataset)\n similarity = tf.matmul(valid_embedding, tf.transpose(normalized_embedding))\n\n# If the checkpoints directory doesn't exist:\n!mkdir checkpoints\n\nepochs = 10\nbatch_size = 1000\nwindow_size = 10\n\nwith train_graph.as_default():\n saver = tf.train.Saver()\n\nwith tf.Session(graph=train_graph) as sess:\n iteration = 1\n loss = 0\n sess.run(tf.global_variables_initializer())\n\n for e in range(1, epochs+1):\n batches = get_batches(train_words, batch_size, window_size)\n start = time.time()\n for x, y in batches:\n \n feed = {inputs: x,\n labels: np.array(y)[:, None]}\n train_loss, _ = sess.run([cost, optimizer], feed_dict=feed)\n \n loss += train_loss\n \n if iteration % 100 == 0: \n end = time.time()\n print(\"Epoch {}/{}\".format(e, epochs),\n \"Iteration: {}\".format(iteration),\n \"Avg. Training loss: {:.4f}\".format(loss/100),\n \"{:.4f} sec/batch\".format((end-start)/100))\n loss = 0\n start = time.time()\n \n if iteration % 1000 == 0:\n # note that this is expensive (~20% slowdown if computed every 500 steps)\n sim = similarity.eval()\n for i in range(valid_size):\n valid_word = int_to_vocab[valid_examples[i]]\n top_k = 8 # number of nearest neighbors\n nearest = (-sim[i, :]).argsort()[1:top_k+1]\n log = 'Nearest to %s:' % valid_word\n for k in range(top_k):\n close_word = int_to_vocab[nearest[k]]\n log = '%s %s,' % (log, close_word)\n print(log)\n \n iteration += 1\n save_path = saver.save(sess, \"checkpoints/text8.ckpt\")\n embed_mat = sess.run(normalized_embedding)", "Restore the trained network if you need to:", "with train_graph.as_default():\n saver = tf.train.Saver()\n\nwith tf.Session(graph=train_graph) as sess:\n saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))\n embed_mat = sess.run(embedding)", "Visualizing the word vectors\nBelow we'll use T-SNE to visualize how our high-dimensional word vectors cluster together. T-SNE is used to project these vectors into two dimensions while preserving local stucture. Check out this post from Christopher Olah to learn more about T-SNE and other ways to visualize high-dimensional data.", "%matplotlib inline\n%config InlineBackend.figure_format = 'retina'\n\nimport matplotlib.pyplot as plt\nfrom sklearn.manifold import TSNE\n\nviz_words = 500\ntsne = TSNE()\nembed_tsne = tsne.fit_transform(embed_mat[:viz_words, :])\n\nfig, ax = plt.subplots(figsize=(14, 14))\nfor idx in range(viz_words):\n plt.scatter(*embed_tsne[idx, :], color='steelblue')\n plt.annotate(int_to_vocab[idx], (embed_tsne[idx, 0], embed_tsne[idx, 1]), alpha=0.7)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
yw-fang/readingnotes
machine-learning/Hitchhiker-guide-2016/ch05.ipynb
apache-2.0
[ "Reading Great Code\nCommom Features of great code\n每个定义的函数包含的代码大多不超过20行;含有很多空行;对于交互型代码(例如Requests和Flask等),都有大量的doctoring或者comment,综合下来五分之一的代码内容是可以作为文档使用的。不过像HowDoI这种直接的不是用语交互的代码,并没有必要去包含大量的comment。\n下面,我们就跟着学习下如何读不同风格的代码。\nHowDoI\nHowDoI,code代码总数只有300行左右,可以作为阅读代码的首选。\nReading a single-file script\nscirpt通常有一个清晰的starting point和一个清晰的ending point,以及其他定义清晰的操作。这使得sciprt会比一般的提供API的库\n更容易follow。\n关于howdoi,可以Google下,然后从github下载到。\n安装: pip install --editable . \nunit test: python test_howdoi.py\nRead howdoi's documentation\nHowDoI的文档是README.rst文件,从中可以看出 HowDoI是一个很小的命令行应用,它可以允许使用者从互联网上获取关于编程问题的答案。", "!which howdoi\n\n!howdoi --help #注意我这里是在jupyter notebook里面直接使用的,所以需要加感叹号。如果是在terminal上,不需要加叹号。", "通过帮助文档,我们可以了解到HowDoI大概的工作模式以及它的一些功能,例如可以colorize the output,get multiple answers,\nkeep answers in a cache that can be clared等。\nUse HowDoI", "!howdoi --num-answers 3 python lambda function list comprehension\n\n!howdoi --num-answer 3 python numpy array create", "Read HowDoI's code\n在howdoi的目录中,除了__pycache__之外其实只有两个文件,即__init__.py 和 howdoi.py。\n前者只有一行,包含了版本信息;而后者则是我们即将精读的代码。", "!ls /Users/ywfang/FANG/git/howdoi_ywfang/howdoi", "通过浏览howdoi.py,我们发现这里面定义了很多新的函数,而且每个函数都会在之后的函数中被引用,这是的我们可以方便follow。\n其中的main function,即 command_line_runner()接近于 howdoi.py的底部", "!sed -n '70,120p' /Users/ywfang/FANG/git/howdoi_ywfang/howdoi/howdoi.py" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
suvarchal/JyIDV
examples/CreateFunctionFormulas.ipynb
mit
[ "Write a Jython function for IDV and export as IDV Formula in GUI\n\nDefine a function to calculate Moist Static Energy from Temperature, Specific Humidity and Geopotential Height.", "def moistStaticEnergy(T,Q,GZ):\n \"\"\" Calculates Moist Static Energy with Temperature, Specific Humidity and Geopotential Height. \"\"\"\n from ucar.visad.quantities import SpecificHeatCapacityOfDryAirAtConstantPressure,LatentHeatOfEvaporation\n cp=SpecificHeatCapacityOfDryAirAtConstantPressure.newReal()\n L=LatentHeatOfEvaporation.newReal()\n return cp*T+L*Q+Z", "Above function was created for use in this session, it will not be available for IDV in next session so let us save it to the IDV Jython library.", "saveJython(moistStaticEnergy)", "Create a IDV formula, once created it will be in the list of formulas in IDV. The arguments to saveFormula are (formulaid,description,functionastring,formula categories). formula categories can be a list of categories or just a single category specified by a string.", "saveFormula(\"Moist Static Energy\",\"Moist Static Energy from T, Q, GZ\",\"moistStaticEnergy(T,Q,GZ)\",[\"Grids\",\"Grids-MapesCollection\"])", "Check if the formula was created in IDV GUI . At anytime to show a IDV window from notebook use function showIdv(). Currently some displays cannot be made when using GUI from notebook, will be implemented in future.", "showIdv()" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
KshitijT/fundamentals_of_interferometry
6_Deconvolution/6_5_source_finding.ipynb
gpl-2.0
[ "Outline\nGlossary\n6. Deconvolution in Imaging \nPrevious: 6.3 Residuals and Image Quality \nNext: 6.x Further Reading and References", "import numpy as np\nimport matplotlib.pyplot as plt\n%matplotlib inline\nfrom IPython.display import HTML \nHTML('../style/course.css') #apply general CSS\n\nimport matplotlib\nfrom scipy import optimize\nimport astropy.io.fits\n\nmatplotlib.rcParams.update({'font.size': 18})\nmatplotlib.rcParams.update({'figure.figsize': [12,8]} )", "6.5 Source Finding\nIn radio astronomy, source finding is the process through which the attributes of radio sources -- such as flux density and mophorlogy -- are measured from data. In this section we will only cover source finding in the image plane.\nSource finding techniques usually involve four steps, i) charecterizing the noise (or background estimation), ii) thresholding the data based on knowledge of the noise, iii) finding regions in the thresholded image with \"similar\" neighbouring pixels (this is that same as blob detection in image processing), and iv) parameterizing these 'blobs' through a function (usually a 2D Gaussian). The source attributes are then estimated from the parameterization of the blobs.\n6.5.1 Noise Charecterization\nAs mentioned before, the radio data we process with source finders is noisy. To charecterize this noise we need to make a few assumptions about its nature, namely we assume that the niose results from some stochastic process and that it can be described by a normal distribution\n$$ G(x \\, | \\, \\mu,\\sigma^2) = \\frac{1}{\\sigma \\sqrt{2\\pi}}\\text{exp}\\left( \\frac{-(x-\\mu)^2}{2\\sigma^2}\\right) $$\nwhere, $\\mu$ is the mean (or expected value) of the variable $x$, and $\\sigma^2$ is the variance of the distribution; $\\sigma$ is the standard deviation. Hence, the noise can be parameterized through the mean and the standard deviation. Let us illustrate this with an example. Bellow is a noise image from a MeerKAT simulation, along with a histogram of of the pixels (in log space).", "noise_image = \"../data/fits/noise_image.fits\"\nwith astropy.io.fits.open(noise_image) as hdu:\n data = hdu[0].data[0,0,...]\n\nfig, (image, hist) = plt.subplots(1, 2, figsize=(18,6))\nhistogram, bins = np.histogram(data.flatten(), bins=401)\n\ndmin = data.min()\ndmax = data.max()\nx = np.linspace(dmin, dmax, 401)\n\nim = image.imshow(data)\n\nmean = data.mean()\nsigma = data.std()\npeak = histogram.max()\n\ngauss = lambda x, amp, mean, sigma: amp*np.exp( -(x-mean)**2/(2*sigma**2))\n\nfitdata = gauss(x, peak, mean, sigma)\n\nplt.plot(x, fitdata)\nplt.plot(x, histogram, \"o\")\nplt.yscale('log')\nplt.ylim(1)", "Now, in reality the noise has to measured in the presence of astrophysical emission. Furthermore, radio images are also contaminated by various instrumental effects which can manifest as spurious emission in the image domain. All these factors make it difficult to charercterize the noise in a synthesized image. Since the noise generally dominates the images, the mean and standard deviation of the entire image are still fairly good approximations of the noise. 
Let us now insert a few sources (image and flux distribution shown below) in the noise image from earlier and then try to estimate the noise.", "noise_image = \"../data/fits/star_model_image.fits\"\nwith astropy.io.fits.open(noise_image) as hdu:\n data = hdu[0].data[0,0,...]\n\nfig, (image, hist) = plt.subplots(1, 2, figsize=(18,6))\nhistogram, bins = np.histogram(data.flatten(), bins=101)\n\n\ndmin = data.min()\ndmax = data.max()\nx = np.linspace(dmin, dmax, 101)\n\nim = image.imshow(data)\n\nmean = data.mean()\nsigma_std = data.std()\n\npeak = histogram.max()\n\ngauss = lambda x, amp, mean, sigma: amp*np.exp( -(x-mean)**2/(2*sigma**2))\n\nfitdata_std = gauss(x, peak, mean, sigma_std)\n\nplt.plot(x, fitdata_std, label=\"STD DEV\")\n\nplt.plot(x, histogram, \"o\", label=\"Data\")\nplt.legend(loc=1)\n\nplt.yscale('log')\nplt.ylim(1)", "The pixel statistics of the image are no longer Gaussian, as is apparent from the long tail of the flux distribution. Constructing a Gaussian model from the mean and standard deviation results in a poor fit (blue line in the figure on the right). A better method to estimate the variance is to measure the dispersion of the data points about the mean (or median); this is the mean/median absolute deviation (MAD) technique. We will refer to the median absolute deviation as the MAD Median, and the mean absolute deviation as the MAD Mean. A synthesis-imaging-specific method to estimate the variance of the noise is to only consider the negative pixels. This works under the assumption that all the astrophysical emission (at least in Stokes I) has a positive flux density. The figure below shows noise estimates from the methods mentioned above.", "mean = data.mean()\nsigma_std = data.std()\nsigma_neg = data[data<0].std() * 2\nmad_mean = lambda a: np.mean( abs(a - np.mean(a) ))\nsigma_mad_median = np.median( abs(data - np.median(data) ))\n\nsigma_mad_mean = mad_mean(data)\n\npeak = histogram.max()\n\ngauss = lambda x, amp, mean, sigma: amp*np.exp( -(x-mean)**2/(2*sigma**2))\n\nfitdata_std = gauss(x, peak, mean, sigma_std)\nfitdata_mad_median = gauss(x, peak, mean, sigma_mad_median)\nfitdata_mad_mean = gauss(x, peak, mean, sigma_mad_mean)\nfitdata_neg = gauss(x, peak, mean, sigma_neg)\n\nplt.plot(x, fitdata_std, label=\"STD DEV\")\nplt.plot(x, fitdata_mad_median, label=\"MAD Median\")\nplt.plot(x, fitdata_mad_mean, label=\"MAD Mean\")\nplt.plot(x, fitdata_neg, label=\"Negative STD DEV\")\nplt.plot(x, histogram, \"o\", label=\"Data\")\nplt.legend(loc=1)\n\nplt.yscale('log')\nplt.ylim(1)", "The MAD and negative value standard deviation methods produce a better solution to the noise distribution in the presence of sources.\n6.5.2 Blob Detection and Characterization\nOnce the noise has been estimated, the next step is to find and characterize sources in the image. Generically in image processing this is known as blob detection. In a simple case during synthesis imaging we define a blob as a group of contiguous pixels whose spatial intensity profile can be modelled by a 2D Gaussian function. Of course, more advanced functions could be used. Generally, we would like to group together nearby pixels, such as spatially 'close' sky model components from deconvolution, into a single complex source. Our interferometric array has finite spatial resolution, so we can further constrain our blobs not to be significantly smaller than the image resolution. We define two further constraints on a blob: the peak and boundary thresholds.
The peak threshold, defined as\n$$ \n \\sigma_\\text{peak} = n * \\sigma,\n$$\nis the minimum intensity the maximum pixel in a blob must have relative to the image noise. That is, all blobs with peak pixel lower than $\\sigma_\\text{peak}$ will be excluded from being considered sources. And the boundary threshold\n$$\n \\sigma_\\text{boundary} = m * \\sigma,\n$$\ndefines the boundary of a blob, $m$ and $n$ are natural numbers with $m$ < $n$. \n6.5.2.1 A simple source finder\nWe are now in a position to write a simple source finder. To do so we implement the following steps: \n\nEstimate the image noise and set peak and boundary threshold values.\nBlank out all pixel values below the boundary value.\nFind Peaks in image.\nFor each peak, fit a 2D Gaussian and subtract the Gaussian fit from the image.\nRepeat until the image has no pixels above the detection threshold.", "def gauss2D(x, y, amp, mean_x, mean_y, sigma_x, sigma_y):\n \"\"\" Generate a 2D Gaussian image\"\"\"\n gx = -(x - mean_x)**2/(2*sigma_x**2)\n gy = -(y - mean_y)**2/(2*sigma_y**2)\n \n return amp * np.exp( gx + gy)\n\ndef err(p, xx, yy, data):\n \"\"\"2D Gaussian error function\"\"\"\n return gauss2D(xx.flatten(), yy.flatten(), *p) - data.flatten()\n\ndef fit_gaussian(data, psf_pix):\n \"\"\"Fit a gaussian to a 2D data set\"\"\"\n \n width = data.shape[0]\n mean_x, mean_y = width/2, width/2\n amp = data.max()\n sigma_x, sigma_y = psf_pix, psf_pix\n params0 = amp, mean_x, mean_y, sigma_x,sigma_y\n \n npix_x, npix_y = data.shape\n x = np.linspace(0, npix_x, npix_x)\n y = np.linspace(0, npix_y, npix_y)\n xx, yy = np.meshgrid(x, y)\n \n \n params, pcov, infoDict, errmsg, sucess = optimize.leastsq(err, \n params0, args=(xx.flatten(), yy.flatten(),\n data.flatten()), full_output=1)\n \n \n perr = abs(np.diagonal(pcov))**0.5\n model = gauss2D(xx, yy, *params)\n \n return params, perr, model\n\ndef source_finder(data, peak, boundary, width, psf_pix):\n \"\"\"A simple source finding tool\"\"\"\n \n # first we make an estimate of the noise. 
Let's use the MAD mean\n sigma_noise = mad_mean(data)\n\n # Use noise estimate to set peak and boundary thresholds\n peak_sigma = sigma_noise*peak\n boundary_sigma = sigma_noise*boundary\n \n # Pad the image to avoid hitting the edge of the image\n pad = width*2\n residual = np.pad(data, pad_width=((pad, pad), (pad, pad)), mode=\"constant\")\n model = np.zeros(residual.shape)\n \n # Create slice to remove the padding later on\n imslice = [slice(pad, -pad), slice(pad,-pad)]\n \n catalog = [] \n \n # We will need to convert the fitted sigma values to a width\n FWHM = 2*np.sqrt(2*np.log(2))\n \n while True:\n \n # Check if the brightest pixel is at least as bright as the sigma_peak\n # Otherwise stop.\n max_pix = residual.max()\n if max_pix<peak_sigma:\n break\n \n xpix, ypix = np.where(residual==max_pix)\n xpix = xpix[0] # Get first element\n ypix = ypix[0] # Get first element\n \n # Make slice that selects box of size width centred around the brightest pixel\n subim_slice = [ slice(xpix-width/2, xpix+width/2),\n slice(ypix-width/2, ypix+width/2) ]\n \n # apply slice to get subimage\n subimage = residual[subim_slice]\n \n \n # blank out pixels below the boundary threshold\n mask = subimage > boundary_sigma\n \n # Fit gaussian to subimage\n params, perr, _model = fit_gaussian(subimage*mask, psf_pix)\n \n amp, mean_x, mean_y, sigma_x,sigma_y = params\n amp_err, mean_x_err, mean_y_err, sigma_x_err, sigma_y_err = perr\n \n # Remember to reposition the source in original image\n pos_x = xpix + (width/2 - mean_x) - pad\n pos_y = ypix + (width/2 - mean_y) - pad\n \n # Convert sigma values to FWHM lengths\n size_x = FWHM*sigma_x\n size_y = FWHM*sigma_y\n \n # Add modelled source to model image\n model[subim_slice] = _model\n \n # create new source\n source = (\n amp,\n pos_x,\n pos_y,\n size_x,\n size_y\n )\n \n # add source to catalogue\n catalog.append(source)\n \n # update residual image\n residual[subim_slice] -= _model \n \n return catalog, model[imslice], residual[imslice], sigma_noise\n", "Using this source finder we can produce a sky model which contains all 17 sources in our test image from earlier in the section.", "test_image = \"../data/fits/star_model_image.fits\"\nwith astropy.io.fits.open(test_image) as hdu:\n data = hdu[0].data[0,0,...]\n \ncatalog, model, residual, sigma_noise = source_finder(data, 5, 2, 50, 10)\n\nprint \"Peak_Flux Pix_x Pix_y Size_x Size_y\"\nfor source in catalog:\n print \" %.4f %.1f %.1f %.2f %.2f\"%source\n\nfig, (img, mod, res) = plt.subplots(1, 3, figsize=(24,12))\nvmin, vmax = sigma_noise, data.max()\n\nim = img.imshow(data, vmin=vmin, vmax=vmax)\nimg.set_title(\"Data\")\n\nmod.imshow(model, vmin=vmin, vmax=vmax)\nmod.set_title(\"Model\")\n\nres.imshow(residual, vmin=vmin, vmax=vmax)\nres.set_title(\"Residual\")\n\ncbar_ax = fig.add_axes([0.92, 0.25, 0.02, 0.5])\nfig.colorbar(im, cax=cbar_ax, format=\"%.2g\")", "The flux and position of each source vary from the true sky model due to the image noise and its distribution. The source finding algorithm above is a heuristic example. It has two major flaws: i) it is not capable of handling a situation where two or more sources are close enough to each other that they would fall within the same sub-image from which the source parameters are estimated, and ii) the noise in radio images is often non-uniform and 'local' noise estimates are required in order to set thresholds.
More advanced source finders are used to work on specific source types such as extended objects and line spectra.\n\nNext: 6.x Further Reading and References\n<div class=warn><b>Future Additions:</b></div>\n\n\ndescribe MAD and negative standard deviation methods\nfigure titles and labels\ndiscussion on source finders commonly in use\nexample: change the background noise or threshold values\nexample: kat-7 standard image after deconvolution\nexample: complex extended source\nexample: location-dependent noise variations" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
fccoelho/Curso_Blockchain
lectures/intro_cripto.ipynb
lgpl-3.0
[ "Introdução à criptografia e às funções Hash\nAs criptomoedas, como o Bitcoin, utilizam-se de tecnologias criptográficas como criptografia de chave publica,e funções de Hash. Neste notebook vamos nos familiarizar com estes conceitos que nos serão úteis em nosso estudo da bitcoin e outras criptomoedas.\nFunções de Hash Criptográfico\nAs funções de Hash criptográfico são o componentes mais fundamental da maioria das blockchains pois é a \"cola\" que garante a coesão, correção, imutabilidade e outras características fundamentais das blockchains.\nUma função de Hash é uma função que apresenta algumas características básicas:\n\né fácil de calcular para qualquer tipo de dado (baixo custo computacional)\nÉ impossível ou extremamente difícil de inverter, isto é, de encontrar o input correspondente a um hash.\nÉ extremamente improvável que dois inputs diferentes gerem o mesmo valor de hash.\n\n<img src=\"https://upload.wikimedia.org/wikipedia/commons/thumb/2/2b/Cryptographic_Hash_Function.svg/740px-Cryptographic_Hash_Function.svg.png\" width=\"30%\"/>\nA biblioteca padrão do Python nos oferece uma biblioteca com implementações das principais funções de hash, a Hashlib.", "import hashlib\nhashlib.algorithms_available", "Criptografia com curvas elípticas\nA Bitcoin se utiliza de curvas elípticas para suas necessidades criptográficas. Mais precisamente, utiliza o algoritmo de assinatura digital por curvas elipticas (ECDSA). A ECDSA envolve três componentes principais: uma chave pública, uma chave privada e assinatura.\nA Bitcoin usa uma curva elíptica específica chamada secp256k1. A função em si parece inofensiva: $$y^2=x^3+7$$ onde $4a^3 +27b^2 \\neq 0$ (para excluir curvas singulares.\n$$\\begin{array}{rcl}\n \\left{(x, y) \\in \\mathbb{R}^2 \\right. & \\left. | \\right. & \\left. y^2 = x^3 + ax + b, \\right. \\\n & & \\left. 4a^3 + 27b^2 \\ne 0\\right}\\ \\cup\\ \\left{0\\right}\n\\end{array}$$\n<img src=\"http://andrea.corbellini.name/images/curves.png\" width=\"30%\" align=\"right\"/>\nPorém, em aplicações criptográficas, esta função não é definida sobre os números reais, mas sobre um campo de números primos: mais precisamente ${\\cal Z}$ modulo $2^{256} - 2^{32} - 977$. \n\\begin{array}{rcl}\n \\left{(x, y) \\in (\\mathbb{F}_p)^2 \\right. & \\left. | \\right. & \\left. y^2 \\equiv x^3 + ax + b \\pmod{p}, \\right. \\\n & & \\left. 4a^3 + 27b^2 \\not\\equiv 0 \\pmod{p}\\right}\\ \\cup\\ \\left{0\\right}\n\\end{array}\nPara um maior aprofundamento sobre a utilização de curvas elítpicas em criptografia leia este material.\nEncriptando textos\nA forma mais simples de criptografia é a criptografia simétrica, na qual se utilizando de uma chave gerada aleatóriamente, converte um texto puro em um texto encriptado. então de posse da mesma chave é possível inverter a operação, recuperando o texto original. Quando falamos em texto aqui estamos falando apenas de uma aplicação possível de criptografia. Na verdade o que será aplicado aqui para textos, pode ser aplicado para qualquer sequencia de bytes, ou seja para qualquer objeto digital.", "from Crypto.Cipher import DES3\nfrom Crypto import Random", "Neste exemplo vamos usar o algoritmo conhecido como \"triplo DES\" para encriptar e desencriptar um texto. Para este exemplo a chave deve ter um comprimento múltiplo de 8 bytes.", "chave = b\"chave secreta um\"\nsal = Random.get_random_bytes(8)\ndes3 = DES3.new(chave, DES3.MODE_CFB, sal)", "Note que adicionamos sal à ao nosso encriptador. 
o \"sal\" é uma sequência aleatória de bytes feitar para dificultar ataques.", "texto = b\"Este e um texto super secreto que precisa ser protegido a qualquer custo de olhares nao autorizados.\"\nenc = des3.encrypt(texto)\nenc\n\ndes3 = DES3.new(chave, DES3.MODE_CFB, sal)\ndes3.decrypt(enc)", "Um dos problemas com esta metodologia de encriptação, é que se você deseja enviar este arquivo encriptado a um amigo, terá que encontrar uma forma segura de lhe transmitir a chave, caso contrário um inimigo mal intencionado poderá desencriptar sua mensagem de posse da chave. Para resolver este problema introduzimos um novo métodos de encriptação:\nCriptografia de chave pública\nNesta metodologia temos duas chaves: uma pública e outra privada.", "from Crypto.PublicKey import RSA\nfrom Crypto.Random import get_random_bytes\nfrom Crypto.Cipher import AES, PKCS1_OAEP", "Vamos criar uma chave privada, e também encriptá-la, no caso de termos que mantê-la em algum lugar onde possa ser observada por um terceiro.", "senha = \"minha senha super secreta.\"\nkey = RSA.generate(2048) # Chave privada\nprint(key.exportKey())\nchave_privada_encryptada = key.exportKey(passphrase=senha, pkcs=8, protection=\"scryptAndAES128-CBC\")\n\npublica = key.publickey()\npublica.exportKey()", "De posse da senha podemos recuperar as duas chaves.", "key2 = RSA.import_key(chave_privada_encryptada, passphrase=senha)\nprint(key2==key)\nkey.publickey().exportKey() == key2.publickey().exportKey()", "Agora podemos encriptar algum documento qualquer. Para máxima segurança, vamos usar o protocolo PKCS#1 OAEP com a algoritmo RSA para encriptar assimetricamente uma chave de sessão AES. Esta chave de sessão pode ser usada para encriptar os dados. Vamos usar o modo EAX para permitir a detecção de modificações não autorizadas.", "data = \"Minha senha do banco é 123456\".encode('utf8')\nchave_de_sessão = get_random_bytes(16)\n\n# Encripta a chave de sessão com a a chave RSA pública.\ncifra_rsa = PKCS1_OAEP.new(publica)\nchave_de_sessão_enc = cifra_rsa.encrypt(chave_de_sessão)\n\n# Encrypta os dados.\ncifra_aes = AES.new(chave_de_sessão, AES.MODE_EAX)\ntexto_cifrado, tag = cifra_aes.encrypt_and_digest(data)\ntexto_cifrado", "O destinatário da mensagem pode então desencriptar a mensagem usando a chave privada para desencriptar a chave da sessão, e com esta a mensagem.", "# Desencripta a chave de sessão com a chave privada RSA.\ncifra_rsa = PKCS1_OAEP.new(key)\nchave_de_sessão = cifra_rsa.decrypt(chave_de_sessão_enc)\n\n# Desencripta os dados com a chave de sessão AES\ncifra_aes = AES.new(chave_de_sessão, AES.MODE_EAX, cifra_aes.nonce)\ndata2 = cifra_aes.decrypt_and_verify(texto_cifrado, tag)\nprint(data.decode(\"utf-8\"))" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
mne-tools/mne-tools.github.io
0.21/_downloads/a1ab4842a5aa341564b4fa0a6bf60065/plot_dipole_orientations.ipynb
bsd-3-clause
[ "%matplotlib inline", "The role of dipole orientations in distributed source localization\nWhen performing source localization in a distributed manner\n(MNE/dSPM/sLORETA/eLORETA),\nthe source space is defined as a grid of dipoles that spans a large portion of\nthe cortex. These dipoles have both a position and an orientation. In this\ntutorial, we will look at the various options available to restrict the\norientation of the dipoles and the impact on the resulting source estimate.\nSee inverse_orientation_constraints for related information.\nLoading data\nLoad everything we need to perform source localization on the sample dataset.", "import mne\nimport numpy as np\nfrom mne.datasets import sample\nfrom mne.minimum_norm import make_inverse_operator, apply_inverse\n\ndata_path = sample.data_path()\nevokeds = mne.read_evokeds(data_path + '/MEG/sample/sample_audvis-ave.fif')\nleft_auditory = evokeds[0].apply_baseline()\nfwd = mne.read_forward_solution(\n data_path + '/MEG/sample/sample_audvis-meg-eeg-oct-6-fwd.fif')\nmne.convert_forward_solution(fwd, surf_ori=True, copy=False)\nnoise_cov = mne.read_cov(data_path + '/MEG/sample/sample_audvis-cov.fif')\nsubject = 'sample'\nsubjects_dir = data_path + '/subjects'\ntrans_fname = data_path + '/MEG/sample/sample_audvis_raw-trans.fif'", "The source space\nLet's start by examining the source space as constructed by the\n:func:mne.setup_source_space function. Dipoles are placed along fixed\nintervals on the cortex, determined by the spacing parameter. The source\nspace does not define the orientation for these dipoles.", "lh = fwd['src'][0] # Visualize the left hemisphere\nverts = lh['rr'] # The vertices of the source space\ntris = lh['tris'] # Groups of three vertices that form triangles\ndip_pos = lh['rr'][lh['vertno']] # The position of the dipoles\ndip_ori = lh['nn'][lh['vertno']]\ndip_len = len(dip_pos)\ndip_times = [0]\nwhite = (1.0, 1.0, 1.0) # RGB values for a white color\n\nactual_amp = np.ones(dip_len) # misc amp to create Dipole instance\nactual_gof = np.ones(dip_len) # misc GOF to create Dipole instance\ndipoles = mne.Dipole(dip_times, dip_pos, actual_amp, dip_ori, actual_gof)\ntrans = mne.read_trans(trans_fname)\n\nfig = mne.viz.create_3d_figure(size=(600, 400), bgcolor=white)\ncoord_frame = 'mri'\n\n# Plot the cortex\nfig = mne.viz.plot_alignment(subject=subject, subjects_dir=subjects_dir,\n trans=trans, surfaces='white',\n coord_frame=coord_frame, fig=fig)\n\n# Mark the position of the dipoles with small red dots\nfig = mne.viz.plot_dipole_locations(dipoles=dipoles, trans=trans,\n mode='sphere', subject=subject,\n subjects_dir=subjects_dir,\n coord_frame=coord_frame,\n scale=7e-4, fig=fig)\n\nmne.viz.set_3d_view(figure=fig, azimuth=180, distance=0.25)", "Fixed dipole orientations\nWhile the source space defines the position of the dipoles, the inverse\noperator defines the possible orientations of them. One of the options is to\nassign a fixed orientation. Since the neural currents from which MEG and EEG\nsignals originate flows mostly perpendicular to the cortex [1]_, restricting\nthe orientation of the dipoles accordingly places a useful restriction on the\nsource estimate.\nBy specifying fixed=True when calling\n:func:mne.minimum_norm.make_inverse_operator, the dipole orientations are\nfixed to be orthogonal to the surface of the cortex, pointing outwards. 
Let's\nvisualize this:", "fig = mne.viz.create_3d_figure(size=(600, 400))\n\n# Plot the cortex\nfig = mne.viz.plot_alignment(subject=subject, subjects_dir=subjects_dir,\n trans=trans,\n surfaces='white', coord_frame='head', fig=fig)\n\n# Show the dipoles as arrows pointing along the surface normal\nfig = mne.viz.plot_dipole_locations(dipoles=dipoles, trans=trans,\n mode='arrow', subject=subject,\n subjects_dir=subjects_dir,\n coord_frame='head',\n scale=7e-4, fig=fig)\n\nmne.viz.set_3d_view(figure=fig, azimuth=180, distance=0.1)", "Restricting the dipole orientations in this manner leads to the following\nsource estimate for the sample data:", "# Compute the source estimate for the 'left - auditory' condition in the sample\n# dataset.\ninv = make_inverse_operator(left_auditory.info, fwd, noise_cov, fixed=True)\nstc = apply_inverse(left_auditory, inv, pick_ori=None)\n\n# Visualize it at the moment of peak activity.\n_, time_max = stc.get_peak(hemi='lh')\nbrain_fixed = stc.plot(surface='white', subjects_dir=subjects_dir,\n initial_time=time_max, time_unit='s', size=(600, 400))", "The direction of the estimated current is now restricted to two directions:\ninward and outward. In the plot, blue areas indicate current flowing inwards\nand red areas indicate current flowing outwards. Given the curvature of the\ncortex, groups of dipoles tend to point in the same direction: the direction\nof the electromagnetic field picked up by the sensors.\nLoose dipole orientations\nForcing the source dipoles to be strictly orthogonal to the cortex makes the\nsource estimate sensitive to the spacing of the dipoles along the cortex,\nsince the curvature of the cortex changes within each ~10 square mm patch.\nFurthermore, misalignment of the MEG/EEG and MRI coordinate frames is more\ncritical when the source dipole orientations are strictly constrained [2]_.\nTo lift the restriction on the orientation of the dipoles, the inverse\noperator has the ability to place not one, but three dipoles at each\nlocation defined by the source space. These three dipoles are placed\northogonally to form a Cartesian coordinate system. 
Let's visualize this:", "fig = mne.viz.create_3d_figure(size=(600, 400))\n\n# Plot the cortex\nfig = mne.viz.plot_alignment(subject=subject, subjects_dir=subjects_dir,\n trans=trans,\n surfaces='white', coord_frame='head', fig=fig)\n\n# Show the three dipoles defined at each location in the source space\nfig = mne.viz.plot_alignment(subject=subject, subjects_dir=subjects_dir,\n trans=trans, fwd=fwd,\n surfaces='white', coord_frame='head', fig=fig)\n\nmne.viz.set_3d_view(figure=fig, azimuth=180, distance=0.1)", "When computing the source estimate, the activity at each of the three dipoles\nis collapsed into the XYZ components of a single vector, which leads to the\nfollowing source estimate for the sample data:", "# Make an inverse operator with loose dipole orientations\ninv = make_inverse_operator(left_auditory.info, fwd, noise_cov, fixed=False,\n loose=1.0)\n\n# Compute the source estimate, indicate that we want a vector solution\nstc = apply_inverse(left_auditory, inv, pick_ori='vector')\n\n# Visualize it at the moment of peak activity.\n_, time_max = stc.magnitude().get_peak(hemi='lh')\nbrain_mag = stc.plot(subjects_dir=subjects_dir, initial_time=time_max,\n time_unit='s', size=(600, 400), overlay_alpha=0)", "Limiting orientations, but not fixing them\nOften, the best results will be obtained by allowing the dipoles to have\nsomewhat free orientation, but not stray too far from a orientation that is\nperpendicular to the cortex. The loose parameter of the\n:func:mne.minimum_norm.make_inverse_operator allows you to specify a value\nbetween 0 (fixed) and 1 (unrestricted or \"free\") to indicate the amount the\norientation is allowed to deviate from the surface normal.", "# Set loose to 0.2, the default value\ninv = make_inverse_operator(left_auditory.info, fwd, noise_cov, fixed=False,\n loose=0.2)\nstc = apply_inverse(left_auditory, inv, pick_ori='vector')\n\n# Visualize it at the moment of peak activity.\n_, time_max = stc.magnitude().get_peak(hemi='lh')\nbrain_loose = stc.plot(subjects_dir=subjects_dir, initial_time=time_max,\n time_unit='s', size=(600, 400), overlay_alpha=0)", "Discarding dipole orientation information\nOften, further analysis of the data does not need information about the\norientation of the dipoles, but rather their magnitudes. The pick_ori\nparameter of the :func:mne.minimum_norm.apply_inverse function allows you\nto specify whether to return the full vector solution ('vector') or\nrather the magnitude of the vectors (None, the default) or only the\nactivity in the direction perpendicular to the cortex ('normal').", "# Only retain vector magnitudes\nstc = apply_inverse(left_auditory, inv, pick_ori=None)\n\n# Visualize it at the moment of peak activity.\n_, time_max = stc.get_peak(hemi='lh')\nbrain = stc.plot(surface='white', subjects_dir=subjects_dir,\n initial_time=time_max, time_unit='s', size=(600, 400))", "References\n.. [1] Hämäläinen, M. S., Hari, R., Ilmoniemi, R. J., Knuutila, J., &\n Lounasmaa, O. V. \"Magnetoencephalography - theory, instrumentation, and\n applications to noninvasive studies of the working human brain\", Reviews\n of Modern Physics, 1993. https://doi.org/10.1103/RevModPhys.65.413\n.. [2] Lin, F. H., Belliveau, J. W., Dale, A. M., & Hämäläinen, M. S. (2006).\n Distributed current estimates using cortical orientation constraints.\n Human Brain Mapping, 27(1), 1–13. http://doi.org/10.1002/hbm.20155" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
CartoDB/cartoframes
docs/guides/06-Data-Services.ipynb
bsd-3-clause
[ "Data Services\nYou can connect to CARTO Data Services API directly from CARTOframes. This API consists of a set of location-based functions that can be applied to your data to perform geospatial analyses without leaving the context of your notebook. For instance, you can geocode a pandas DataFrame with addresses on the fly, and then perform a trade area analysis by computing isodistances or isochrones programmatically.\nUsing Data Services requires to be authenticated. For more information about how to authenticate, please read the Authentication guide. For further learning you can also check out the Data Services examples.", "from cartoframes.auth import set_default_credentials\n\nset_default_credentials('creds.json')", "Depending on your CARTO account plan, some of these data services are subject to different quota limitations.\n\nGeocoding\nTo get started, let's read in and explore the Starbucks location data we have. With the Starbucks store data in a DataFrame, we can see that there are two columns that can be used in the geocoding service: name and address. There's also a third column that reflects the annual revenue of the store.", "import pandas as pd\n\ndf = pd.read_csv('http://libs.cartocdn.com/cartoframes/samples/starbucks_brooklyn.csv')\ndf.head()", "Quota consumption\nEach time you run Data Services, quota is consumed. For this reason, we provide the ability to check in advance the amount of credits an operation will consume by using the dry_run parameter when running the service function.\nIt is also possible to check your available quota by running the available_quota function.", "from cartoframes.data.services import Geocoding\n\ngeo_service = Geocoding()\n\ncity_ny = {'value': 'New York'}\ncountry_usa = {'value': 'USA'}\n\n_, geo_dry_metadata = geo_service.geocode(df, street='address', city=city_ny, country=country_usa, dry_run=True)\n\ngeo_dry_metadata\n\ngeo_service.available_quota()\n\ngeo_gdf, geo_metadata = geo_service.geocode(df, street='address', city=city_ny, country=country_usa)", "Let's compare geo_dry_metadata and geo_metadata to see the differences between the information returned with and without the dry_run option. As we can see, this information reflects that all the locations have been geocoded successfully and that it has consumed 10 credits of quota.", "geo_metadata\n\ngeo_service.available_quota()", "If the input data file ever changes, cached results will only be applied to unmodified\nrecords, and new geocoding will be performed only on new or changed records. 
In order to use cached results, we have to save the results to a CARTO table using the table_name and cached=True parameters.\nThe resulting data is a GeoDataFrame that contains three new columns:\n\ngeometry: The resulting geometry\ngc_status_rel: The percentage of accuracy of each location\ncarto_geocode_hash: Geocode information", "geo_gdf.head()", "In addition, to prevent geocoding records that have been previously geocoded, and thus spend quota unnecessarily, you should always preserve the the_geom and carto_geocode_hash columns generated by the geocoding process.\nThis will happen automatically in these cases:\n\nYour input is a table from CARTO processed in place (without a table_name parameter)\nIf you save your results to a CARTO table using the table_name parameter, and only use the resulting table for any further geocoding.\n\nIf you try to geocode this DataFrame now that it contains both the_geom and the carto_geocode_hash, you will see that the required quota is 0 because it has already been geocoded.", "_, geo_metadata = geo_service.geocode(geo_gdf, street='address', city=city_ny, country=country_usa, dry_run=True)\n\ngeo_metadata.get('required_quota')", "Precision\nThe address column is more complete than the name column, and therefore, the resulting coordinates calculated by the service will be more accurate. If we check this, the accuracy values using the name column are lower than the ones we get by using the address column for geocoding.", "geo_name_gdf, geo_name_metadata = geo_service.geocode(df, street='name', city=city_ny, country=country_usa)\n\ngeo_name_gdf.gc_status_rel.unique()\n\ngeo_gdf.gc_status_rel.unique()", "Visualize the results\nFinally, we can visualize the precision of the geocoded results using a CARTOframes visualization layer.", "from cartoframes.viz import Layer, color_bins_style, popup_element\n\nLayer(\n geo_gdf,\n color_bins_style('gc_status_rel', method='equal', bins=geo_gdf.gc_status_rel.unique().size),\n popup_hover=[popup_element('address', 'Address'), popup_element('gc_status_rel', 'Precision')],\n title='Geocoding Precision'\n)", "Isolines\nThere are two Isoline functions: isochrones and isodistances. In this guide we will use the isochrones function to calculate walking areas by time for each Starbucks store and the isodistances function to calculate the walking area by distance.\nBy definition, isolines are concentric polygons that display equally calculated levels over a given surface area, and they are calculated as the intersection areas from the origin point, measured by:\n\nTime in the case of isochrones\nDistance in the case of isodistances\n\nIsochrones\nFor isochrones, let's calculate the time ranges of 5, 15 and 30 minutes. 
These ranges are input in seconds, so they will be 300, 900, and 1800 respectively.", "from cartoframes.data.services import Isolines\n\niso_service = Isolines()\n\n_, isochrones_dry_metadata = iso_service.isochrones(geo_gdf, [300, 900, 1800], mode='walk', dry_run=True)", "Remember to always check the quota using dry_run parameter and available_quota method before running the service!", "print('available {0}, required {1}'.format(\n iso_service.available_quota(),\n isochrones_dry_metadata.get('required_quota'))\n)\n\nisochrones_gdf, isochrones_metadata = iso_service.isochrones(geo_gdf, [300, 900, 1800], mode='walk')\n\nisochrones_gdf.head()\n\nfrom cartoframes.viz import Layer, basic_style, basic_legend\n\nLayer(isochrones_gdf, basic_style(opacity=0.5), basic_legend('Isochrones'))", "Isodistances\nFor isodistances, let's calculate the distance ranges of 100, 500 and 1000 meters. These ranges are input in meters, so they will be 100, 500, and 1000 respectively.", "_, isodistances_dry_metadata = iso_service.isodistances(geo_gdf, [100, 500, 1000], mode='walk', dry_run=True)\n\nprint('available {0}, required {1}'.format(\n iso_service.available_quota(),\n isodistances_dry_metadata.get('required_quota'))\n)\n\nisodistances_gdf, isodistances_metadata = iso_service.isodistances(geo_gdf, [100, 500, 1000], mode='walk')\n\nisodistances_gdf.head()\n\nfrom cartoframes.viz import Layer, basic_style, basic_legend\n\nLayer(isodistances_gdf, basic_style(opacity=0.5), basic_legend('Isodistances'))" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
mari-linhares/tensorflow-workshop
code_samples/RNN/colorbot/colorbot_solutions.ipynb
apache-2.0
[ "Colorbot Solutions\nHere are the solutions to the exercises available at the colorbot notebook.\nIn order to compare the models we encourage you to use Tensorboard and also use play_colorbot.py --model_dir=path_to_your_model to play with the models and check how it does with general words other than color words.\nEXERCISE EXPERIMENT\nWhen using experiments you should make sure you repeat the datasets the number of epochs desired since the experiment will \"run the for loop for you\". Also, you can add a parameter to run a number of steps instead, it will run until the dataset ends or the number of steps.\nYou can add this cell to your colorbot notebook and run it.", "# small important detail, to train properly with the experiment you need to\n# repeat the dataset the number of epochs desired\ntrain_input_fn = get_input_fn(TRAIN_INPUT, BATCH_SIZE, num_epochs=40)\n\n# create experiment\ndef generate_experiment_fn(run_config, hparams):\n estimator = tf.estimator.Estimator(model_fn=model_fn, config=run_config)\n return tf.contrib.learn.Experiment(\n estimator,\n train_input_fn=train_input_fn,\n eval_input_fn=test_input_fn\n )\n\nlearn_runner.run(generate_experiment_fn, run_config=tf.contrib.learn.RunConfig(model_dir='model_dir'))", "EXERCISE DATASET\n\nRun the colorbot experiment and notice the choosen model_dir\nBelow is the input function definition,we don't need some of the auxiliar functions anymore\nAdd this cell and then add the solution to the EXERCISE EXPERIMENT\nchoose a different model_dir and run the cells\nCopy the model_dir of the two models to the same path\ntensorboard --logdir=path", "def get_input_fn(csv_file, batch_size, num_epochs=1, shuffle=True):\n def _parse(line):\n # each line: name, red, green, blue\n # split line\n items = tf.string_split([line],',').values\n\n # get color (r, g, b)\n color = tf.string_to_number(items[1:], out_type=tf.float32) / 255.0\n\n # split color_name into a sequence of characters\n color_name = tf.string_split([items[0]], '')\n length = color_name.indices[-1, 1] + 1 # length = index of last char + 1\n color_name = color_name.values\n return color, color_name, length\n\n def input_fn():\n # https://github.com/tensorflow/tensorflow/tree/master/tensorflow/contrib/data\n dataset = (\n tf.contrib.data.TextLineDataset(csv_file) # reading from the HD\n .skip(1) # skip header\n .map(_parse) # parse text to variables\n .padded_batch(batch_size, padded_shapes=([None], [None], []),\n padding_values=(0.0, chr(0), tf.cast(0, tf.int64)))\n \n .repeat(num_epochs) # repeat dataset the number of epochs\n )\n \n # for our \"manual\" test we don't want to shuffle the data\n if shuffle:\n dataset = dataset.shuffle(buffer_size=100000)\n\n # create iterator\n color, color_name, length = dataset.make_one_shot_iterator().get_next()\n\n features = {\n COLOR_NAME_KEY: color_name,\n SEQUENCE_LENGTH_KEY: length,\n }\n\n return features, color\n return input_fn", "As a result you will see something like:\n\nWe called the original model \"sorted_batch\" and the model using the simplified input function as \"simple_batch\"\nNotice that both models have basically the same loss in the last step, but the \"sorted_batch\" model runs way faster , notice the global_step/sec metric, it measures how many steps the model executes per second. Since the \"sorted_batch\" has a larger global_step/sec it means it trains faster. \nIf you don't belive me you can change Tensorboard to compare the models in a \"relative\" way, this will compare the models over time. 
See result below.\n\nEXERCISE HYPERPARAMETERS\nThis one is more personal, what you see will depends on what you change in the model.\nBelow is a very simple example we just changed the model to use a GRUCell, just in case...", "def get_model_fn(rnn_cell_sizes,\n label_dimension,\n dnn_layer_sizes=[],\n optimizer='SGD',\n learning_rate=0.01):\n \n def model_fn(features, labels, mode):\n \n color_name = features[COLOR_NAME_KEY]\n sequence_length = tf.cast(features[SEQUENCE_LENGTH_KEY], dtype=tf.int32) # int64 -> int32\n \n # ----------- Preparing input --------------------\n # Creating a tf constant to hold the map char -> index\n # this is need to create the sparse tensor and after the one hot encode\n mapping = tf.constant(CHARACTERS, name=\"mapping\")\n table = tf.contrib.lookup.index_table_from_tensor(mapping, dtype=tf.string)\n int_color_name = table.lookup(color_name)\n \n # representing colornames with one hot representation\n color_name_onehot = tf.one_hot(int_color_name, depth=len(CHARACTERS) + 1)\n \n # ---------- RNN -------------------\n # Each RNN layer will consist of a GRU cell\n rnn_layers = [tf.nn.rnn_cell.GRUCell(size) for size in rnn_cell_sizes]\n \n # Construct the layers\n multi_rnn_cell = tf.nn.rnn_cell.MultiRNNCell(rnn_layers)\n \n # Runs the RNN model dynamically\n # more about it at: \n # https://www.tensorflow.org/api_docs/python/tf/nn/dynamic_rnn\n outputs, final_state = tf.nn.dynamic_rnn(cell=multi_rnn_cell,\n inputs=color_name_onehot,\n sequence_length=sequence_length,\n dtype=tf.float32)\n\n # Slice to keep only the last cell of the RNN\n last_activations = rnn_common.select_last_activations(outputs,\n sequence_length)\n\n # ------------ Dense layers -------------------\n # Construct dense layers on top of the last cell of the RNN\n for units in dnn_layer_sizes:\n last_activations = tf.layers.dense(\n last_activations, units, activation=tf.nn.relu)\n \n # Final dense layer for prediction\n predictions = tf.layers.dense(last_activations, label_dimension)\n\n # ----------- Loss and Optimizer ----------------\n loss = None\n train_op = None\n\n if mode != tf.estimator.ModeKeys.PREDICT: \n loss = tf.losses.mean_squared_error(labels, predictions)\n \n if mode == tf.estimator.ModeKeys.TRAIN: \n train_op = tf.contrib.layers.optimize_loss(\n loss,\n tf.contrib.framework.get_global_step(),\n optimizer=optimizer,\n learning_rate=learning_rate)\n \n return model_fn_lib.EstimatorSpec(mode,\n predictions=predictions,\n loss=loss,\n train_op=train_op)\n return model_fn", "Below is the tensorboard comparison of a model using a GRUCell called \"gru\" and a model using LSTMCell called \"simple_batch\"." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
IS-ENES-Data/submission_forms
test/forms/test/test_testsuite_1234.ipynb
apache-2.0
[ "Generic DKRZ national archive form\nThis form is intended to provide a generic template for interactive forms e.g. for testing \n... to be finalized ...", "from dkrz_forms import form_widgets\nform_widgets.show_status('form-submission')\n\nMY_LAST_NAME = \"....\" # e.gl MY_LAST_NAME = \"schulz\" \n#-------------------------------------------------\nfrom dkrz_forms import form_handler, form_widgets\nform_info = form_widgets.check_pwd(MY_LAST_NAME)\nsf = form_handler.init_form(form_info)\nform = sf.sub.entity_out.form_info", "Edit form information", "form.myattribute = \"myinformation\"", "Save your form\nyour form will be stored (the form name consists of your last name plut your keyword)", "form_handler.save_form(sf,\"..my comment..\") # edit my comment info ", "officially submit your form\nthe form will be submitted to the DKRZ team to process\nyou also receive a confirmation email with a reference to your online form for future modifications", "form_handler.email_form_info(sf)\nform_handler.form_submission(sf)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
tensorflow/docs-l10n
site/pt-br/tutorials/quickstart/advanced.ipynb
apache-2.0
[ "Copyright 2019 The TensorFlow Authors.", "#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.", "TensorFlow 2 início rápido para especialistas\n<table class=\"tfo-notebook-buttons\" align=\"left\">\n <td><a target=\"_blank\" href=\"https://www.tensorflow.org/tutorials/quickstart/advanced\"><img src=\"https://www.tensorflow.org/images/tf_logo_32px.png\">Ver em TensorFlow.org</a></td>\n <td><a target=\"_blank\" href=\"https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/pt-br/tutorials/quickstart/advanced.ipynb\"><img src=\"https://www.tensorflow.org/images/colab_logo_32px.png\">Executar no Google Colab</a></td>\n <td><a target=\"_blank\" href=\"https://github.com/tensorflow/docs-l10n/blob/master/site/pt-br/tutorials/quickstart/advanced.ipynb\"><img src=\"https://www.tensorflow.org/images/GitHub-Mark-32px.png\">Ver código fontes no GitHub</a></td>\n <td><a href=\"https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/pt-br/tutorials/quickstart/advanced.ipynb\"><img src=\"https://www.tensorflow.org/images/download_logo_32px.png\">Baixar notebook</a></td>\n</table>\n\nEste é um arquivo de notebook [Google Colaboratory] (https://colab.research.google.com/notebooks/welcome.ipynb). Os programas Python são executados diretamente no navegador - uma ótima maneira de aprender e usar o TensorFlow. Para seguir este tutorial, execute o bloco de anotações no Google Colab clicando no botão na parte superior desta página.\n\nNo Colab, conecte-se a uma instância do Python: No canto superior direito da barra de menus, selecione * CONNECT*.\nExecute todas as células de código do notebook: Selecione * Runtime * > * Run all *.\n\nFaça o download e instale o pacote TensorFlow 2:\nNote: Upgrade pip to install the TensorFlow 2 package. 
See the install guide for details.\nImporte o TensorFlow dentro de seu programa:", "from __future__ import absolute_import, division, print_function, unicode_literals\n\nimport tensorflow as tf\n\nfrom tensorflow.keras.layers import Dense, Flatten, Conv2D\nfrom tensorflow.keras import Model", "Carregue e prepare o [conjunto de dados MNIST] (http://yann.lecun.com/exdb/mnist/).", "mnist = tf.keras.datasets.mnist\n\n(x_train, y_train), (x_test, y_test) = mnist.load_data()\nx_train, x_test = x_train / 255.0, x_test / 255.0\n\n# Adicione uma dimensão de canais\nx_train = x_train[..., tf.newaxis]\nx_test = x_test[..., tf.newaxis]", "Use tf.data para agrupar e embaralhar o conjunto de dados:", "train_ds = tf.data.Dataset.from_tensor_slices(\n (x_train, y_train)).shuffle(10000).batch(32)\n\ntest_ds = tf.data.Dataset.from_tensor_slices((x_test, y_test)).batch(32)", "Crie o modelo tf.keras usando a Keras [API de subclasse de modelo] (https://www.tensorflow.org/guide/keras#model_subclassing):", "class MyModel(Model):\n def __init__(self):\n super(MyModel, self).__init__()\n self.conv1 = Conv2D(32, 3, activation='relu')\n self.flatten = Flatten()\n self.d1 = Dense(128, activation='relu')\n self.d2 = Dense(10, activation='softmax')\n\n def call(self, x):\n x = self.conv1(x)\n x = self.flatten(x)\n x = self.d1(x)\n return self.d2(x)\n\n# Crie uma instância do modelo\nmodel = MyModel()", "Escolha uma função otimizadora e de perda para treinamento:", "loss_object = tf.keras.losses.SparseCategoricalCrossentropy()\n\noptimizer = tf.keras.optimizers.Adam()", "Selecione métricas para medir a perda e a precisão do modelo. Essas métricas acumulam os valores ao longo das épocas e, em seguida, imprimem o resultado geral.", "train_loss = tf.keras.metrics.Mean(name='train_loss')\ntrain_accuracy = tf.keras.metrics.SparseCategoricalAccuracy(name='train_accuracy')\n\ntest_loss = tf.keras.metrics.Mean(name='test_loss')\ntest_accuracy = tf.keras.metrics.SparseCategoricalAccuracy(name='test_accuracy')", "Use tf.GradientTape para treinar o modelo:", "@tf.function\ndef train_step(images, labels):\n with tf.GradientTape() as tape:\n # training=True é necessário apenas se houver camadas com diferentes\n    # comportamentos durante o treinamento versus inferência (por exemplo, Dropout).\n predictions = model(images, training=True)\n loss = loss_object(labels, predictions)\n gradients = tape.gradient(loss, model.trainable_variables)\n optimizer.apply_gradients(zip(gradients, model.trainable_variables))\n\n train_loss(loss)\n train_accuracy(labels, predictions)", "Teste o modelo:", "@tf.function\ndef test_step(images, labels):\n # training=True é necessário apenas se houver camadas com diferentes\n  # comportamentos durante o treinamento versus inferência (por exemplo, Dropout).\n predictions = model(images, training=False)\n t_loss = loss_object(labels, predictions)\n\n test_loss(t_loss)\n test_accuracy(labels, predictions)\n\nEPOCHS = 5\n\nfor epoch in range(EPOCHS):\n # Reiniciar as métricas no início da próxima época\n train_loss.reset_states()\n train_accuracy.reset_states()\n test_loss.reset_states()\n test_accuracy.reset_states()\n\n for images, labels in train_ds:\n train_step(images, labels)\n\n for test_images, test_labels in test_ds:\n test_step(test_images, test_labels)\n\n template = 'Epoch {}, Loss: {}, Accuracy: {}, Test Loss: {}, Test Accuracy: {}'\n print(template.format(epoch+1,\n train_loss.result(),\n train_accuracy.result()*100,\n test_loss.result(),\n test_accuracy.result()*100))", "O classificador de 
imagem agora é treinado para ~98% de acurácia neste conjunto de dados. Para saber mais, leia os [tutoriais do TensorFlow] (https://www.tensorflow.org/tutorials)." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
chusine/dlnd
autoencoder/Simple_Autoencoder_Solution.ipynb
mit
[ "A Simple Autoencoder\nWe'll start off by building a simple autoencoder to compress the MNIST dataset. With autoencoders, we pass input data through an encoder that makes a compressed representation of the input. Then, this representation is passed through a decoder to reconstruct the input data. Generally the encoder and decoder will be built with neural networks, then trained on example data.\n\nIn this notebook, we'll be build a simple network architecture for the encoder and decoder. Let's get started by importing our libraries and getting the dataset.", "%matplotlib inline\n\nimport numpy as np\nimport tensorflow as tf\nimport matplotlib.pyplot as plt\n\nfrom tensorflow.examples.tutorials.mnist import input_data\nmnist = input_data.read_data_sets('MNIST_data', validation_size=0)", "Below I'm plotting an example image from the MNIST dataset. These are 28x28 grayscale images of handwritten digits.", "img = mnist.train.images[2]\nplt.imshow(img.reshape((28, 28)), cmap='Greys_r')", "We'll train an autoencoder with these images by flattening them into 784 length vectors. The images from this dataset are already normalized such that the values are between 0 and 1. Let's start by building basically the simplest autoencoder with a single ReLU hidden layer. This layer will be used as the compressed representation. Then, the encoder is the input layer and the hidden layer. The decoder is the hidden layer and the output layer. Since the images are normalized between 0 and 1, we need to use a sigmoid activation on the output layer to get values matching the input.\n\n\nExercise: Build the graph for the autoencoder in the cell below. The input images will be flattened into 784 length vectors. The targets are the same as the inputs. And there should be one hidden layer with a ReLU activation and an output layer with a sigmoid activation. The loss should be calculated with the cross-entropy loss, there is a convenient TensorFlow function for this tf.nn.sigmoid_cross_entropy_with_logits (documentation). You should note that tf.nn.sigmoid_cross_entropy_with_logits takes the logits, but to get the reconstructed images you'll need to pass the logits through the sigmoid function.", "# Size of the encoding layer (the hidden layer)\nencoding_dim = 32\n\nimage_size = mnist.train.images.shape[1]\n\ninputs_ = tf.placeholder(tf.float32, (None, image_size), name='inputs')\ntargets_ = tf.placeholder(tf.float32, (None, image_size), name='targets')\n\n# Output of hidden layer\nencoded = tf.layers.dense(inputs_, encoding_dim, activation=tf.nn.relu)\n\n# Output layer logits\nlogits = tf.layers.dense(encoded, image_size, activation=None)\n# Sigmoid output from\ndecoded = tf.nn.sigmoid(logits, name='output')\n\nloss = tf.nn.sigmoid_cross_entropy_with_logits(labels=targets_, logits=logits)\ncost = tf.reduce_mean(loss)\nopt = tf.train.AdamOptimizer(0.001).minimize(cost)", "Training", "# Create the session\nsess = tf.Session()", "Here I'll write a bit of code to train the network. I'm not too interested in validation here, so I'll just monitor the training loss and the test loss afterwards. \nCalling mnist.train.next_batch(batch_size) will return a tuple of (images, labels). We're not concerned with the labels here, we just need the images. Otherwise this is pretty straightfoward training with TensorFlow. We initialize the variables with sess.run(tf.global_variables_initializer()). 
Then, run the optimizer and get the loss with batch_cost, _ = sess.run([cost, opt], feed_dict=feed).", "epochs = 20\nbatch_size = 200\nsess.run(tf.global_variables_initializer())\nfor e in range(epochs):\n for ii in range(mnist.train.num_examples//batch_size):\n batch = mnist.train.next_batch(batch_size)\n feed = {inputs_: batch[0], targets_: batch[0]}\n batch_cost, _ = sess.run([cost, opt], feed_dict=feed)\n\n print(\"Epoch: {}/{}...\".format(e+1, epochs),\n \"Training loss: {:.4f}\".format(batch_cost))", "Checking out the results\nBelow I've plotted some of the test images along with their reconstructions. For the most part these look pretty good except for some blurriness in some parts.", "fig, axes = plt.subplots(nrows=2, ncols=10, sharex=True, sharey=True, figsize=(20,4))\nin_imgs = mnist.test.images[:10]\nreconstructed, compressed = sess.run([decoded, encoded], feed_dict={inputs_: in_imgs})\n\nfor images, row in zip([in_imgs, reconstructed], axes):\n for img, ax in zip(images, row):\n ax.imshow(img.reshape((28, 28)), cmap='Greys_r')\n ax.get_xaxis().set_visible(False)\n ax.get_yaxis().set_visible(False)\n\nfig.tight_layout(pad=0.1)\n\nsess.close()", "Up Next\nWe're dealing with images here, so we can (usually) get better performance using convolution layers. So, next we'll build a better autoencoder with convolutional layers.\nIn practice, autoencoders aren't actually better at compression compared to typical methods like JPEGs and MP3s. But, they are being used for noise reduction, which you'll also build." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
dsavransky/MAE4060
Notebooks/Spinning Symmetric Rigid Body.ipynb
mit
[ "Dynamics of a Spinning Symmetric Rigid Body", "from miscpy.utils.sympyhelpers import *\ninit_printing()\nth,psi,thd,psidd,thdd,psidd,Omega,I1,I2,t,M1,C = \\\nsymbols('theta,psi,thetadot,psidot,thetaddot,psiddot,Omega,I_1,I_2,t,M_1,C')\ndiffmap = {th:thd,psi:psid,thd:thdd,psid:psidd}", "Spinning Symmetric Rigid Body setup: The body's orientation in inertial frame $\\mathcal I$ is described by a 3-1-2 $(\\psi,\\theta,\\phi)$ rotation: the body is rotated by angle $\\psi$ about $\\mathbf e_3$ (creating intermediate frame $\\mathcal A$), by an angle $\\theta$ about $\\mathbf a_1$ (creating intermediate frame $\\mathcal B$), and finally spinning about $\\mathbf b_2 \\equiv \\mathbf c_2$ at a rate $\\Omega \\equiv \\dot\\phi$, creating body-fixed frame $\\mathcal C$. Note that while the body fixed frame is $\\mathcal C$, all computations here are performed in $\\mathcal B$ frame components.\n${}^\\mathcal{B}C^\\mathcal{A}$:", "bCa = rotMat(1,th);bCa", "$\\left[{}^\\mathcal{I}\\boldsymbol{\\omega}^\\mathcal{B}\\right]_\\mathcal{B}$:", "iWb_B = bCa*Matrix([0,0,psid])+ Matrix([thd,0,0]); iWb_B", "${}^\\mathcal{I}\\boldsymbol{\\omega}^\\mathcal{C} = {}^\\mathcal{I}\\boldsymbol{\\omega}^\\mathcal{B} + {}^\\mathcal{B}\\boldsymbol{\\omega}^\\mathcal{C}$. \n$\\left[{}^\\mathcal{I}\\boldsymbol{\\omega}^\\mathcal{C}\\right]_\\mathcal{B}$:", "iWc_B = iWb_B +Matrix([0,Omega,0]); iWc_B", "$\\left[ \\mathbb I_G \\right]_\\mathcal B$:", "IG_B = diag(I1,I2,I1);IG_B", "$\\left[{}^\\mathcal{I} \\mathbf h_G\\right]_\\mathcal{B}$:", "hG_B = IG_B*iWc_B; hG_B", "$\\vphantom{\\frac{\\mathrm{d}}{\\mathrm{d}t}}^\\mathcal{I}\\frac{\\mathrm{d}}{\\mathrm{d}t} {}^\\mathcal{I} \\mathbf h_G = \\vphantom{\\frac{\\mathrm{d}}{\\mathrm{d}t}}^\\mathcal{B}\\frac{\\mathrm{d}}{\\mathrm{d}t} {}^\\mathcal{I} \\mathbf h_G + {}^\\mathcal{I}\\boldsymbol{\\omega}^\\mathcal{B} \\times \\mathbf h_G$.\n$\\left[\\vphantom{\\frac{\\mathrm{d}}{\\mathrm{d}t}}^\\mathcal{I}\\frac{\\mathrm{d}}{\\mathrm{d}t} {}^\\mathcal{I} \\mathbf h_G\\right]_\\mathcal{B}$:", "dhG_B = difftotalmat(hG_B,t,diffmap) + skew(iWb_B)*hG_B; dhG_B", "Note that the $\\mathbf b_2$ component of ${}^\\mathcal{I}\\boldsymbol{\\omega}^\\mathcal{B} \\times \\mathbf h_G$ is zero:", "skew(iWb_B)*hG_B", "Define $C \\triangleq \\Omega + \\dot\\psi\\sin\\theta$ and substitute into $\\left[\\vphantom{\\frac{\\mathrm{d}}{\\mathrm{d}t}}^\\mathcal{I}\\frac{\\mathrm{d}}{\\mathrm{d}t} {}^\\mathcal{I} \\mathbf h_G\\right]_\\mathcal{B}$:", "dhG_B_simp = dhG_B.subs(Omega+psid*sin(th),C); dhG_B_simp", "Assume an external torque generating moment about $G$ of $\\mathbf M_G = -M_1\\mathbf b_1$:", "solve([dhG_B_simp[0] + M1,dhG_B_simp[2]],[thdd,psidd])" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
dataewan/deep-learning
gan_mnist/Intro_to_GANs_Solution.ipynb
mit
[ "Generative Adversarial Network\nIn this notebook, we'll be building a generative adversarial network (GAN) trained on the MNIST dataset. From this, we'll be able to generate new handwritten digits!\nGANs were first reported on in 2014 from Ian Goodfellow and others in Yoshua Bengio's lab. Since then, GANs have exploded in popularity. Here are a few examples to check out:\n\nPix2Pix \nCycleGAN\nA whole list\n\nThe idea behind GANs is that you have two networks, a generator $G$ and a discriminator $D$, competing against each other. The generator makes fake data to pass to the discriminator. The discriminator also sees real data and predicts if the data it's received is real or fake. The generator is trained to fool the discriminator, it wants to output data that looks as close as possible to real data. And the discriminator is trained to figure out which data is real and which is fake. What ends up happening is that the generator learns to make data that is indistiguishable from real data to the discriminator.\n\nThe general structure of a GAN is shown in the diagram above, using MNIST images as data. The latent sample is a random vector the generator uses to contruct it's fake images. As the generator learns through training, it figures out how to map these random vectors to recognizable images that can foold the discriminator.\nThe output of the discriminator is a sigmoid function, where 0 indicates a fake image and 1 indicates an real image. If you're interested only in generating new images, you can throw out the discriminator after training. Now, let's see how we build this thing in TensorFlow.", "%matplotlib inline\n\nimport pickle as pkl\nimport numpy as np\nimport tensorflow as tf\nimport matplotlib.pyplot as plt\n\nfrom tensorflow.examples.tutorials.mnist import input_data\nmnist = input_data.read_data_sets('MNIST_data')", "Model Inputs\nFirst we need to create the inputs for our graph. We need two inputs, one for the discriminator and one for the generator. Here we'll call the discriminator input inputs_real and the generator input inputs_z. We'll assign them the appropriate sizes for each of the networks.", "def model_inputs(real_dim, z_dim):\n inputs_real = tf.placeholder(tf.float32, (None, real_dim), name='input_real') \n inputs_z = tf.placeholder(tf.float32, (None, z_dim), name='input_z')\n \n return inputs_real, inputs_z", "Generator network\n\nHere we'll build the generator network. To make this network a universal function approximator, we'll need at least one hidden layer. We should use a leaky ReLU to allow gradients to flow backwards through the layer unimpeded. A leaky ReLU is like a normal ReLU, except that there is a small non-zero output for negative input values.\nVariable Scope\nHere we need to use tf.variable_scope for two reasons. Firstly, we're going to make sure all the variable names start with generator. Similarly, we'll prepend discriminator to the discriminator variables. This will help out later when we're training the separate networks.\nWe could just use tf.name_scope to set the names, but we also want to reuse these networks with different inputs. For the generator, we're going to train it, but also sample from it as we're training and after training. The discriminator will need to share variables between the fake and real input images. 
So, we can use the reuse keyword for tf.variable_scope to tell TensorFlow to reuse the variables instead of creating new ones if we build the graph again.\nTo use tf.variable_scope, you use a with statement:\npython\nwith tf.variable_scope('scope_name', reuse=False):\n # code here\nHere's more from the TensorFlow documentation to get another look at using tf.variable_scope.\nLeaky ReLU\nTensorFlow doesn't provide an operation for leaky ReLUs, so we'll need to make one . For this you can use take the outputs from a linear fully connected layer and pass them to tf.maximum. Typically, a parameter alpha sets the magnitude of the output for negative values. So, the output for negative input (x) values is alpha*x, and the output for positive x is x:\n$$\nf(x) = max(\\alpha * x, x)\n$$\nTanh Output\nThe generator has been found to perform the best with $tanh$ for the generator output. This means that we'll have to rescale the MNIST images to be between -1 and 1, instead of 0 and 1.", "def generator(z, out_dim, n_units=128, reuse=False, alpha=0.01):\n with tf.variable_scope('generator', reuse=reuse):\n # Hidden layer\n h1 = tf.layers.dense(z, n_units, activation=None)\n # Leaky ReLU\n h1 = tf.maximum(alpha * h1, h1)\n \n # Logits and tanh output\n logits = tf.layers.dense(h1, out_dim, activation=None)\n out = tf.tanh(logits)\n \n return out", "Discriminator\nThe discriminator network is almost exactly the same as the generator network, except that we're using a sigmoid output layer.", "def discriminator(x, n_units=128, reuse=False, alpha=0.01):\n with tf.variable_scope('discriminator', reuse=reuse):\n # Hidden layer\n h1 = tf.layers.dense(x, n_units, activation=None)\n # Leaky ReLU\n h1 = tf.maximum(alpha * h1, h1)\n \n logits = tf.layers.dense(h1, 1, activation=None)\n out = tf.sigmoid(logits)\n \n return out, logits", "Hyperparameters", "# Size of input image to discriminator\ninput_size = 784\n# Size of latent vector to generator\nz_size = 100\n# Sizes of hidden layers in generator and discriminator\ng_hidden_size = 128\nd_hidden_size = 128\n# Leak factor for leaky ReLU\nalpha = 0.01\n# Smoothing \nsmooth = 0.1", "Build network\nNow we're building the network from the functions defined above.\nFirst is to get our inputs, input_real, input_z from model_inputs using the sizes of the input and z.\nThen, we'll create the generator, generator(input_z, input_size). This builds the generator with the appropriate input and output sizes.\nThen the discriminators. We'll build two of them, one for real data and one for fake data. Since we want the weights to be the same for both real and fake data, we need to reuse the variables. For the fake data, we're getting it from the generator as g_model. So the real data discriminator is discriminator(input_real) while the fake discriminator is discriminator(g_model, reuse=True).", "tf.reset_default_graph()\n# Create our input placeholders\ninput_real, input_z = model_inputs(input_size, z_size)\n\n# Build the model\ng_model = generator(input_z, input_size, n_units=g_hidden_size, alpha=alpha)\n# g_model is the generator output\n\nd_model_real, d_logits_real = discriminator(input_real, n_units=d_hidden_size, alpha=alpha)\nd_model_fake, d_logits_fake = discriminator(g_model, reuse=True, n_units=d_hidden_size, alpha=alpha)", "Discriminator and Generator Losses\nNow we need to calculate the losses, which is a little tricky. For the discriminator, the total loss is the sum of the losses for real and fake images, d_loss = d_loss_real + d_loss_fake. 
The losses will be sigmoid cross-entropies, which we can get with tf.nn.sigmoid_cross_entropy_with_logits. We'll also wrap that in tf.reduce_mean to get the mean for all the images in the batch. So the losses will look something like \npython\ntf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=logits, labels=labels))\nFor the real image logits, we'll use d_logits_real which we got from the discriminator in the cell above. For the labels, we want them to be all ones, since these are all real images. To help the discriminator generalize better, the labels are reduced a bit from 1.0 to 0.9, for example, using the parameter smooth. This is known as label smoothing, typically used with classifiers to improve performance. In TensorFlow, it looks something like labels = tf.ones_like(tensor) * (1 - smooth)\nThe discriminator loss for the fake data is similar. The logits are d_logits_fake, which we got from passing the generator output to the discriminator. These fake logits are used with labels of all zeros. Remember that we want the discriminator to output 1 for real images and 0 for fake images, so we need to set up the losses to reflect that.\nFinally, the generator losses are using d_logits_fake, the fake image logits. But now the labels are all ones. The generator is trying to fool the discriminator, so it wants the discriminator to output ones for fake images.", "# Calculate losses\nd_loss_real = tf.reduce_mean(\n tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_real, \n labels=tf.ones_like(d_logits_real) * (1 - smooth)))\nd_loss_fake = tf.reduce_mean(\n tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_fake, \n labels=tf.zeros_like(d_logits_fake)))\nd_loss = d_loss_real + d_loss_fake\n\ng_loss = tf.reduce_mean(\n tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_fake,\n labels=tf.ones_like(d_logits_fake)))", "Optimizers\nWe want to update the generator and discriminator variables separately. So we need to get the variables for each part and build optimizers for the two parts. To get all the trainable variables, we use tf.trainable_variables(). This creates a list of all the variables we've defined in our graph.\nFor the generator optimizer, we only want the generator variables. Our past selves were nice and used a variable scope to start all of our generator variable names with generator. So, we just need to iterate through the list from tf.trainable_variables() and keep the variables that start with generator. Each variable object has an attribute name which holds the name of the variable as a string (var.name == 'weights_0' for instance). \nWe can do something similar with the discriminator. All the variables in the discriminator start with discriminator.\nThen, in the optimizer we pass the variable lists to var_list in the minimize method. This tells the optimizer to only update the listed variables.
Something like tf.train.AdamOptimizer().minimize(loss, var_list=var_list) will only train the variables in var_list.", "# Optimizers\nlearning_rate = 0.002\n\n# Get the trainable_variables, split into G and D parts\nt_vars = tf.trainable_variables()\ng_vars = [var for var in t_vars if var.name.startswith('generator')]\nd_vars = [var for var in t_vars if var.name.startswith('discriminator')]\n\nd_train_opt = tf.train.AdamOptimizer(learning_rate).minimize(d_loss, var_list=d_vars)\ng_train_opt = tf.train.AdamOptimizer(learning_rate).minimize(g_loss, var_list=g_vars)", "Training", "batch_size = 100\nepochs = 100\nsamples = []\nlosses = []\n# Only save generator variables\nsaver = tf.train.Saver(var_list=g_vars)\nwith tf.Session() as sess:\n sess.run(tf.global_variables_initializer())\n for e in range(epochs):\n for ii in range(mnist.train.num_examples//batch_size):\n batch = mnist.train.next_batch(batch_size)\n \n # Get images, reshape and rescale to pass to D\n batch_images = batch[0].reshape((batch_size, 784))\n batch_images = batch_images*2 - 1\n \n # Sample random noise for G\n batch_z = np.random.uniform(-1, 1, size=(batch_size, z_size))\n \n # Run optimizers\n _ = sess.run(d_train_opt, feed_dict={input_real: batch_images, input_z: batch_z})\n _ = sess.run(g_train_opt, feed_dict={input_z: batch_z})\n \n # At the end of each epoch, get the losses and print them out\n train_loss_d = sess.run(d_loss, {input_z: batch_z, input_real: batch_images})\n train_loss_g = g_loss.eval({input_z: batch_z})\n \n print(\"Epoch {}/{}...\".format(e+1, epochs),\n \"Discriminator Loss: {:.4f}...\".format(train_loss_d),\n \"Generator Loss: {:.4f}\".format(train_loss_g)) \n # Save losses to view after training\n losses.append((train_loss_d, train_loss_g))\n \n # Sample from generator as we're training for viewing afterwards\n sample_z = np.random.uniform(-1, 1, size=(16, z_size))\n gen_samples = sess.run(\n generator(input_z, input_size, reuse=True),\n feed_dict={input_z: sample_z})\n samples.append(gen_samples)\n saver.save(sess, './checkpoints/generator.ckpt')\n\n# Save training generator samples\nwith open('train_samples.pkl', 'wb') as f:\n pkl.dump(samples, f)", "Training loss\nHere we'll check out the training losses for the generator and discriminator.", "fig, ax = plt.subplots()\nlosses = np.array(losses)\nplt.plot(losses.T[0], label='Discriminator')\nplt.plot(losses.T[1], label='Generator')\nplt.title(\"Training Losses\")\nplt.legend()", "Generator samples from training\nHere we can view samples of images from the generator. First we'll look at images taken while training.", "def view_samples(epoch, samples):\n fig, axes = plt.subplots(figsize=(7,7), nrows=4, ncols=4, sharey=True, sharex=True)\n for ax, img in zip(axes.flatten(), samples[epoch]):\n ax.xaxis.set_visible(False)\n ax.yaxis.set_visible(False)\n im = ax.imshow(img.reshape((28,28)), cmap='Greys_r')\n \n return fig, axes\n\n# Load samples from generator taken while training\nwith open('train_samples.pkl', 'rb') as f:\n samples = pkl.load(f)", "These are samples from the final training epoch. You can see the generator is able to reproduce numbers like 1, 7, 3, 2. Since this is just a sample, it isn't representative of the full range of images this generator can make.", "_ = view_samples(-1, samples)", "Below I'm showing the generated images as the network was training, every 10 epochs. 
With bonus optical illusion!", "rows, cols = 10, 6\nfig, axes = plt.subplots(figsize=(7,12), nrows=rows, ncols=cols, sharex=True, sharey=True)\n\nfor sample, ax_row in zip(samples[::int(len(samples)/rows)], axes):\n for img, ax in zip(sample[::int(len(sample)/cols)], ax_row):\n ax.imshow(img.reshape((28,28)), cmap='Greys_r')\n ax.xaxis.set_visible(False)\n ax.yaxis.set_visible(False)", "It starts out as all noise. Then it learns to make only the center white and the rest black. You can start to see some number like structures appear out of the noise like 1s and 9s.\nSampling from the generator\nWe can also get completely new images from the generator by using the checkpoint we saved after training. We just need to pass in a new latent vector $z$ and we'll get new samples!", "saver = tf.train.Saver(var_list=g_vars)\nwith tf.Session() as sess:\n saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))\n sample_z = np.random.uniform(-1, 1, size=(16, z_size))\n gen_samples = sess.run(\n generator(input_z, input_size, reuse=True),\n feed_dict={input_z: sample_z})\n_ = view_samples(0, [gen_samples])" ]
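Two small details called out in the markdown above — the leaky ReLU activation and the one-sided label smoothing on real images — can be checked in isolation. The snippet below is only a NumPy illustration of those two formulas, not part of the TensorFlow graph.

```python
# Minimal NumPy check of f(x) = max(alpha*x, x) and the smoothed real labels
# labels = 1 * (1 - smooth), as described in the markdown above.
import numpy as np

def leaky_relu(x, alpha=0.01):
    # small non-zero slope for negative inputs, identity for positive inputs
    return np.maximum(alpha * x, x)

def smoothed_real_labels(batch_size, smooth=0.1):
    # real-image targets become 0.9 instead of 1.0 when smooth = 0.1
    return np.ones(batch_size) * (1 - smooth)

x = np.array([-2.0, -0.5, 0.0, 1.5])
print(leaky_relu(x))            # [-0.02  -0.005  0.     1.5  ]
print(smoothed_real_labels(4))  # [0.9 0.9 0.9 0.9]
```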
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
kit-cel/wt
mloc/ch4_Autoencoders/BinaryAutoencoder_AWGN.ipynb
gpl-2.0
[ "Learn Modulation and Bit-Wise Demodulation in the AWGN Channel with Deep Neural Networks by Autoencoders and End-to-end Training\nThis code is provided as supplementary material of the lecture Machine Learning and Optimization in Communications (MLOC).<br>\nThis code illustrates\n* End-to-end-learning of modulation scheme and demodulator in an AWGN channel with binary autoencoder\nImports", "import torch\nimport torch.nn as nn\nimport torch.optim as optim\nimport numpy as np\nimport matplotlib\nimport matplotlib.pyplot as plt\nfrom ipywidgets import interactive\nimport ipywidgets as widgets\n\ndevice = 'cuda' if torch.cuda.is_available() else 'cpu'\nprint(\"We are using the following device for learning:\",device)", "Helper function to compute the Bit Error Rate (BER)", "# helper function to compute the bit error rate\ndef BER(predictions, labels):\n return np.mean(1-np.isclose((predictions > 0.5).astype(float), labels))", "Define Parameters\nHere, we consider the simple AWGN channel. We modulate using a constellation with $M = 2^m$ different symbol. To symbol $i$, we assign the binary representation of $i$ as bit pattern.", "# number of bits assigned to symbol\nm = 5\n\n# number of symbols\nM = 2**m\n\n\nEbN0 = 10\n\n# noise standard deviation\nsigma_n = np.sqrt((1/2/np.log2(M)) * 10**(-EbN0/10))", "Here, we define the parameters of the neural network and training, generate the validation set and a helping set to show the decision regions", "# Bit representation of symbols\nbinaries = torch.from_numpy(np.reshape(np.unpackbits(np.uint8(np.arange(0,2**m))), (-1,8))).float().to(device)\nbinaries = binaries[:,(8-m):]\n\n# validation set. Training examples are generated on the fly\nN_valid = 100000\n\n# number of neurons in hidden layers at receiver\nhidden_neurons_RX_1 = 50\nhidden_neurons_RX_2 = 128\nhidden_neurons_RX = [hidden_neurons_RX_1, hidden_neurons_RX_2]\n\n# Generate Validation Data\ny_valid = np.random.randint(M,size=N_valid)\ny_valid_onehot = np.eye(M)[y_valid]\ny_valid_binary = binaries[y_valid,:].detach().cpu().numpy()", "Define the architecture of the autoencoder, i.e. the neural network\nThis is the main neural network/Autoencoder with transmitter, channel and receiver. Transmitter and receiver each with ELU activation function. 
Note that the final layer does not use a softmax function, as this function is already included in the CrossEntropyLoss.", "class Autoencoder(nn.Module):\n def __init__(self, hidden_neurons_RX):\n super(Autoencoder, self).__init__()\n \n # Define Transmitter Layer: Linear function, M input neurons (symbols), 2 output neurons (real and imaginary part) \n self.fcT = nn.Linear(M, 2) \n \n # Define Receiver Layer: Linear function, 2 input neurons (real and imaginary part), m output neurons (bits)\n self.fcR1 = nn.Linear(2,hidden_neurons_RX[0]) \n self.fcR2 = nn.Linear(hidden_neurons_RX[0], hidden_neurons_RX[1]) \n self.fcR3 = nn.Linear(hidden_neurons_RX[1], m) \n\n # Non-linearity (used in transmitter and receiver)\n self.activation_function = nn.ELU() \n self.sigmoid = nn.Sigmoid()\n\n def forward(self, x):\n # compute output\n encoded = self.network_transmitter(x)\n \n # compute normalization factor and normalize channel output\n norm_factor = torch.sqrt(torch.mean(torch.mul(encoded,encoded)) * 2 ) \n modulated = encoded / norm_factor \n received = self.channel_model(modulated)\n \n bitprob = self.network_receiver(received)\n return bitprob\n \n def network_transmitter(self,batch_labels):\n return self.fcT(batch_labels)\n \n def network_receiver(self,inp):\n out = self.activation_function(self.fcR1(inp))\n out = self.activation_function(self.fcR2(out))\n logits = self.sigmoid(self.fcR3(out)) \n return logits\n \n def channel_model(self,modulated):\n # just add noise, nothing else\n received = torch.add(modulated, sigma_n*torch.randn(len(modulated),2).to(device))\n return received", "Train the NN and evaluate it at the end of each epoch\nHere the idea is to vary the batch size during training. In the first iterations, we start with a small batch size to rapidly get to a working solution. The closer we come towards the end of the training we increase the batch size. If keeping the batch size small, it may happen that there are no misclassifications in a small batch and there is no incentive of the training to improve. A larger batch size will most likely contain errors in the batch and hence there will be incentive to keep on training and improving. \nHere, the data is generated on the fly inside the graph, by using PyTorch random number generation. As PyTorch does not natively support complex numbers (at least in early versions), we decided to replace the complex number operations in the channel by a simple rotation matrix and treating real and imaginary parts separately.\nWe use the ELU activation function inside the neural network and employ the Adam optimization algorithm.\nNow, carry out the training as such. First initialize the variables and then loop through the training. Here, the epochs are not defined in the classical way, as we do not have a training set per se. We generate new data on the fly and never reuse data. 
We change the batch size in each epoch.<br>\nTo get the constellation symbols and the received data, we apply the model after each epoch.", "model = Autoencoder(hidden_neurons_RX)\nmodel.to(device)\n\n \nloss_fn = nn.BCELoss()\n\n# Adam Optimizer\noptimizer = optim.Adam(model.parameters()) \n\n\n# Training parameters\nnum_epochs = 150\nbatches_per_epoch = np.linspace(1, 1000, num=num_epochs).astype(int)\n\n# Vary batch size during training\nbatch_size_per_epoch = np.linspace(200,5000,num=num_epochs)\nlearning_rate_per_epoch = np.linspace(0.001, 0.00001, num=num_epochs)\n\nvalidation_BERs = np.zeros(num_epochs)\nvalidation_received = []\nconstellations = []\n\nprint('Start Training')\nfor epoch in range(num_epochs):\n \n batch_labels = torch.empty(int(batch_size_per_epoch[epoch]), device=device)\n batch_labels_binary = torch.zeros(int(batch_size_per_epoch[epoch]), m, device=device)\n \n \n for step in range(batches_per_epoch[epoch]):\n # Generate training data: In most cases, you have a dataset and do not generate a training dataset during training loop\n # sample new mini-batch directory on the GPU (if available) \n batch_labels.random_(M)\n batch_labels_onehot = torch.zeros(int(batch_size_per_epoch[epoch]), M, device=device)\n batch_labels_onehot[range(batch_labels_onehot.shape[0]), batch_labels.long()]=1\n\n batch_labels_binary[range(batch_labels_onehot.shape[0]), :] = binaries[batch_labels.long(),:]\n\n \n # Propagate (training) data through the net\n NN_output = model(batch_labels_onehot)\n\n # compute loss\n loss = loss_fn(NN_output, batch_labels_binary)\n\n # compute gradients\n loss.backward()\n \n # Adapt weights\n optimizer.step()\n \n # reset gradients\n optimizer.zero_grad()\n \n optimizer.param_groups[0]['lr'] = learning_rate_per_epoch[epoch]\n \n # compute validation BER\n out_valid = model(torch.Tensor(y_valid_onehot).to(device))\n validation_BERs[epoch] = BER(out_valid.detach().cpu().numpy(), y_valid_binary)\n print('Validation BER after epoch %d: %f (loss %1.8f)' % (epoch, validation_BERs[epoch], loss.detach().cpu().numpy())) \n \n # calculate and store constellation\n encoded = model.network_transmitter(torch.eye(M).to(device))\n norm_factor = torch.sqrt(torch.mean(torch.mul(encoded,encoded)) * 2 ) \n modulated = encoded / norm_factor \n constellations.append(modulated.detach().cpu().numpy())\n \n \nprint('Training finished')", "Evaluate results\nPlt decision region and scatter plot of the validation set. 
Note that the validation set is only used for computing SERs and plotting, there is no feedback towards the training!", "cmap = matplotlib.cm.tab20\nbase = plt.cm.get_cmap(cmap)\ncolor_list = base.colors\nnew_color_list = [[t/2 + 0.5 for t in color_list[k]] for k in range(len(color_list))]\n\n# find minimum SER from validation set\nmin_BER_iter = np.argmin(validation_BERs)\n\nplt.figure(figsize=(10,8))\nfont = {'size' : 14}\nplt.rc('font', **font)\nplt.rc('text', usetex=True)\n\nbin_labels = [np.binary_repr(j).zfill(m) for j in range(2**m)]\n\n \nplt.scatter(constellations[min_BER_iter][:,0], constellations[min_BER_iter][:,1], c=range(M), cmap='tab20',s=50)\nfor i, txt in enumerate(bin_labels):\n plt.annotate(txt, xy=(constellations[min_BER_iter][i,0], constellations[min_BER_iter][i,1]), xycoords='data', \\\n xytext=(0, 3), textcoords='offset points', \\\n ha='center', va='bottom')\n \nplt.axis('scaled')\nplt.xlabel(r'$\\Re\\{r\\}$',fontsize=16)\nplt.ylabel(r'$\\Im\\{r\\}$',fontsize=16)\nplt.xlim((-1.7, +1.7))\nplt.ylim((-1.7, +1.7))\nplt.grid(which='both')\nplt.title('Constellation with Bit Mapping',fontsize=18)\nplt.savefig('learning_AWGN_BitAE_EbN0%1.1f_M%d.pdf' % (EbN0,M),bbox_inches='tight')", "Generate animation and save as a gif. (Evaluate results III)", "%matplotlib notebook\n%matplotlib notebook\n# Generate animation\nfrom matplotlib import animation, rc\nfrom matplotlib.animation import PillowWriter # Disable if you don't want to save any GIFs.\n\nfont = {'size' : 18}\nplt.rc('font', **font)\n\nfig = plt.figure(figsize=(8,6))\nax1 = fig.add_subplot(1,1,1)\n\nax1.axis('scaled')\n\nwritten = False\ndef animate(i):\n ax1.clear()\n ax1.scatter(constellations[i][:,0], constellations[i][:,1], c=range(M), cmap='tab20',s=50)\n\n for j, txt in enumerate(bin_labels):\n ax1.annotate(txt, xy=(constellations[i][j,0], constellations[i][j,1]), xycoords='data', \\\n xytext=(0, 3), textcoords='offset points', \\\n ha='center', va='bottom', fontsize=12)\n \n \n ax1.set_xlim(( -1.7, +1.7))\n ax1.set_ylim(( -1.7, +1.7))\n ax1.set_title('Constellation', fontsize=18)\n \n ax1.set_xlabel(r'$\\Re\\{r\\}$',fontsize=16)\n ax1.set_ylabel(r'$\\Im\\{r\\}$',fontsize=16)\n\n \nanim = animation.FuncAnimation(fig, animate, frames=min_BER_iter+1, interval=200, blit=False)\nfig.show()\nanim.save('learning_AWGN_BitAE_EbN0%1.1f_M%d.gif' % (EbN0,M), writer=PillowWriter(fps=5))" ]
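As a quick sanity check on the channel model used above, the noise standard deviation follows directly from the chosen Eb/N0 and the number of bits per symbol. The numbers below simply re-evaluate the notebook's own formula for the m = 5, Eb/N0 = 10 dB setting.

```python
# sigma_n = sqrt( 1/(2*log2(M)) * 10^(-EbN0/10) ), evaluated for the notebook's settings.
import numpy as np

m = 5                       # bits per symbol
M = 2**m                    # constellation size (32)
EbN0_dB = 10
sigma_n = np.sqrt((1 / (2 * np.log2(M))) * 10**(-EbN0_dB / 10))
print(M, sigma_n)           # 32 0.1  (exact, since log2(32) = 5 and 10^(-1) = 0.1)
```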
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
GoogleCloudPlatform/training-data-analyst
courses/machine_learning/deepdive2/structured/labs/6_serving_babyweight.ipynb
apache-2.0
[ "LAB 6: Serving baby weight predictions\nLearning Objectives\n\nDeploy a web application that consumes your model service on Cloud AI Platform.\n\nIntroduction\nVerify that you have previously Trained your Keras model and Deployed it predicting with Keras model on Cloud AI Platform. If not, go back to 5a_train_keras_ai_platform_babyweight.ipynb and 5b_deploy_keras_ai_platform_babyweight.ipynb create them.\nIn the previous notebook, we deployed our model to CAIP. In this notebook, we'll make a Flask app to show how our models can interact with a web application which could be deployed to App Engine with the Flexible Environment.\nStep 1: Review Flask App code in application folder\nLet's start with what our users will see. In the application folder, we have prebuilt the components for web application. In the templates folder, the <a href=\"application/templates/index.html\">index.html</a> file is the visual GUI our users will make predictions with.\nIt works by using an HTML form to make a POST request to our server, passing along the values captured by the input tags.\nThe form will render a little strangely in the notebook since the notebook environment does not run javascript, nor do we have our web server up and running. Let's get to that!\nStep 2: Set environment variables", "%%bash\n# Check your project name\nexport PROJECT=$(gcloud config list project --format \"value(core.project)\")\necho \"Your current GCP Project Name is: \"$PROJECT\n\nimport os\nos.environ[\"BUCKET\"] = \"your-bucket-id-here\" # Recommended: use your project name", "Step 3: Complete application code in application/main.py\nWe can set up our server with python using Flask. Below, we've already built out most of the application for you.\nThe @app.route() decorator defines a function to handle web reqests. Let's say our website is www.example.com. With how our @app.route(\"/\") function is defined, our sever will render our <a href=\"application/templates/index.html\">index.html</a> file when users go to www.example.com/ (which is the default route for a website).\nSo, when a user pings our server with www.example.com/predict, they would use @app.route(\"/predict\", methods=[\"POST\"]) to make a prediction. The data that gets sent over the internet isn't a dictionary, but a string like below:\nname1=value1&amp;name2=value2 where name corresponds to the name on the input tag of our html form, and the value is what the user entered. Thankfully, Flask makes it easy to transform this string into a dictionary with request.form.to_dict(), but we still need to transform the data into a format our model expects. We've done this with the gender2str and the plurality2str utility functions.\nOk! Let's set up a webserver to take in the form inputs, process them into features, and send these features to our model on Cloud AI Platform to generate predictions to serve to back to users.\nFill in the TODO comments in <a href=\"application/main.py\">application/main.py</a>. Give it a go first and review the solutions folder if you get stuck.\nNote: AppEngine test configurations have already been set for you in the file <a href=\"application/app.yaml\">application/app.yaml</a>. Review app.yaml documentation for additional configuration options.\nStep 4: Deploy application\nSo how do we know that it works? We'll have to deploy our website and find out! 
Notebooks aren't made for website deployment, so we'll move our operation to the Google Cloud Shell.\nBy default, the shell doesn't have Flask installed, so copy over the following command to install it.\npython3 -m pip install --user Flask==0.12.1\nNext, we'll need to copy our web app to the Cloud Shell. We can use Google Cloud Storage as an inbetween.", "%%bash\ngsutil -m rm -r gs://$BUCKET/baby_app\ngsutil -m cp -r application/ gs://$BUCKET/baby_app", "Run the below cell, and copy the output into the Google Cloud Shell", "%%bash\necho rm -r baby_app/\necho mkdir baby_app/\necho gsutil cp -r gs://$BUCKET/baby_app ./\necho python3 baby_app/main.py", "Step 5: Use your website to generate predictions\nTime to play with the website! The cloud shell should now say something like * Running on http://127.0.0.1:8080/ (Press CTRL+C to quit). Click on the http link to go to your shiny new website. Fill out the form and give it a minute or two to process its first prediction. After the first one, the rest of the predictions will be lightning fast.\nDid you get a prediction? If not, the Google Cloud Shell will spit out a stack trace of the error to help narrow it down. If yes, congratulations! Great job on bringing all of your work together for the users.\nLab Summary\nIn this lab, you deployed a simple Flask web form application on App Engine that takes inputs, transforms them into features, and sends them to a model service on Cloud AI Platform to generate and return predictions.\nCopyright 2019 Google Inc. Licensed under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License" ]
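The lab's actual application/main.py is left for you to complete, but a hypothetical skeleton of the two routes described above might look like the following. The gender2str/plurality2str transformation and the call to the deployed Cloud AI Platform model are placeholders, not the lab's real code.

```python
# Hypothetical Flask skeleton only -- route names follow the description above,
# the prediction call is a placeholder.
from flask import Flask, render_template, request, jsonify

app = Flask(__name__)

@app.route("/")
def index():
    # default route: serve the HTML form in templates/index.html
    return render_template("index.html")

@app.route("/predict", methods=["POST"])
def predict():
    # the POST body arrives as name1=value1&name2=value2; to_dict() turns it into a dict
    features = request.form.to_dict()
    # ...transform form values into model features (e.g. gender2str, plurality2str),
    # then call the model deployed on Cloud AI Platform. Placeholder response:
    prediction = {"predicted_weight_lbs": None}
    return jsonify(prediction)

if __name__ == "__main__":
    app.run(host="127.0.0.1", port=8080)
```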
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
manipopopo/tensorflow
tensorflow/contrib/eager/python/examples/notebooks/custom_layers.ipynb
apache-2.0
[ "Copyright 2018 The TensorFlow Authors.", "#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.", "Custom layers\n<table class=\"tfo-notebook-buttons\" align=\"left\"><td>\n<a target=\"_blank\" href=\"https://colab.research.google.com/github/tensorflow/tensorflow/blob/master/tensorflow/contrib/eager/python/examples/notebooks/custom_layers.ipynb\">\n <img src=\"https://www.tensorflow.org/images/colab_logo_32px.png\" />Run in Google Colab</a>\n</td><td>\n<a target=\"_blank\" href=\"https://github.com/tensorflow/tensorflow/blob/master/tensorflow/contrib/eager/python/examples/notebooks/custom_layers.ipynb\"><img width=32px src=\"https://www.tensorflow.org/images/GitHub-Mark-32px.png\" />View source on GitHub</a></td></table>\n\nWe recommend using tf.keras as a high-level API for building neural networks. That said, most TensorFlow APIs are usable with eager execution.", "import tensorflow as tf\ntfe = tf.contrib.eager\n\ntf.enable_eager_execution()", "Layers: common sets of useful operations\nMost of the time when writing code for machine learning models you want to operate at a higher level of abstraction than individual operations and manipulation of individual variables.\nMany machine learning models are expressible as the composition and stacking of relatively simple layers, and TensorFlow provides both a set of many common layers as a well as easy ways for you to write your own application-specific layers either from scratch or as the composition of existing layers.\nTensorFlow includes the full Keras API in the tf.keras package, and the Keras layers are very useful when building your own models.", "# In the tf.keras.layers package, layers are objects. To construct a layer,\n# simply construct the object. Most layers take as a first argument the number\n# of output dimensions / channels.\nlayer = tf.keras.layers.Dense(100)\n# The number of input dimensions is often unnecessary, as it can be inferred\n# the first time the layer is used, but it can be provided if you want to \n# specify it manually, which is useful in some complex models.\nlayer = tf.keras.layers.Dense(10, input_shape=(None, 5))", "The full list of pre-existing layers can be seen in the documentation. It includes Dense (a fully-connected layer),\nConv2D, LSTM, BatchNormalization, Dropout, and many others.", "# To use a layer, simply call it.\nlayer(tf.zeros([10, 5]))\n\n# Layers have many useful methods. For example, you can inspect all variables\n# in a layer by calling layer.variables. 
In this case a fully-connected layer\n# will have variables for weights and biases.\nlayer.variables\n\n# The variables are also accessible through nice accessors\nlayer.kernel, layer.bias", "Implementing custom layers\nThe best way to implement your own layer is extending the tf.keras.Layer class and implementing:\n * __init__ , where you can do all input-independent initialization\n * build, where you know the shapes of the input tensors and can do the rest of the initialization\n * call, where you do the forward computation\nNote that you don't have to wait until build is called to create your variables, you can also create them in __init__. However, the advantage of creating them in build is that it enables late variable creation based on the shape of the inputs the layer will operate on. On the other hand, creating variables in __init__ would mean that shapes required to create the variables will need to be explicitly specified.", "class MyDenseLayer(tf.keras.layers.Layer):\n def __init__(self, num_outputs):\n super(MyDenseLayer, self).__init__()\n self.num_outputs = num_outputs\n \n def build(self, input_shape):\n self.kernel = self.add_variable(\"kernel\", \n shape=[input_shape[-1].value, \n self.num_outputs])\n \n def call(self, input):\n return tf.matmul(input, self.kernel)\n \nlayer = MyDenseLayer(10)\nprint(layer(tf.zeros([10, 5])))\nprint(layer.variables)", "Note that you don't have to wait until build is called to create your variables, you can also create them in __init__.\nOverall code is easier to read and maintain if it uses standard layers whenever possible, as other readers will be familiar with the behavior of standard layers. If you want to use a layer which is not present in tf.keras.layers or tf.contrib.layers, consider filing a github issue or, even better, sending us a pull request!\nModels: composing layers\nMany interesting layer-like things in machine learning models are implemented by composing existing layers. For example, each residual block in a resnet is a composition of convolutions, batch normalizations, and a shortcut.\nThe main class used when creating a layer-like thing which contains other layers is tf.keras.Model. Implementing one is done by inheriting from tf.keras.Model.", "class ResnetIdentityBlock(tf.keras.Model):\n def __init__(self, kernel_size, filters):\n super(ResnetIdentityBlock, self).__init__(name='')\n filters1, filters2, filters3 = filters\n\n self.conv2a = tf.keras.layers.Conv2D(filters1, (1, 1))\n self.bn2a = tf.keras.layers.BatchNormalization()\n\n self.conv2b = tf.keras.layers.Conv2D(filters2, kernel_size, padding='same')\n self.bn2b = tf.keras.layers.BatchNormalization()\n\n self.conv2c = tf.keras.layers.Conv2D(filters3, (1, 1))\n self.bn2c = tf.keras.layers.BatchNormalization()\n\n def call(self, input_tensor, training=False):\n x = self.conv2a(input_tensor)\n x = self.bn2a(x, training=training)\n x = tf.nn.relu(x)\n\n x = self.conv2b(x)\n x = self.bn2b(x, training=training)\n x = tf.nn.relu(x)\n\n x = self.conv2c(x)\n x = self.bn2c(x, training=training)\n\n x += input_tensor\n return tf.nn.relu(x)\n\n \nblock = ResnetIdentityBlock(1, [1, 2, 3])\nprint(block(tf.zeros([1, 2, 3, 3])))\nprint([x.name for x in block.variables])", "Much of the time, however, models which compose many layers simply call one layer after the other. 
This can be done in very little code using tf.keras.Sequential", " my_seq = tf.keras.Sequential([tf.keras.layers.Conv2D(1, (1, 1)),\n tf.keras.layers.BatchNormalization(),\n tf.keras.layers.Conv2D(2, 1, \n padding='same'),\n tf.keras.layers.BatchNormalization(),\n tf.keras.layers.Conv2D(3, (1, 1)),\n tf.keras.layers.BatchNormalization()])\nmy_seq(tf.zeros([1, 2, 3, 3]))", "Next steps\nNow you can go back to the previous notebook and adapt the linear regression example to use layers and models to be better structured." ]
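As a small extension of the MyDenseLayer example above, build() can create more than one variable; the sketch below adds a bias the same way. It assumes the same TF 1.x eager setup as the notebook and mirrors the notebook's own add_variable calls.

```python
# Sketch: a MyDenseLayer variant that also creates a bias in build().
import tensorflow as tf
tf.enable_eager_execution()   # same eager setup as the notebook (TF 1.x)

class MyDenseWithBias(tf.keras.layers.Layer):
    def __init__(self, num_outputs):
        super(MyDenseWithBias, self).__init__()
        self.num_outputs = num_outputs

    def build(self, input_shape):
        # shapes are only known here, so variables can be created late
        self.kernel = self.add_variable("kernel",
                                        shape=[input_shape[-1].value, self.num_outputs])
        self.bias = self.add_variable("bias", shape=[self.num_outputs])

    def call(self, input):
        return tf.matmul(input, self.kernel) + self.bias

layer = MyDenseWithBias(10)
print(layer(tf.zeros([4, 5])).shape)   # (4, 10)
```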
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
data-cube/agdc-v2-examples
notebooks/07_hovmoller_space_time_visualisation.ipynb
apache-2.0
[ "Visualising variation in space and time (Hovmoller plot)\nThis notebook describes how to generate a space-time (Hovmoller plot) visualisation of NDVI, the example shown here is for the Mitchell River in Queensland. The river channel migrates, and a Hovmoller plot generated from a transect that crosses the river shows the channel migration and associated vegetation changes.", "%pylab notebook\nfrom __future__ import print_function\nimport datacube\nimport xarray as xr\nfrom datacube.storage import masking\nfrom datacube.storage.masking import mask_to_dict\nfrom matplotlib import pyplot as plt\nfrom IPython.display import display\nimport ipywidgets as widgets\n\ndc = datacube.Datacube(app='linear extraction for Hovmoller plot')\n\n#### DEFINE SPATIOTEMPORAL RANGE AND BANDS OF INTEREST\n#Use this to manually define an upper left/lower right coords\n\n\n#Define temporal range\nstart_of_epoch = '1998-01-01'\nend_of_epoch = '2016-12-31'\n\n#Define wavelengths/bands of interest, remove this kwarg to retrieve all bands\nbands_of_interest = [#'blue',\n 'green',\n 'red', \n 'nir',\n 'swir1', \n #'swir2'\n ]\n\n#Define sensors of interest\nsensors = ['ls8', 'ls7', 'ls5'] \n\nquery = {'time': (start_of_epoch, end_of_epoch)}\nlat_max = -15.94\nlat_min = -15.98\nlon_max = 142.49522\nlon_min = 142.4485\n\nquery['x'] = (lon_min, lon_max)\nquery['y'] = (lat_max, lat_min)\nquery['crs'] = 'EPSG:4326'\n\nprint(query)", "retrieve the NBAR and PQ for the spatiotemporal range of interest", "#Define which pixel quality artefacts you want removed from the results\nmask_components = {'cloud_acca':'no_cloud',\n'cloud_shadow_acca' :'no_cloud_shadow',\n'cloud_shadow_fmask' : 'no_cloud_shadow',\n'cloud_fmask' :'no_cloud',\n'blue_saturated' : False,\n'green_saturated' : False,\n'red_saturated' : False,\n'nir_saturated' : False,\n'swir1_saturated' : False,\n'swir2_saturated' : False,\n'contiguous':True}\n\n#Retrieve the NBAR and PQ data for sensor n\nsensor_clean = {}\nfor sensor in sensors:\n #Load the NBAR and corresponding PQ\n sensor_nbar = dc.load(product= sensor+'_nbar_albers', group_by='solar_day', measurements = bands_of_interest, **query)\n sensor_pq = dc.load(product= sensor+'_pq_albers', group_by='solar_day', **query)\n #grab the projection info before masking/sorting\n crs = sensor_nbar.crs\n crswkt = sensor_nbar.crs.wkt\n affine = sensor_nbar.affine\n #This line is to make sure there's PQ to go with the NBAR\n sensor_nbar = sensor_nbar.sel(time = sensor_pq.time)\n #Apply the PQ masks to the NBAR\n cloud_free = masking.make_mask(sensor_pq, **mask_components)\n good_data = cloud_free.pixelquality.loc[start_of_epoch:end_of_epoch]\n sensor_nbar = sensor_nbar.where(good_data)\n sensor_clean[sensor] = sensor_nbar\n\n#Conctanate measurements from the different sensors together\nnbar_clean = xr.concat(sensor_clean.values(), dim='time')\ntime_sorted = nbar_clean.time.argsort()\nnbar_clean = nbar_clean.isel(time=time_sorted)\nnbar_clean.attrs['crs'] = crs\nnbar_clean.attrs['affine'] = affine\n#calculate the normalised difference vegetation index (NDVI)\nall_ndvi_sorted = ((nbar_clean.nir - nbar_clean.red)/(nbar_clean.nir + nbar_clean.red))\n\nprint('The number of time slices at this location is '+ str(nbar_clean.red.shape[0]))", "Plotting an image, select a location for extracting the hovmoller plot\nThe interactive widget allows you to select a location (x, y coordinates), the plot will then show all of the time series that fall into the same x coordinate.", "#select time slice of interest - this is trial and 
error until you get a decent image\ntime_slice_i = 481\nrgb = nbar_clean.isel(time =time_slice_i).to_array(dim='color').sel(color=['swir1', 'nir', 'green']).transpose('y', 'x', 'color')\n#rgb = nbar_clean.isel(time =time_slice).to_array(dim='color').sel(color=['swir1', 'nir', 'green']).transpose('y', 'x', 'color')\nfake_saturation = 4500\nclipped_visible = rgb.where(rgb<fake_saturation).fillna(fake_saturation)\nmax_val = clipped_visible.max(['y', 'x'])\nscaled = (clipped_visible / max_val)\n\n#Click on this image to chose the location for time series extraction\nw = widgets.HTML(\"Event information appears here when you click on the figure\")\ndef callback(event):\n global x, y\n x, y = int(event.xdata + 0.5), int(event.ydata + 0.5)\n w.value = 'X: {}, Y: {}'.format(x,y)\n\nfig = plt.figure(figsize =(12,6))\nplt.imshow(scaled, interpolation = 'nearest',\n extent=[scaled.coords['x'].min(), scaled.coords['x'].max(), \n scaled.coords['y'].min(), scaled.coords['y'].max()])\n\nfig.canvas.mpl_connect('button_press_event', callback)\ndate_ = nbar_clean.time[time_slice_i]\nplt.title(date_.astype('datetime64[D]'))\nplt.show()\ndisplay(w)\n\n#this converts the map x coordinate into image x coordinates\nimage_coords = ~affine * (x, y)\nimagex = int(image_coords[0])\nimagey = int(image_coords[1])\n\n\n#This sets up the NDVI colour ramp and corresponding thresholds\nndvi_cmap = mpl.colors.ListedColormap(['blue', '#ffcc66','#ffffcc' , '#ccff66' , '#2eb82e', '#009933' , '#006600'])\nndvi_bounds = [-1, 0, 0.1, 0.25, 0.35, 0.5, 0.8, 1]\nndvi_norm = mpl.colors.BoundaryNorm(ndvi_bounds, ndvi_cmap.N)\n\n#This cell shows the x transect that you've chosen in the context of an NDVI image with a suitable colour ramp\nfig = plt.figure(figsize=(11.69,4))\nplt.plot([0, all_ndvi_sorted.shape[2]], [imagey,imagey], 'r')\nplt.imshow(all_ndvi_sorted.isel(time = time_slice_i), cmap = ndvi_cmap, norm = ndvi_norm)\n\n\n#Hovmoller plot for the x transect\nfig = plt.figure(figsize=(11.69,7))\nall_ndvi_sorted.isel(#x=[xdim],\n y=[imagey]\n ).plot(norm= ndvi_norm, cmap = ndvi_cmap, yincrease = False)" ]
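Since the whole visualisation hinges on the NDVI expression computed above, here is a tiny standalone NumPy illustration of (NIR − red) / (NIR + red) on made-up reflectance values; the colour-ramp thresholds (0.1, 0.25, 0.35, ...) separate roughly these kinds of values.

```python
# NDVI = (NIR - red) / (NIR + red) on a few hypothetical surface-reflectance values.
import numpy as np

nir = np.array([3000.0, 2500.0, 400.0])   # made-up NIR reflectances
red = np.array([ 800.0, 1200.0, 350.0])   # made-up red reflectances
ndvi = (nir - red) / (nir + red)
print(ndvi)   # ~[0.58, 0.35, 0.07]: dense vegetation, moderate cover, bare/water-like
```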
[ "markdown", "code", "markdown", "code", "markdown", "code" ]
wgong/open_source_learning
learn_stem/fun_with_mypets.ipynb
apache-2.0
[ "from IPython.display import HTML\nHTML(\"\"\"\n<br><br>\n<a href=http://wwwgong.pythonanywhere.com/cuspea/default/list_talks target=new>\n<font size=+3 color=blue>CUSPEA Talks</font>\n</a>\n<br><br>\n<img src=../images/open-source-learning.jpg><br> \n\"\"\")", "Fun with MyPETS\nTable of Contents\n\nMotivation\nIntroduction\nProblem Statement\n\nImport packages\n\n\nHistory of Open Source Movement\n\n\nHow to learn STEM (or MyPETS)\n\n\nReferences\n\nContributors\nAppendix\n\nMotivation <a class=\"anchor\" id=\"hid_why\"></a>\n\nCurrent Choice\n\n<img src=http://www.cctechlimited.com/pics/office1.jpg>\n\nA New Option\n\n\nThe Jupyter Notebook is an open-source web application that allows you to create and share documents that contain live code, equations, visualizations and explanatory text. Uses include: data cleaning and transformation, numerical simulation, statistical modeling, machine learning and much more.\n\nUseful for many tasks\n\nProgramming\nBlogging\nLearning\nResearch\nDocumenting work\nCollaborating\nCommunicating\nPublishing results\n\nor even\n\nDoing homework as a student", "HTML(\"<img src=../images/office-suite.jpg>\")", "Introduction <a class=\"anchor\" id=\"hid_intro\"></a>\nProblem Statement <a class=\"anchor\" id=\"hid_problem\"></a>\nImport packages <a class=\"anchor\" id=\"hid_pkg\"></a>", "# math function\nimport math\n\n# create np array\nimport numpy as np\n\n# pandas for data analysis\nimport pandas as pd\n\n# plotting\nimport matplotlib.pyplot as plt\n%matplotlib inline\n\n# symbolic math\nimport sympy as sy\n\n# html5\nfrom IPython.display import HTML, SVG, YouTubeVideo\n\n# widgets\nfrom collections import OrderedDict\nfrom IPython.display import display, clear_output\nfrom ipywidgets import Dropdown\n\n# csv file\nimport csv", "History of Open Source Movement <a class=\"anchor\" id=\"hid_open_src\"></a>", "with open('../dataset/open_src_move_v2_1.csv') as csvfile:\n reader = csv.DictReader(csvfile)\n table_str = '<table>'\n table_row = \"\"\"\n <tr><td>{year}</td> \n <td><img src={picture}></td> \n <td><table>\n <tr><td>{person}</td></tr>\n <tr><td><a target=new href={subject_url}>{subject}</a></td></tr>\n <tr><td>{history}</td></tr>\n </table>\n </td> \n </tr>\n \"\"\"\n for row in reader:\n table_str = table_str + table_row.format(year=row['Year'], \\\n subject=row['Subject'],\\\n subject_url=row['SubjectURL'],\\\n person=row['Person'],\\\n picture=row['Picture'],\\\n history=row['History'])\n table_str = table_str + '</table>'\n \nHTML(table_str)", "How to learn STEM <a class=\"anchor\" id=\"hid_stem\"></a>", "HTML(\"Wen calls it -<br><br><br> <font color=red size=+4>M</font><font color=purple>y</font><font color=blue size=+3>P</font><font color=blue size=+4>E</font><font color=green size=+4>T</font><font color=magenta size=+3>S</font><br>\")", "Math <a class=\"anchor\" id=\"hid_math\"></a>\n\nAwesome Math\n\n$$ e^{i \\pi} + 1 = 0 $$\nsee more MathJax equations here\nScience <a class=\"anchor\" id=\"hid_science\"></a>\nPhysics <a class=\"anchor\" id=\"hid_physics\"></a>\n\nComputational Physics, 3rd Ed - Problem Solving with Python by Rubin Landau\n\nEngineering <a class=\"anchor\" id=\"hid_engineer\"></a>\n\nHow To Be A Programmer\n\nTechnology <a class=\"anchor\" id=\"hid_tech\"></a>\n\nDeep Learning for Self-Driving Cars @MIT\nDeep Learning for Natural Language Processing @Stanford\n\nReferences <a class=\"anchor\" id=\"hid_ref\"></a>\nWebsites\n\n\nDataCamp - Jupyter Notebook Tutorial\n\n\nhttp://docs.python.org\n\n\nIt goes without saying that 
Python’s own online documentation is an excellent resource if you need to delve into the finer details of the language and modules. Just make sure you’re looking at the documentation for Python 3 and not earlier versions.\nBooks\nOther Resources\n\nIdea\nGoogle Search\n\n\nText\nWikipedia\n\n\nImage\nGoogle Images\n\n\nVideo\nYouTube\n\n\n\nContributors <a class=\"anchor\" id=\"hid_author\"></a>\n\nwen.gong@oracle.com (first created on 2017-03-09)\n\nAppendix <a class=\"anchor\" id=\"hid_apend\"></a>" ]
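The sample MathJax identity shown in the Math section above can be verified symbolically with the sympy import already used in the notebook; a one-line check:

```python
# Verify e^(i*pi) + 1 = 0 symbolically (SymPy auto-evaluates exp(I*pi) to -1).
import sympy as sy
print(sy.exp(sy.I * sy.pi) + 1)   # 0
```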
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
miaecle/deepchem
examples/tutorials/05_Putting_Multitask_Learning_to_Work.ipynb
mit
[ "Tutorial Part 5: Putting Multitask Learning to Work\nThis notebook walks through the creation of multitask models on MUV [1]. The goal is to demonstrate that multitask methods outperform singletask methods on MUV.\nColab\nThis tutorial and the rest in this sequence are designed to be done in Google colab. If you'd like to open this notebook in colab, you can use the following link.\n\nSetup\nTo run DeepChem within Colab, you'll need to run the following cell of installation commands. This will take about 5 minutes to run to completion and install your environment.", "%tensorflow_version 1.x\n!curl -Lo deepchem_installer.py https://raw.githubusercontent.com/deepchem/deepchem/master/scripts/colab_install.py\nimport deepchem_installer\n%time deepchem_installer.install(version='2.3.0')", "The MUV dataset is a challenging benchmark in molecular design that consists of 17 different \"targets\" where there are only a few \"active\" compounds per target. The goal of working with this dataset is to make a machine learnign model which achieves high accuracy on held-out compounds at predicting activity. To get started, let's download the MUV dataset for us to play with.", "import os\nimport deepchem as dc\n\ncurrent_dir = os.path.dirname(os.path.realpath(\"__file__\"))\ndataset_file = \"medium_muv.csv.gz\"\nfull_dataset_file = \"muv.csv.gz\"\n\n# We use a small version of MUV to make online rendering of notebooks easy. Replace with full_dataset_file\n# In order to run the full version of this notebook\ndc.utils.download_url(\"https://s3-us-west-1.amazonaws.com/deepchem.io/datasets/%s\" % dataset_file,\n current_dir)\n\ndataset = dc.utils.save.load_from_disk(dataset_file)\nprint(\"Columns of dataset: %s\" % str(dataset.columns.values))\nprint(\"Number of examples in dataset: %s\" % str(dataset.shape[0]))", "Now, let's visualize some compounds from our dataset", "from rdkit import Chem\nfrom rdkit.Chem import Draw\nfrom itertools import islice\nfrom IPython.display import Image, display, HTML\n\ndef display_images(filenames):\n \"\"\"Helper to pretty-print images.\"\"\"\n for filename in filenames:\n display(Image(filename))\n\ndef mols_to_pngs(mols, basename=\"test\"):\n \"\"\"Helper to write RDKit mols to png files.\"\"\"\n filenames = []\n for i, mol in enumerate(mols):\n filename = \"MUV_%s%d.png\" % (basename, i)\n Draw.MolToFile(mol, filename)\n filenames.append(filename)\n return filenames\n\nnum_to_display = 12\nmolecules = []\nfor _, data in islice(dataset.iterrows(), num_to_display):\n molecules.append(Chem.MolFromSmiles(data[\"smiles\"]))\ndisplay_images(mols_to_pngs(molecules))", "There are 17 datasets total in MUV as we mentioned previously. We're going to train a multitask model that attempts to build a joint model to predict activity across all 17 datasets simultaneously. There's some evidence [2] that multitask training creates more robust models. \nAs fair warning, from my experience, this effect can be quite fragile. Nonetheless, it's a tool worth trying given how easy DeepChem makes it to build these models. 
To get started towards building our actual model, let's first featurize our data.", "MUV_tasks = ['MUV-692', 'MUV-689', 'MUV-846', 'MUV-859', 'MUV-644',\n 'MUV-548', 'MUV-852', 'MUV-600', 'MUV-810', 'MUV-712',\n 'MUV-737', 'MUV-858', 'MUV-713', 'MUV-733', 'MUV-652',\n 'MUV-466', 'MUV-832']\n\nfeaturizer = dc.feat.CircularFingerprint(size=1024)\nloader = dc.data.CSVLoader(\n tasks=MUV_tasks, smiles_field=\"smiles\",\n featurizer=featurizer)\ndataset = loader.featurize(dataset_file)", "We'll now want to split our dataset into training, validation, and test sets. We're going to do a simple random split using dc.splits.RandomSplitter. It's worth noting that this will provide overestimates of real generalizability! For better real world estimates of prospective performance, you'll want to use a harder splitter.", "splitter = dc.splits.RandomSplitter(dataset_file)\ntrain_dataset, valid_dataset, test_dataset = splitter.train_valid_test_split(\n dataset)\n#NOTE THE RENAMING:\nvalid_dataset, test_dataset = test_dataset, valid_dataset", "Let's now get started building some models! We'll do some simple hyperparameter searching to build a robust model.", "import numpy as np\nimport numpy.random\n\nparams_dict = {\"activation\": [\"relu\"],\n \"momentum\": [.9],\n \"batch_size\": [50],\n \"init\": [\"glorot_uniform\"],\n \"data_shape\": [train_dataset.get_data_shape()],\n \"learning_rate\": [1e-3],\n \"decay\": [1e-6],\n \"nb_epoch\": [1],\n \"nesterov\": [False],\n \"dropouts\": [(.5,)],\n \"nb_layers\": [1],\n \"batchnorm\": [False],\n \"layer_sizes\": [(1000,)],\n \"weight_init_stddevs\": [(.1,)],\n \"bias_init_consts\": [(1.,)],\n \"penalty\": [0.], \n } \n\n\nn_features = train_dataset.get_data_shape()[0]\ndef model_builder(model_params, model_dir):\n model = dc.models.MultitaskClassifier(\n len(MUV_tasks), n_features, **model_params)\n return model\n\nmetric = dc.metrics.Metric(dc.metrics.roc_auc_score, np.mean)\noptimizer = dc.hyper.HyperparamOpt(model_builder)\nbest_dnn, best_hyperparams, all_results = optimizer.hyperparam_search(\n params_dict, train_dataset, valid_dataset, [], metric)", "Congratulations! Time to join the Community!\nCongratulations on completing this tutorial notebook! If you enjoyed working through the tutorial, and want to continue working with DeepChem, we encourage you to finish the rest of the tutorials in this series. You can also help the DeepChem community in the following ways:\nStar DeepChem on GitHub\nThis helps build awareness of the DeepChem project and the tools for open source drug discovery that we're trying to build.\nJoin the DeepChem Gitter\nThe DeepChem Gitter hosts a number of scientists, developers, and enthusiasts interested in deep learning for the life sciences. Join the conversation!\nBibliography\n[1] https://pubs.acs.org/doi/10.1021/ci8002649\n[2] https://pubs.acs.org/doi/abs/10.1021/acs.jcim.7b00146" ]
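The metric used in the hyperparameter search above is the mean ROC-AUC across tasks. As a self-contained illustration of what that metric computes — using scikit-learn directly and made-up scores for three hypothetical tasks, not the MUV data:

```python
# Mean ROC-AUC over tasks, the quantity dc.metrics.Metric(roc_auc_score, np.mean) reports.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.RandomState(0)
y_true = rng.randint(0, 2, size=(100, 3))          # binary labels for 3 made-up tasks
y_score = 0.3 * y_true + 0.7 * rng.rand(100, 3)    # noisy scores, loosely tied to labels

per_task = [roc_auc_score(y_true[:, k], y_score[:, k]) for k in range(3)]
print(per_task, np.mean(per_task))                 # per-task AUCs and their mean
```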
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
phobson/statsmodels
examples/notebooks/tsa_filters.ipynb
bsd-3-clause
[ "Time Series Filters", "%matplotlib inline\n\nfrom __future__ import print_function\nimport pandas as pd\nimport matplotlib.pyplot as plt\n\nimport statsmodels.api as sm\n\ndta = sm.datasets.macrodata.load_pandas().data\n\nindex = pd.Index(sm.tsa.datetools.dates_from_range('1959Q1', '2009Q3'))\nprint(index)\n\ndta.index = index\ndel dta['year']\ndel dta['quarter']\n\nprint(sm.datasets.macrodata.NOTE)\n\nprint(dta.head(10))\n\nfig = plt.figure(figsize=(12,8))\nax = fig.add_subplot(111)\ndta.realgdp.plot(ax=ax);\nlegend = ax.legend(loc = 'upper left');\nlegend.prop.set_size(20);", "Hodrick-Prescott Filter\nThe Hodrick-Prescott filter separates a time-series $y_t$ into a trend $\\tau_t$ and a cyclical component $\\zeta_t$ \n$$y_t = \\tau_t + \\zeta_t$$\nThe components are determined by minimizing the following quadratic loss function\n$$\\min_{\\{ \\tau_{t}\\} }\\sum_{t}^{T}\\zeta_{t}^{2}+\\lambda\\sum_{t=1}^{T}\\left[\\left(\\tau_{t}-\\tau_{t-1}\\right)-\\left(\\tau_{t-1}-\\tau_{t-2}\\right)\\right]^{2}$$", "gdp_cycle, gdp_trend = sm.tsa.filters.hpfilter(dta.realgdp)\n\ngdp_decomp = dta[['realgdp']]\ngdp_decomp[\"cycle\"] = gdp_cycle\ngdp_decomp[\"trend\"] = gdp_trend\n\nfig = plt.figure(figsize=(12,8))\nax = fig.add_subplot(111)\ngdp_decomp[[\"realgdp\", \"trend\"]][\"2000-03-31\":].plot(ax=ax, fontsize=16);\nlegend = ax.get_legend()\nlegend.prop.set_size(20);", "Baxter-King approximate band-pass filter: Inflation and Unemployment\nExplore the hypothesis that inflation and unemployment are counter-cyclical.\nThe Baxter-King filter is intended to explictly deal with the periodicty of the business cycle. By applying their band-pass filter to a series, they produce a new series that does not contain fluctuations at higher or lower than those of the business cycle. Specifically, the BK filter takes the form of a symmetric moving average \n$$y_{t}^{*}=\\sum_{k=-K}^{k=K}a_ky_{t-k}$$\nwhere $a_{-k}=a_k$ and $\\sum_{k=-k}^{K}a_k=0$ to eliminate any trend in the series and render it stationary if the series is I(1) or I(2).\nFor completeness, the filter weights are determined as follows\n$$a_{j} = B_{j}+\\theta\\text{ for }j=0,\\pm1,\\pm2,\\dots,\\pm K$$\n$$B_{0} = \\frac{\\left(\\omega_{2}-\\omega_{1}\\right)}{\\pi}$$\n$$B_{j} = \\frac{1}{\\pi j}\\left(\\sin\\left(\\omega_{2}j\\right)-\\sin\\left(\\omega_{1}j\\right)\\right)\\text{ for }j=0,\\pm1,\\pm2,\\dots,\\pm K$$\nwhere $\\theta$ is a normalizing constant such that the weights sum to zero.\n$$\\theta=\\frac{-\\sum_{j=-K^{K}b_{j}}}{2K+1}$$\n$$\\omega_{1}=\\frac{2\\pi}{P_{H}}$$\n$$\\omega_{2}=\\frac{2\\pi}{P_{L}}$$\n$P_L$ and $P_H$ are the periodicity of the low and high cut-off frequencies. Following Burns and Mitchell's work on US business cycles which suggests cycles last from 1.5 to 8 years, we use $P_L=6$ and $P_H=32$ by default.", "bk_cycles = sm.tsa.filters.bkfilter(dta[[\"infl\",\"unemp\"]])", "We lose K observations on both ends. It is suggested to use K=12 for quarterly data.", "fig = plt.figure(figsize=(12,10))\nax = fig.add_subplot(111)\nbk_cycles.plot(ax=ax, style=['r--', 'b-']);", "Christiano-Fitzgerald approximate band-pass filter: Inflation and Unemployment\nThe Christiano-Fitzgerald filter is a generalization of BK and can thus also be seen as weighted moving average. However, the CF filter is asymmetric about $t$ as well as using the entire series. 
The implementation of their filter involves the\ncalculations of the weights in\n$$y_{t}^{*}=B_{0}y_{t}+B_{1}y_{t+1}+\\dots+B_{T-1-t}y_{T-1}+\\tilde B_{T-t}y_{T}+B_{1}y_{t-1}+\\dots+B_{t-2}y_{2}+\\tilde B_{t-1}y_{1}$$\nfor $t=3,4,...,T-2$, where\n$$B_{j} = \\frac{\\sin(jb)-\\sin(ja)}{\\pi j},j\\geq1$$\n$$B_{0} = \\frac{b-a}{\\pi},a=\\frac{2\\pi}{P_{u}},b=\\frac{2\\pi}{P_{L}}$$\n$\\tilde B_{T-t}$ and $\\tilde B_{t-1}$ are linear functions of the $B_{j}$'s, and the values for $t=1,2,T-1,$ and $T$ are also calculated in much the same way. $P_{U}$ and $P_{L}$ are as described above with the same interpretation.\nThe CF filter is appropriate for series that may follow a random walk.", "print(sm.tsa.stattools.adfuller(dta['unemp'])[:3])\n\nprint(sm.tsa.stattools.adfuller(dta['infl'])[:3])\n\ncf_cycles, cf_trend = sm.tsa.filters.cffilter(dta[[\"infl\",\"unemp\"]])\nprint(cf_cycles.head(10))\n\nfig = plt.figure(figsize=(14,10))\nax = fig.add_subplot(111)\ncf_cycles.plot(ax=ax, style=['r--','b-']);", "Filtering assumes a priori that business cycles exist. Due to this assumption, many macroeconomic models seek to create models that match the shape of impulse response functions rather than replicating properties of filtered series. See VAR notebook." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
gastonstat/stat259
tutorials/genotypes.ipynb
mit
[ "Python Basics\nThis notebook will allow you to practice some basic skills for using python: working with different data types, using various data structures, reading and writing text files, using conditionals, control flow structures, creating functions, and of course working with ipython notebooks.\nMotivation:\nA biologist is interested in the genetic basis of height. She measures the heights of many subjects and sends off their DNA samples to a core for genotyping arrays. These arrays determine the DNA bases at the variable sites of the genome (known as single nucleotide polymorphisms, or SNPs). Since humans are diploid, i.e. have two of each chromosome, each data point will be two DNA bases corresponding to the two chromosomes in each individual. At each SNP, there will be only three possible genotypes, e.g. AA, AG, GG for an A/G SNP. In order to test the correlation between a SNP genotype and height, she wants to perform a regression with an additive genetic model. However, she cannot do this with the data in the current form. She needs to convert the genotypes, e.g. AA, AG, and GG, to the numbers 0, 1, and 2, respectively (in the example the number corresponds the number of G bases the person has at that SNP). Since she has too much data to do this manually, e.g. in Excel, she comes to you for ideas of how to efficiently transform the data.\nPart 1:\nCreate a new list which has the converted genotype for each subject ('AA' -> 0, 'AG' -> 1, 'GG' -> 2).", "genos = ['AA', 'GG', 'AG', 'AG', 'GG']\ngenos_new = []\n# Use your knowledge of if/else statements and loop structures below.", "Check your work", "genos_new == [0, 2, 1, 1, 2]", "Part 2:\nSometimes there are errors and the genotype cannot be determined. Adapt your code from above to deal with this problem (in this example missing data is assigned NA for \"Not Available\").", "genos_w_missing = ['AA', 'NA', 'GG', 'AG', 'AG', 'GG', 'NA']\ngenos_w_missing_new = []\n# The missing data should not be converted to a number, but remain 'NA' in the new list", "Check your work", "genos_w_missing_new == [0, 'NA', 2, 1, 1, 2, 'NA']", "Main Practice\nSetup: Open a terminal and run the following commands:\n```bash\ncreate a new directory\nmkdir python-intro\ncd python-intro\ndownload data file, and ipython notebook\ncurl -O https://raw.githubusercontent.com/gastonstat/stat259/gh-pages/tutorials/genos.txt\ncurl -O https://raw.githubusercontent.com/gastonstat/stat259/gh-pages/tutorials/genotypes.ipynb\n```\nData File: The raw data for this practice is in the file genos.txt which contains one column of genotypes (one genotype per row). Each genotype consists of two characters: e.g. 'AA' or 'GG'. In addition, there are some rows that contain missing values denoted as 'NA'.\nI. Read in the data and store the contents in a list called genos.\nII. Find out how what are the different (i.e. unique) values are in genos.\nIII. Calculate the number of occurrences of each genotype, and store the results in a dictionary called geno_counts. Use the following 3 approaches:\n1. Use a for loop to count the genotypes (store the result in a dictionary)\n2. Get the same counts but this time using the count() method\n3. Another alternative is to use Counter from Collections\nIV. Once you've counted the genotypes, make a function get_proportions() that takes geno_counts and returns a dictionary with relative frequencies (i.e. proportions) of genotypes. Also, test your function with the provided assertion.\nV. 
Convert the string values in genos into integers ('NA' remains as 'NA') and put them in a new list called numeric_genos:\n- 'AA' = 0\n- 'AG' = 1\n- 'GG' = 2\n- 'NA' = 'NA'\nVI. Write the data in numeric_genos to a text file called genos_int.txt\nVII. Finally, convert your notebook to html (and open it) by running these commands from the shell:\nshell\nipython nbconvert genotypes.ipynb\nopen genotypes.html", "# things to be imported\nfrom __future__ import division # if you use python 2.?\nfrom collections import Counter", "I. Reading a text file\nSome refs about Reading Files:\n\nFile Operations: https://github.com/dlab-berkeley/python-fundamentals/blob/master/cheat-sheets/12-Files.ipynb\nReading Text Files: http://www.jarrodmillman.com/rcsds/lectures/reading_text_files.html", "# open 'genos.txt' and store values in \"genos\"\n# YOUR CODE", "II. Unique Genotypes", "# Find the unique values in genos\n# YOUR CODE", "III. Counting Genotypes\na) Using a for loop", "# For loop to count occurrences of AA, AG, GG, NA\n# (store results in dictionary \"geno_counts\")\n# YOUR CODE", "b) Using count method", "# YOUR CODE", "c) Using Counter from collections", "# YOUR CODE", "IV. Function to Calculate Proportions", "# Write a function \"get_proportions()\"\n# Parameters: geno_counts (dictionary)\n# Returns: dictionary of proportions\n# YOUR CODE\n\n# apply your function:\n# get_proportions(geno_counts)\n\n# test for function get_proportions()\ndef test_get_proportions():\n # We make a fake dictionary\n input_val = {'AA': 2, 'AB': 4, 'BB': 14}\n expected_result = {'AA': 0.1, 'AB': 0.2, 'BB': 0.7}\n # run function\n res = get_proportions(input_val)\n assert res == expected_result\n\n# run the test and see what happens:\n# test_get_proportions()", "V. Converting to numeric genotypes", "# convert genotypes: AA = 0, AG = 1, GG = 2, NA = NA\n# (create a list called \"numeric_genos\")\n# YOUR CODE", "VI. Write Numeric Genotypes to a text file", "# write values in \"numeric_genos\" to a file \"genos_int.txt\"\n# YOUR CODE" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
poldrack/fmri-analysis-vm
analysis/efficiency/DesignEfficiency.ipynb
mit
[ "This notebook covers the concepts underlying design efficiency.\nIn order to examine the factors that affect efficiency, we need to be able to generate experimental designs that vary in their timing and correlation between regressors. Let's first create a function that can generate such designs for us.", "import os\nimport numpy\n%matplotlib inline\nimport sys\nsys.path.insert(0,'../utils')\nfrom mkdesign import create_design_singlecondition\nimport matplotlib.pyplot as plt\n#from spm_hrf import spm_hrf\nfrom nipy.modalities.fmri.hemodynamic_models import spm_hrf,compute_regressor\n\ntr=1.0\n\n# the \"blockiness\" argument controls how block-y the design is\n# from 1( pure block) to 0 (pure random)\nd,design=create_design_singlecondition(blockiness=0.95)\nregressor,_=compute_regressor(design,\n 'spm',numpy.arange(0,len(d)))\nplt.axis([0,400,-0.2,1.2])\nplt.plot(d)\nplt.plot(regressor,color='red')", "Now that we have our design, let's generate some synthetic data. We will generate AR1 noise to add to the data; this is not a perfect model of the autocorrelation in fMRI, but it's at least a start towards realistic noise.", "from statsmodels.tsa.arima_process import arma_generate_sample\n\nar1_noise=arma_generate_sample([1,0.3],[1,0.],len(regressor))\nbeta=4\ny=regressor.T*beta + ar1_noise\nprint y.shape\n\nplt.plot(y.T)", "Now let's fit the general linear model to these data. We will ignore serial autocorrelation for now.", "X=numpy.vstack((regressor.T,numpy.ones(y.shape))).T\nplt.imshow(X,interpolation='nearest',cmap='gray')\nplt.axis('auto')\n\nbeta_hat=numpy.linalg.inv(X.T.dot(X)).dot(X.T).dot(y.T)\ny_est=X.dot(beta_hat)\nplt.plot(y.T,color='blue')\nplt.plot(y_est,color='red',linewidth=2)\nprint X.shape", "Now let's make a function to repeatedly generate data and fit the model.", "def efficiency_older(X,c=None):\n if not c==None:\n c=numpy.ones((X.shape[1]))\n else:\n c=numpy.array(c)\n return 1./c.dot(numpy.linalg.inv(X.T.dot(X))).dot(c)\n\ndef efficiency(X,c=None):\n \"\"\" remove the intercept\"\"\"\n if not c==None:\n c=numpy.ones((X.shape[1]))\n else:\n c=numpy.array(c)\n return 1./numpy.trace((numpy.linalg.inv(X[:,:-1].T.dot(X[:,:-1]))))\n", "Now let's write a simulation that creates datasets with varying levels of blockiness, runs the previous function 1000 times for each level, and plots mean efficiency. Note that we don't actually need to run it 1000 times for blockiness=1, since that design is exactly the same each time.", "nruns=100\nblockiness_vals=numpy.arange(0,1.1,0.1)\nmeaneff_blockiness=numpy.zeros(len(blockiness_vals))\n\nfor b in range(len(blockiness_vals)):\n eff=numpy.zeros(nruns)\n for i in range(nruns):\n d_sim,design_sim=create_design_singlecondition(blockiness=blockiness_vals[b])\n regressor_sim,_=compute_regressor(design_sim,'spm',numpy.arange(0,len(d_sim)))\n X=numpy.vstack((regressor_sim.T,numpy.ones(y.shape))).T\n eff[i]=efficiency(X,c=[1,0])\n meaneff_blockiness[b]=numpy.mean(eff)\n\nplt.plot(blockiness_vals,meaneff_blockiness)\nplt.xlabel('blockiness')\nplt.ylabel('efficiency')\n\nX.shape", "Now let's do a similar simulation looking at the effects of varying block length between 10 seconds and 120 seconds (in steps of 10). 
since blockiness is 1.0 here, we only need one run per block length.", "blocklenvals=numpy.arange(10,120,1)\nmeaneff_blocklen=numpy.zeros(len(blocklenvals))\nsims=[]\nfor b in range(len(blocklenvals)):\n d_sim,design_sim=create_design_singlecondition(blocklength=blocklenvals[b],blockiness=1.)\n regressor_sim,_=compute_regressor(design_sim,'spm',numpy.arange(0,len(d_sim)))\n X=numpy.vstack((regressor_sim.T,numpy.ones(y.shape))).T\n sims.append(numpy.mean(regressor_sim))\n meaneff_blocklen[b]=efficiency(X,c=[1,0])\n\nplt.plot(blocklenvals,meaneff_blocklen)\nplt.xlabel('block length')\nplt.ylabel('efficiency')", "Now let's look at the effects of correlation between regressors. We first need to create a function to generate a design with two conditions where we can control the correlation between them.", "from mkdesign import create_design_twocondition\n\nd,des1,des2=create_design_twocondition(correlation=1.0)\nregressor1,_=compute_regressor(des1,'spm',numpy.arange(0,d.shape[0]))\nregressor2,_=compute_regressor(des2,'spm',numpy.arange(0,d.shape[0]))\n\nX=numpy.vstack((regressor1.T,regressor2.T,numpy.ones(y.shape))).T\n\nnruns=100\ncorr_vals_intended=numpy.arange(-1,1.1,0.1)\n\ncorr_vals=numpy.zeros(len(corr_vals_intended))\n\nmeaneff_corr=numpy.zeros(len(corr_vals))\nsumx=numpy.zeros(len(corr_vals))\n\nfor b in range(len(corr_vals_intended)):\n eff=numpy.zeros(nruns)\n corrs=numpy.zeros(nruns)\n for i in range(nruns):\n d_sim,des1_sim,des2_sim=create_design_twocondition(correlation=corr_vals_intended[b])\n regressor1_sim,_=compute_regressor(des1_sim,'spm',numpy.arange(0,d_sim.shape[0]))\n regressor2_sim,_=compute_regressor(des2_sim,'spm',numpy.arange(0,d_sim.shape[0]))\n X=numpy.vstack((regressor1_sim.T,regressor2_sim.T,numpy.ones(y.shape))).T\n # use contrast of first regressor\n eff[i]=efficiency(X,c=[1,0,0])\n corrs[i]=numpy.corrcoef(X.T)[0,1]\n corr_vals[b]=numpy.mean(corrs)\n sumx[b]=numpy.sum(X[:,0])\n meaneff_corr[b]=numpy.mean(eff)\n\nplt.plot(corr_vals,meaneff_corr)\nplt.xlabel('mean correlation between regressors')\nplt.ylabel('efficiency')\n", "Now let's look at efficiency of estimation of the shape of the HRF, rather than detection of the activation effect. This requires that we use a finite impulse response (FIR) model.", "d,design=create_design_singlecondition(blockiness=0.0)\nregressor,_=compute_regressor(design,'fir',numpy.arange(0,len(d)),fir_delays=numpy.arange(0,16))\nplt.imshow(regressor[:50,:],interpolation='nearest',cmap='gray')", "Now let's simulate the FIR model, and estimate the variance of the fits.", "\nnruns=100\nblockiness_vals=numpy.arange(0,1.1,0.1)\nmeaneff_fit_blockiness=numpy.zeros(len(blockiness_vals))\nmeancorr=[]\nfor b in range(len(blockiness_vals)):\n eff=numpy.zeros(nruns)\n cc=numpy.zeros(nruns)\n for i in range(nruns):\n d_sim,design_sim=create_design_singlecondition(blockiness=blockiness_vals[b])\n regressor_sim,_=compute_regressor(design_sim,'fir',\n numpy.arange(0,len(d_sim)),fir_delays=numpy.arange(0,16))\n X=numpy.vstack((regressor_sim.T,numpy.ones(regressor_sim.shape[0]))).T\n eff[i]=efficiency(X)\n cc[i]=numpy.corrcoef(X.T)[0,1]\n \n meaneff_fit_blockiness[b]=numpy.mean(eff)\n meancorr.append(numpy.mean(cc))\n\nplt.plot(blockiness_vals,meaneff_fit_blockiness)\nplt.xlabel('blockiness')\nplt.ylabel('efficiency')\n\nplt.plot(blockiness_vals,meancorr)\n\n__Exercise:__ write a function to generate random designs, and then do this a large number of times, each time estimating the efficiency. Then plot the histogram of efficiencies. " ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
mcs07/ChemSpiPy
examples/Getting Started.ipynb
mit
[ "ChemSpiPy: Getting Started\nBefore we start:\n\nMake sure you have installed ChemSpiPy.\nObtain a security token from the ChemSpider web site.\n\nFirst Steps\nStart by importing ChemSpider:", "from chemspipy import ChemSpider", "Then connect to ChemSpider by creating a ChemSpider instance using your security token:", "# Tip: Store your security token as an environment variable to reduce the chance of accidentally sharing it\nimport os\nmytoken = os.environ['CHEMSPIDER_SECURITY_TOKEN']\n\ncs = ChemSpider(security_token=mytoken)", "All your interaction with the ChemSpider database should now happen through this ChemSpider object, cs.\nRetrieve a Compound\nRetrieving information about a specific Compound in the ChemSpider database is simple.\nLet’s get the Compound with ChemSpider ID 2157:", "comp = cs.get_compound(2157)\ncomp", "Now we have a Compound object called comp. We can get various identifiers and calculated properties from this object:", "print(comp.molecular_formula)\nprint(comp.molecular_weight)\nprint(comp.smiles)\nprint(comp.common_name)", "Search for a name\nWhat if you don’t know the ChemSpider ID of the Compound you want? Instead use the search method:", "for result in cs.search('glucose'):\n print(result)", "The search method accepts any identifer that ChemSpider can interpret, including names, registry numbers, SMILES and InChI." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
ES-DOC/esdoc-jupyterhub
notebooks/nasa-giss/cmip6/models/giss-e2-1h/seaice.ipynb
gpl-3.0
[ "ES-DOC CMIP6 Model Properties - Seaice\nMIP Era: CMIP6\nInstitute: NASA-GISS\nSource ID: GISS-E2-1H\nTopic: Seaice\nSub-Topics: Dynamics, Thermodynamics, Radiative Processes. \nProperties: 80 (63 required)\nModel descriptions: Model description details\nInitialized From: -- \nNotebook Help: Goto notebook help page\nNotebook Initialised: 2018-02-15 16:54:20\nDocument Setup\nIMPORTANT: to be executed each time you run the notebook", "# DO NOT EDIT ! \nfrom pyesdoc.ipython.model_topic import NotebookOutput \n\n# DO NOT EDIT ! \nDOC = NotebookOutput('cmip6', 'nasa-giss', 'giss-e2-1h', 'seaice')", "Document Authors\nSet document authors", "# Set as follows: DOC.set_author(\"name\", \"email\") \n# TODO - please enter value(s)", "Document Contributors\nSpecify document contributors", "# Set as follows: DOC.set_contributor(\"name\", \"email\") \n# TODO - please enter value(s)", "Document Publication\nSpecify document publication status", "# Set publication status: \n# 0=do not publish, 1=publish. \nDOC.set_publication_status(0)", "Document Table of Contents\n1. Key Properties --&gt; Model\n2. Key Properties --&gt; Variables\n3. Key Properties --&gt; Seawater Properties\n4. Key Properties --&gt; Resolution\n5. Key Properties --&gt; Tuning Applied\n6. Key Properties --&gt; Key Parameter Values\n7. Key Properties --&gt; Assumptions\n8. Key Properties --&gt; Conservation\n9. Grid --&gt; Discretisation --&gt; Horizontal\n10. Grid --&gt; Discretisation --&gt; Vertical\n11. Grid --&gt; Seaice Categories\n12. Grid --&gt; Snow On Seaice\n13. Dynamics\n14. Thermodynamics --&gt; Energy\n15. Thermodynamics --&gt; Mass\n16. Thermodynamics --&gt; Salt\n17. Thermodynamics --&gt; Salt --&gt; Mass Transport\n18. Thermodynamics --&gt; Salt --&gt; Thermodynamics\n19. Thermodynamics --&gt; Ice Thickness Distribution\n20. Thermodynamics --&gt; Ice Floe Size Distribution\n21. Thermodynamics --&gt; Melt Ponds\n22. Thermodynamics --&gt; Snow Processes\n23. Radiative Processes \n1. Key Properties --&gt; Model\nName of seaice model used.\n1.1. Model Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of sea ice model.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.model.model_overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.2. Model Name\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nName of sea ice model code (e.g. CICE 4.2, LIM 2.1, etc.)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.model.model_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "2. Key Properties --&gt; Variables\nList of prognostic variable in the sea ice model.\n2.1. Prognostic\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nList of prognostic variables in the sea ice component.", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.seaice.key_properties.variables.prognostic') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Sea ice temperature\" \n# \"Sea ice concentration\" \n# \"Sea ice thickness\" \n# \"Sea ice volume per grid cell area\" \n# \"Sea ice u-velocity\" \n# \"Sea ice v-velocity\" \n# \"Sea ice enthalpy\" \n# \"Internal ice stress\" \n# \"Salinity\" \n# \"Snow temperature\" \n# \"Snow depth\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "3. Key Properties --&gt; Seawater Properties\nProperties of seawater relevant to sea ice\n3.1. Ocean Freezing Point\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nEquation used to compute the freezing point (in deg C) of seawater, as a function of salinity and pressure", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"TEOS-10\" \n# \"Constant\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "3.2. Ocean Freezing Point Value\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf using a constant seawater freezing point, specify this value.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point_value') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "4. Key Properties --&gt; Resolution\nResolution of the sea ice grid\n4.1. Name\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThis is a string usually used by the modelling group to describe the resolution of this grid e.g. N512L180, T512L70, ORCA025 etc.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.resolution.name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "4.2. Canonical Horizontal Resolution\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nExpression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.resolution.canonical_horizontal_resolution') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "4.3. Number Of Horizontal Gridpoints\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTotal number of horizontal (XY) points (or degrees of freedom) on computational grid.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.resolution.number_of_horizontal_gridpoints') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "5. Key Properties --&gt; Tuning Applied\nTuning applied to sea ice model component\n5.1. Description\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nGeneral overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process oriented metrics, and on the possible conflicts with parameterization level tuning. 
In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.tuning_applied.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "5.2. Target\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nWhat was the aim of tuning, e.g. correct sea ice minima, correct seasonal cycle.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.tuning_applied.target') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "5.3. Simulations\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\n*Which simulations had tuning applied, e.g. all, not historical, only pi-control? *", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.tuning_applied.simulations') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "5.4. Metrics Used\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nList any observed metrics used in tuning model/parameters", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.tuning_applied.metrics_used') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "5.5. Variables\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nWhich variables were changed during the tuning process?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.tuning_applied.variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6. Key Properties --&gt; Key Parameter Values\nValues of key parameters\n6.1. Typical Parameters\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nWhat values were specificed for the following parameters if used?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.key_parameter_values.typical_parameters') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Ice strength (P*) in units of N m{-2}\" \n# \"Snow conductivity (ks) in units of W m{-1} K{-1} \" \n# \"Minimum thickness of ice created in leads (h0) in units of m\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "6.2. Additional Parameters\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nIf you have any additional paramterised values that you have used (e.g. minimum open water fraction or bare ice albedo), please provide them here as a comma separated list", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.key_parameter_values.additional_parameters') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "7. Key Properties --&gt; Assumptions\nAssumptions made in the sea ice model\n7.1. Description\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nGeneral overview description of any key assumptions made in this model.", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.seaice.key_properties.assumptions.description') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "7.2. On Diagnostic Variables\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nNote any assumptions that specifically affect the CMIP6 diagnostic sea ice variables.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.assumptions.on_diagnostic_variables') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "7.3. Missing Processes\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nList any key processes missing in this model configuration? Provide full details where this affects the CMIP6 diagnostic sea ice variables?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.assumptions.missing_processes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8. Key Properties --&gt; Conservation\nConservation in the sea ice component\n8.1. Description\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nProvide a general description of conservation methodology.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.conservation.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.2. Properties\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nProperties conserved in sea ice by the numerical schemes.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.conservation.properties') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Energy\" \n# \"Mass\" \n# \"Salt\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "8.3. Budget\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nFor each conserved property, specify the output variables which close the related budgets. as a comma separated list. For example: Conserved property, variable1, variable2, variable3", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.conservation.budget') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.4. Was Flux Correction Used\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDoes conservation involved flux correction?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.conservation.was_flux_correction_used') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "8.5. Corrected Conserved Prognostic Variables\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nList any variables which are conserved by more than the numerical scheme alone.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.conservation.corrected_conserved_prognostic_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "9. Grid --&gt; Discretisation --&gt; Horizontal\nSea ice discretisation in the horizontal\n9.1. 
Grid\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nGrid on which sea ice is horizontal discretised?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Ocean grid\" \n# \"Atmosphere Grid\" \n# \"Own Grid\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "9.2. Grid Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nWhat is the type of sea ice grid?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Structured grid\" \n# \"Unstructured grid\" \n# \"Adaptive grid\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "9.3. Scheme\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nWhat is the advection scheme?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.discretisation.horizontal.scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Finite differences\" \n# \"Finite elements\" \n# \"Finite volumes\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "9.4. Thermodynamics Time Step\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nWhat is the time step in the sea ice model thermodynamic component in seconds.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.discretisation.horizontal.thermodynamics_time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "9.5. Dynamics Time Step\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nWhat is the time step in the sea ice model dynamic component in seconds.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.discretisation.horizontal.dynamics_time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "9.6. Additional Details\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nSpecify any additional horizontal discretisation details.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.discretisation.horizontal.additional_details') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "10. Grid --&gt; Discretisation --&gt; Vertical\nSea ice vertical properties\n10.1. Layering\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nWhat type of sea ice vertical layers are implemented for purposes of thermodynamic calculations?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.discretisation.vertical.layering') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Zero-layer\" \n# \"Two-layers\" \n# \"Multi-layers\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "10.2. Number Of Layers\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIf using multi-layers specify how many.", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.seaice.grid.discretisation.vertical.number_of_layers') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "10.3. Additional Details\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nSpecify any additional vertical grid details.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.discretisation.vertical.additional_details') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "11. Grid --&gt; Seaice Categories\nWhat method is used to represent sea ice categories ?\n11.1. Has Mulitple Categories\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nSet to true if the sea ice model has multiple sea ice categories.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.seaice_categories.has_mulitple_categories') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "11.2. Number Of Categories\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIf using sea ice categories specify how many.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.seaice_categories.number_of_categories') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "11.3. Category Limits\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIf using sea ice categories specify each of the category limits.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.seaice_categories.category_limits') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "11.4. Ice Thickness Distribution Scheme\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the sea ice thickness distribution scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.seaice_categories.ice_thickness_distribution_scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "11.5. Other\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf the sea ice model does not use sea ice categories specify any additional details. For example models that paramterise the ice thickness distribution ITD (i.e there is no explicit ITD) but there is assumed distribution and fluxes are computed accordingly.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.seaice_categories.other') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "12. Grid --&gt; Snow On Seaice\nSnow on sea ice details\n12.1. Has Snow On Ice\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs snow on ice represented in this model?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.snow_on_seaice.has_snow_on_ice') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "12.2. Number Of Snow Levels\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nNumber of vertical levels of snow on ice?", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.seaice.grid.snow_on_seaice.number_of_snow_levels') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "12.3. Snow Fraction\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe how the snow fraction on sea ice is determined", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.snow_on_seaice.snow_fraction') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "12.4. Additional Details\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nSpecify any additional details related to snow on ice.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.snow_on_seaice.additional_details') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "13. Dynamics\nSea Ice Dynamics\n13.1. Horizontal Transport\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nWhat is the method of horizontal advection of sea ice?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.dynamics.horizontal_transport') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Incremental Re-mapping\" \n# \"Prather\" \n# \"Eulerian\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "13.2. Transport In Thickness Space\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nWhat is the method of sea ice transport in thickness space (i.e. in thickness categories)?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.dynamics.transport_in_thickness_space') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Incremental Re-mapping\" \n# \"Prather\" \n# \"Eulerian\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "13.3. Ice Strength Formulation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nWhich method of sea ice strength formulation is used?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.dynamics.ice_strength_formulation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Hibler 1979\" \n# \"Rothrock 1975\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "13.4. Redistribution\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nWhich processes can redistribute sea ice (including thickness)?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.dynamics.redistribution') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Rafting\" \n# \"Ridging\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "13.5. Rheology\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nRheology, what is the ice deformation formulation?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.dynamics.rheology') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Free-drift\" \n# \"Mohr-Coloumb\" \n# \"Visco-plastic\" \n# \"Elastic-visco-plastic\" \n# \"Elastic-anisotropic-plastic\" \n# \"Granular\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "14. Thermodynamics --&gt; Energy\nProcesses related to energy in sea ice thermodynamics\n14.1. 
Enthalpy Formulation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nWhat is the energy formulation?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.energy.enthalpy_formulation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Pure ice latent heat (Semtner 0-layer)\" \n# \"Pure ice latent and sensible heat\" \n# \"Pure ice latent and sensible heat + brine heat reservoir (Semtner 3-layer)\" \n# \"Pure ice latent and sensible heat + explicit brine inclusions (Bitz and Lipscomb)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "14.2. Thermal Conductivity\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nWhat type of thermal conductivity is used?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.energy.thermal_conductivity') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Pure ice\" \n# \"Saline ice\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "14.3. Heat Diffusion\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nWhat is the method of heat diffusion?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.energy.heat_diffusion') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Conduction fluxes\" \n# \"Conduction and radiation heat fluxes\" \n# \"Conduction, radiation and latent heat transport\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "14.4. Basal Heat Flux\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nMethod by which basal ocean heat flux is handled?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.energy.basal_heat_flux') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Heat Reservoir\" \n# \"Thermal Fixed Salinity\" \n# \"Thermal Varying Salinity\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "14.5. Fixed Salinity Value\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf you have selected {Thermal properties depend on S-T (with fixed salinity)}, supply fixed salinity value for each sea ice layer.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.energy.fixed_salinity_value') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "14.6. Heat Content Of Precipitation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the method by which the heat content of precipitation is handled.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.energy.heat_content_of_precipitation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "14.7. Precipitation Effects On Salinity\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf precipitation (freshwater) that falls on sea ice affects the ocean surface salinity please provide further details.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.energy.precipitation_effects_on_salinity') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "15. 
Thermodynamics --&gt; Mass\nProcesses related to mass in sea ice thermodynamics\n15.1. New Ice Formation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the method by which new sea ice is formed in open water.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.mass.new_ice_formation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "15.2. Ice Vertical Growth And Melt\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the method that governs the vertical growth and melt of sea ice.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.mass.ice_vertical_growth_and_melt') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "15.3. Ice Lateral Melting\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nWhat is the method of sea ice lateral melting?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.mass.ice_lateral_melting') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Floe-size dependent (Bitz et al 2001)\" \n# \"Virtual thin ice melting (for single-category)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "15.4. Ice Surface Sublimation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the method that governs sea ice surface sublimation.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.mass.ice_surface_sublimation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "15.5. Frazil Ice\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the method of frazil ice formation.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.mass.frazil_ice') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "16. Thermodynamics --&gt; Salt\nProcesses related to salt in sea ice thermodynamics.\n16.1. Has Multiple Sea Ice Salinities\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDoes the sea ice model use two different salinities: one for thermodynamic calculations; and one for the salt budget?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.salt.has_multiple_sea_ice_salinities') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "16.2. Sea Ice Salinity Thermal Impacts\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDoes sea ice salinity impact the thermal properties of sea ice?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.salt.sea_ice_salinity_thermal_impacts') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "17. Thermodynamics --&gt; Salt --&gt; Mass Transport\nMass transport of salt\n17.1. Salinity Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nHow is salinity determined in the mass transport of salt calculation?", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.salinity_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Constant\" \n# \"Prescribed salinity profile\" \n# \"Prognostic salinity profile\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "17.2. Constant Salinity Value\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf using a constant salinity value specify this value in PSU?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.constant_salinity_value') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "17.3. Additional Details\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the salinity profile used.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.additional_details') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "18. Thermodynamics --&gt; Salt --&gt; Thermodynamics\nSalt thermodynamics\n18.1. Salinity Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nHow is salinity determined in the thermodynamic calculation?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.salinity_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Constant\" \n# \"Prescribed salinity profile\" \n# \"Prognostic salinity profile\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "18.2. Constant Salinity Value\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf using a constant salinity value specify this value in PSU?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.constant_salinity_value') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "18.3. Additional Details\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the salinity profile used.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.additional_details') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "19. Thermodynamics --&gt; Ice Thickness Distribution\nIce thickness distribution details.\n19.1. Representation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nHow is the sea ice thickness distribution represented?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.ice_thickness_distribution.representation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Explicit\" \n# \"Virtual (enhancement of thermal conductivity, thin ice melting)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "20. Thermodynamics --&gt; Ice Floe Size Distribution\nIce floe-size distribution details.\n20.1. Representation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nHow is the sea ice floe-size represented?", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.representation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Explicit\" \n# \"Parameterised\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "20.2. Additional Details\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nPlease provide further details on any parameterisation of floe-size.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.additional_details') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "21. Thermodynamics --&gt; Melt Ponds\nCharacteristics of melt ponds.\n21.1. Are Included\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nAre melt ponds included in the sea ice model?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.are_included') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "21.2. Formulation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nWhat method of melt pond formulation is used?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.formulation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Flocco and Feltham (2010)\" \n# \"Level-ice melt ponds\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "21.3. Impacts\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nWhat do melt ponds have an impact on?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.impacts') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Albedo\" \n# \"Freshwater\" \n# \"Heat\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "22. Thermodynamics --&gt; Snow Processes\nThermodynamic processes in snow on sea ice\n22.1. Has Snow Aging\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nSet to True if the sea ice model has a snow aging scheme.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_aging') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "22.2. Snow Aging Scheme\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the snow aging scheme.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_aging_scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "22.3. Has Snow Ice Formation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nSet to True if the sea ice model has snow ice formation.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_ice_formation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "22.4. 
Snow Ice Formation Scheme\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the snow ice formation scheme.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_ice_formation_scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "22.5. Redistribution\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nWhat is the impact of ridging on snow cover?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.snow_processes.redistribution') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "22.6. Heat Diffusion\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nWhat is the heat diffusion through snow methodology in sea ice thermodynamics?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.snow_processes.heat_diffusion') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Single-layered heat diffusion\" \n# \"Multi-layered heat diffusion\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "23. Radiative Processes\nSea Ice Radiative Processes\n23.1. Surface Albedo\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nMethod used to handle surface albedo.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.radiative_processes.surface_albedo') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Delta-Eddington\" \n# \"Parameterized\" \n# \"Multi-band albedo\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "23.2. Ice Radiation Transmission\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nMethod by which solar radiation through sea ice is handled.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.radiative_processes.ice_radiation_transmission') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Delta-Eddington\" \n# \"Exponential attenuation\" \n# \"Ice radiation transmission per category\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "©2017 ES-DOC" ]
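For illustration, one of the TODO cells in the notebook above might be completed as follows. This is only a sketch of the documented set_id/set_value pattern: the melt-pond answers chosen are placeholders rather than a description of any real model, and DOC is the document object created earlier in that notebook.

```python
# Placeholder completion of two of the melt-pond properties, following the
# "Set as follows: DOC.set_value(...)" pattern shown in the cells above.
# The values are illustrative only.
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.are_included')
DOC.set_value(True)

DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.formulation')
DOC.set_value("Level-ice melt ponds")
```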
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
martinggww/lucasenlights
MachineLearning/DataScience-Python3/PolynomialRegression.ipynb
cc0-1.0
[ "Polynomial Regression\nWhat if your data doesn't look linear at all? Let's look at some more realistic-looking page speed / purchase data:", "%matplotlib inline\nfrom pylab import *\nimport numpy as np\n\nnp.random.seed(2)\npageSpeeds = np.random.normal(3.0, 1.0, 1000)\npurchaseAmount = np.random.normal(50.0, 10.0, 1000) / pageSpeeds\n\nscatter(pageSpeeds, purchaseAmount)", "numpy has a handy polyfit function we can use, to let us construct an nth-degree polynomial model of our data that minimizes squared error. Let's try it with a 4th degree polynomial:", "x = np.array(pageSpeeds)\ny = np.array(purchaseAmount)\n\np4 = np.poly1d(np.polyfit(x, y, 4))\n", "We'll visualize our original scatter plot, together with a plot of our predicted values using the polynomial for page speed times ranging from 0-7 seconds:", "import matplotlib.pyplot as plt\n\nxp = np.linspace(0, 7, 100)\nplt.scatter(x, y)\nplt.plot(xp, p4(xp), c='r')\nplt.show()", "Looks pretty good! Let's measure the r-squared error:", "from sklearn.metrics import r2_score\n\nr2 = r2_score(y, p4(x))\n\nprint(r2)\n", "Activity\nTry different polynomial orders. Can you get a better fit with higher orders? Do you start to see overfitting, even though the r-squared score looks good for this particular data set?" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
plumbwj01/Barcoding-Fraxinus
scanfasta.ipynb
apache-2.0
[ "from DNASkittleUtils.Contigs import read_contigs, Contig, write_contigs_to_file \n\ncontigs = read_contigs(\"D:\\Genomes\\Ash BATG-0.5-CLCbioSSPACE\\BATG-0.5-CLCbioSSPACE.fa\")", "Using the two contig names you sent me it's simplest to do this:", "desired_contigs = ['Contig' + str(x) for x in [1131, 3182, 39106, 110, 5958]]\ndesired_contigs", "If you have a genuinely big file then I would do the following:", "grab = [c for c in contigs if c.name in desired_contigs]\nlen(grab)", "Ya! There's two contigs.", "import os\nprint(os.getcwd())\nwrite_contigs_to_file('data2/sequences_desired.fa', grab)\n\n[c.name for c in grab[:100]]\n\nimport os\nos.path.realpath('')" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
fonnesbeck/PyMC3_Oslo
notebooks/6. Model Checking.ipynb
cc0-1.0
[ "Model Checking\nAfter running an MCMC simulation, sample returns a MutliTrace object containing the samples for all the stochastic and deterministic random variables. The final step in Bayesian computation is model checking, in order to ensure that inferences derived from your sample are valid. There are two components to model checking:\n\nConvergence diagnostics\nGoodness of fit\n\nConvergence diagnostics are intended to detect lack of convergence in the Markov chain Monte Carlo sample; it is used to ensure that you have not halted your sampling too early. However, a converged model is not guaranteed to be a good model. The second component of model checking, goodness of fit, is used to check the internal validity of the model, by comparing predictions from the model to the data used to fit the model. \nConvergence Diagnostics\nValid inferences from sequences of MCMC samples are based on the\nassumption that the samples are derived from the true posterior\ndistribution of interest. Theory guarantees this condition as the number\nof iterations approaches infinity. It is important, therefore, to\ndetermine the minimum number of samples required to ensure a reasonable\napproximation to the target posterior density. Unfortunately, no\nuniversal threshold exists across all problems, so convergence must be\nassessed independently each time MCMC estimation is performed. The\nprocedures for verifying convergence are collectively known as\nconvergence diagnostics.\nOne approach to analyzing convergence is analytical, whereby the\nvariance of the sample at different sections of the chain are compared\nto that of the limiting distribution. These methods use distance metrics\nto analyze convergence, or place theoretical bounds on the sample\nvariance, and though they are promising, they are generally difficult to\nuse and are not prominent in the MCMC literature. More common is a\nstatistical approach to assessing convergence. With this approach,\nrather than considering the properties of the theoretical target\ndistribution, only the statistical properties of the observed chain are\nanalyzed. Reliance on the sample alone restricts such convergence\ncriteria to heuristics. As a result, convergence cannot be guaranteed.\nAlthough evidence for lack of convergence using statistical convergence\ndiagnostics will correctly imply lack of convergence in the chain, the\nabsence of such evidence will not guarantee convergence in the chain.\nNevertheless, negative results for one or more criteria may provide some\nmeasure of assurance to users that their sample will provide valid\ninferences.\nFor most simple models, convergence will occur quickly, sometimes within\na the first several hundred iterations, after which all remaining\nsamples of the chain may be used to calculate posterior quantities. For\nmore complex models, convergence requires a significantly longer burn-in\nperiod; sometimes orders of magnitude more samples are needed.\nFrequently, lack of convergence will be caused by poor mixing. \nRecall that mixing refers to the degree to which the Markov\nchain explores the support of the posterior distribution. 
Poor mixing\nmay stem from inappropriate proposals (if one is using the\nMetropolis-Hastings sampler) or from attempting to estimate models with\nhighly correlated variables.", "%matplotlib inline\nimport numpy as np\nimport seaborn as sns; sns.set_context('notebook')\n\nfrom pymc3 import exp, Normal, Binomial, sample, Model\n\n# Samples for each dose level\nn = 5 * np.ones(4, dtype=int)\n# Log-dose\ndose = np.array([-.86, -.3, -.05, .73])\ndeaths = np.array([0, 1, 3, 5])\n\ndef invlogit(x):\n return exp(x) / (1 + exp(x))\n\nwith Model() as bioassay_model:\n\n # Logit-linear model parameters\n alpha = Normal('alpha', 0, 0.01)\n beta = Normal('beta', 0, 0.01)\n\n # Calculate probabilities of death\n theta = invlogit(alpha + beta * dose)\n\n # Data likelihood\n deaths = Binomial('deaths', n=n, p=theta, observed=deaths)\n\nfrom pymc3 import Metropolis\n\nwith bioassay_model:\n step = Metropolis(scaling=0.0001)\n bioassay_trace = sample(1000, step=step)\n\nfrom pymc3 import traceplot\n\ntraceplot(bioassay_trace[500:], varnames=['alpha'])", "Informal Methods\nThe most straightforward approach for assessing convergence is based on\nsimply plotting and inspecting traces and histograms of the observed\nMCMC sample. If the trace of values for each of the stochastics exhibits\nasymptotic behavior over the last $m$ iterations, this may be\nsatisfactory evidence for convergence.", "with bioassay_model:\n bioassay_trace = sample(10000)\n \ntraceplot(bioassay_trace[9000:], varnames=['beta'])", "A similar approach involves\nplotting a histogram for every set of $k$ iterations (perhaps 50-100)\nbeyond some burn in threshold $n$; if the histograms are not visibly\ndifferent among the sample intervals, this may be considered some evidence for\nconvergence. Note that such diagnostics should be carried out for each\nstochastic estimated by the MCMC algorithm, because convergent behavior\nby one variable does not imply evidence for convergence for other\nvariables in the analysis.", "import matplotlib.pyplot as plt\n\nbeta_trace = bioassay_trace['beta']\n\nfig, axes = plt.subplots(2, 5, figsize=(14,6))\naxes = axes.ravel()\nfor i in range(10):\n axes[i].hist(beta_trace[500*i:500*(i+1)])\nplt.tight_layout()", "An extension of this approach can be taken\nwhen multiple parallel chains are run, rather than just a single, long\nchain. In this case, the final values of $c$ chains run for $n$\niterations are plotted in a histogram; just as above, this is repeated\nevery $k$ iterations thereafter, and the histograms of the endpoints are\nplotted again and compared to the previous histogram. This is repeated\nuntil consecutive histograms are indistinguishable.\nAnother ad hoc method for detecting lack of convergence is to examine\nthe traces of several MCMC chains initialized with different starting\nvalues. Overlaying these traces on the same set of axes should (if\nconvergence has occurred) show each chain tending toward the same\nequilibrium value, with approximately the same variance. Recall that the\ntendency for some Markov chains to converge to the true (unknown) value\nfrom diverse initial values is called ergodicity. This property is\nguaranteed by the reversible chains constructed using MCMC, and should\nbe observable using this technique. 
Again, however, this approach is\nonly a heuristic method, and cannot always detect lack of convergence,\neven though chains may appear ergodic.", "with bioassay_model:\n \n bioassay_trace = sample(1000, njobs=2, start=[{'alpha':0.5}, {'alpha':5}])\n\nbioassay_trace.get_values('alpha', chains=0)[0]\n\nplt.plot(bioassay_trace.get_values('alpha', chains=0)[:200], 'r--')\nplt.plot(bioassay_trace.get_values('alpha', chains=1)[:200], 'k--')", "A principal reason that evidence from informal techniques cannot\nguarantee convergence is a phenomenon called metastability. Chains may\nappear to have converged to the true equilibrium value, displaying\nexcellent qualities by any of the methods described above. However,\nafter some period of stability around this value, the chain may suddenly\nmove to another region of the parameter space. This period\nof metastability can sometimes be very long, and therefore escape\ndetection by these convergence diagnostics. Unfortunately, there is no\nstatistical technique available for detecting metastability.\nFormal Methods\nAlong with the ad hoc techniques described above, a number of more\nformal methods exist which are prevalent in the literature. These are\nconsidered more formal because they are based on existing statistical\nmethods, such as time series analysis.\nPyMC currently includes three formal convergence diagnostic methods. The\nfirst, proposed by Geweke (1992), is a time-series approach that\ncompares the mean and variance of segments from the beginning and end of\na single chain.\n$$z = \\frac{\\bar{\\theta}_a - \\bar{\\theta}_b}{\\sqrt{S_a(0) + S_b(0)}}$$\nwhere $a$ is the early interval and $b$ the late interval, and $S_i(0)$ is the spectral density estimate at zero frequency for chain segment $i$. If the\nz-scores (theoretically distributed as standard normal variates) of\nthese two segments are similar, it can provide evidence for convergence.\nPyMC calculates z-scores of the difference between various initial\nsegments along the chain, and the last 50% of the remaining chain. If\nthe chain has converged, the majority of points should fall within 2\nstandard deviations of zero.\nIn PyMC, diagnostic z-scores can be obtained by calling the geweke function. It\naccepts either (1) a single trace, (2) a Node or Stochastic object, or\n(4) an entire Model object:", "from pymc3 import geweke\n\nwith bioassay_model:\n tr = sample(2000)\n \nz = geweke(tr, intervals=15)\n\nplt.scatter(*z['alpha'].T)\nplt.hlines([-1,1], 0, 1000, linestyles='dotted')\nplt.xlim(0, 1000)", "The arguments expected are the following:\n\nx : The trace of a variable.\nfirst : The fraction of series at the beginning of the trace.\nlast : The fraction of series at the end to be compared with the section at the beginning.\nintervals : The number of segments.\n\nPlotting the output displays the scores in series, making it is easy to\nsee departures from the standard normal assumption.\nA second convergence diagnostic provided by PyMC is the Gelman-Rubin\nstatistic Gelman and Rubin (1992). This diagnostic uses multiple chains to\ncheck for lack of convergence, and is based on the notion that if\nmultiple chains have converged, by definition they should appear very\nsimilar to one another; if not, one or more of the chains has failed to\nconverge.\nThe Gelman-Rubin diagnostic uses an analysis of variance approach to\nassessing convergence. 
That is, it calculates both the between-chain\nvaraince (B) and within-chain varaince (W), and assesses whether they\nare different enough to worry about convergence. Assuming $m$ chains,\neach of length $n$, quantities are calculated by:\n$$\\begin{align}B &= \\frac{n}{m-1} \\sum_{j=1}^m (\\bar{\\theta}{.j} - \\bar{\\theta}{..})^2 \\\nW &= \\frac{1}{m} \\sum_{j=1}^m \\left[ \\frac{1}{n-1} \\sum_{i=1}^n (\\theta_{ij} - \\bar{\\theta}_{.j})^2 \\right]\n\\end{align}$$\nfor each scalar estimand $\\theta$. Using these values, an estimate of\nthe marginal posterior variance of $\\theta$ can be calculated:\n$$\\hat{\\text{Var}}(\\theta | y) = \\frac{n-1}{n} W + \\frac{1}{n} B$$\nAssuming $\\theta$ was initialized to arbitrary starting points in each\nchain, this quantity will overestimate the true marginal posterior\nvariance. At the same time, $W$ will tend to underestimate the\nwithin-chain variance early in the sampling run. However, in the limit\nas $n \\rightarrow \n\\infty$, both quantities will converge to the true variance of $\\theta$.\nIn light of this, the Gelman-Rubin statistic monitors convergence using\nthe ratio:\n$$\\hat{R} = \\sqrt{\\frac{\\hat{\\text{Var}}(\\theta | y)}{W}}$$\nThis is called the potential scale reduction, since it is an estimate of\nthe potential reduction in the scale of $\\theta$ as the number of\nsimulations tends to infinity. In practice, we look for values of\n$\\hat{R}$ close to one (say, less than 1.1) to be confident that a\nparticular estimand has converged. In PyMC, the function\ngelman_rubin will calculate $\\hat{R}$ for each stochastic node in\nthe passed model:", "from pymc3 import gelman_rubin\n\ngelman_rubin(bioassay_trace)", "For the best results, each chain should be initialized to highly\ndispersed starting values for each stochastic node.\nBy default, when calling the forestplot function using nodes with\nmultiple chains, the $\\hat{R}$ values will be plotted alongside the\nposterior intervals.", "from pymc3 import forestplot\n\nforestplot(bioassay_trace)", "Autocorrelation", "from pymc3 import autocorrplot\n\nautocorrplot(tr);\n\nbioassay_trace['alpha'].shape\n\nfrom pymc3 import effective_n\n\neffective_n(bioassay_trace)", "Goodness of Fit\nChecking for model convergence is only the first step in the evaluation\nof MCMC model outputs. It is possible for an entirely unsuitable model\nto converge, so additional steps are needed to ensure that the estimated\nmodel adequately fits the data. One intuitive way of evaluating model\nfit is to compare model predictions with the observations used to fit\nthe model. In other words, the fitted model can be used to simulate\ndata, and the distribution of the simulated data should resemble the\ndistribution of the actual data.\nFortunately, simulating data from the model is a natural component of\nthe Bayesian modelling framework. Recall, from the discussion on\nimputation of missing data, the posterior predictive distribution:\n$$p(\\tilde{y}|y) = \\int p(\\tilde{y}|\\theta) f(\\theta|y) d\\theta$$\nHere, $\\tilde{y}$ represents some hypothetical new data that would be\nexpected, taking into account the posterior uncertainty in the model\nparameters. Sampling from the posterior predictive distribution is easy\nin PyMC. The code looks identical to the corresponding data stochastic,\nwith two modifications: (1) the node should be specified as\ndeterministic and (2) the statistical likelihoods should be replaced by\nrandom number generators. 
Consider the gelman_bioassay example, \nwhere deaths are modeled as a binomial random variable for which\nthe probability of death is a logit-linear function of the dose of a\nparticular drug.", "from pymc3 import Normal, Binomial, Deterministic, invlogit\n\n# Samples for each dose level\nn = 5 * np.ones(4, dtype=int)\n# Log-dose\ndose = np.array([-.86, -.3, -.05, .73])\n\nwith Model() as model:\n\n # Logit-linear model parameters\n alpha = Normal('alpha', 0, 0.01)\n beta = Normal('beta', 0, 0.01)\n\n # Calculate probabilities of death\n theta = Deterministic('theta', invlogit(alpha + beta * dose))\n\n # Data likelihood\n deaths = Binomial('deaths', n=n, p=theta, observed=[0, 1, 3, 5])", "The posterior predictive distribution of deaths uses the same functional\nform as the data likelihood, in this case a binomial stochastic. Here is\nthe corresponding sample from the posterior predictive distribution:", "with model:\n \n deaths_sim = Binomial('deaths_sim', n=n, p=theta, shape=4)", "Notice that the observed stochastic Binomial has been replaced with a stochastic node that is identical in every respect to deaths, except that its values are not fixed to be the observed data -- they are left to vary according to the values of the fitted parameters.\nThe degree to which simulated data correspond to observations can be evaluated in at least two ways. First, these quantities can simply be compared visually. This allows for a qualitative comparison of model-based replicates and observations. If there is poor fit, the true value of the data may appear in the tails of the histogram of replicated data, while a good fit will tend to show the true data in high-probability regions of the posterior predictive distribution. The Matplot package in PyMC provides an easy way of producing such plots, via the gof_plot function.", "with model:\n \n gof_trace = sample(2000)\n\nfrom pymc3 import forestplot\n\nforestplot(gof_trace, varnames=['deaths_sim'])", "Exercise: Meta-analysis of beta blocker effectiveness\nCarlin (1992) considers a Bayesian approach to meta-analysis, and includes the following examples of 22 trials of beta-blockers to prevent mortality after myocardial infarction.\nIn a random effects meta-analysis we assume the true effect (on a log-odds scale) $d_i$ in a trial $i$\nis drawn from some population distribution. Let $r^C_i$ denote number of events in the control group in trial $i$,\nand $r^T_i$ denote events under active treatment in trial $i$. Our model is:\n$$\\begin{aligned}\nr^C_i &\\sim \\text{Binomial}\\left(p^C_i, n^C_i\\right) \\\nr^T_i &\\sim \\text{Binomial}\\left(p^T_i, n^T_i\\right) \\\n\\text{logit}\\left(p^C_i\\right) &= \\mu_i \\\n\\text{logit}\\left(p^T_i\\right) &= \\mu_i + \\delta_i \\\n\\delta_i &\\sim \\text{Normal}(d, t) \\\n\\mu_i &\\sim \\text{Normal}(m, s)\n\\end{aligned}$$\nWe want to make inferences about the population effect $d$, and the predictive distribution for the effect $\\delta_{\\text{new}}$ in a new trial. 
Build a model to estimate these quantities in PyMC, and (1) use convergence diagnostics to check for convergence and (2) use posterior predictive checks to assess goodness-of-fit.\nHere are the data:", "r_t_obs = [3, 7, 5, 102, 28, 4, 98, 60, 25, 138, 64, 45, 9, 57, 25, 33, 28, 8, 6, 32, 27, 22]\nn_t_obs = [38, 114, 69, 1533, 355, 59, 945, 632, 278,1916, 873, 263, 291, 858, 154, 207, 251, 151, 174, 209, 391, 680]\nr_c_obs = [3, 14, 11, 127, 27, 6, 152, 48, 37, 188, 52, 47, 16, 45, 31, 38, 12, 6, 3, 40, 43, 39]\nn_c_obs = [39, 116, 93, 1520, 365, 52, 939, 471, 282, 1921, 583, 266, 293, 883, 147, 213, 122, 154, 134, 218, 364, 674]\nN = len(n_c_obs)\n\n# Write your answer here", "References\nGelman, A., & Rubin, D. B. (1992). Inference from iterative simulation using multiple sequences. Statistical Science. A Review Journal of the Institute of Mathematical Statistics, 457–472.\nGeweke, J., Berger, J. O., & Dawid, A. P. (1992). Evaluating the accuracy of sampling-based approaches to the calculation of posterior moments. In Bayesian Statistics 4.\nBrooks, S. P., Catchpole, E. A., & Morgan, B. J. T. (2000). Bayesian Animal Survival Estimation. Statistical Science. A Review Journal of the Institute of Mathematical Statistics, 15(4), 357–376. doi:10.1214/ss/1177010123\nGelman, A., Meng, X., & Stern, H. (1996). Posterior predicitive assessment of model fitness via realized discrepencies with discussion. Statistica Sinica, 6, 733–807.\nRaftery, A., & Lewis, S. (1992). One long run with diagnostics: Implementation strategies for Markov chain Monte Carlo. Statistical Science. A Review Journal of the Institute of Mathematical Statistics, 7, 493–497.\nCrossValidated: How to use scikit-learn's cross validation functions on multi-label classifiers" ]
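One possible starting point for that exercise is sketched below, using the same PyMC3 constructs that appear earlier in the notebook. It assumes the data cell above has been run; the priors on d, t, m and s are illustrative choices rather than the ones Carlin used.

```python
from pymc3 import Model, Normal, Uniform, Binomial, invlogit, sample
from pymc3 import gelman_rubin, forestplot

with Model() as meta_model:

    # population-level parameters (illustrative weakly-informative priors)
    d = Normal('d', mu=0, sd=10)
    t = Uniform('t', lower=0, upper=10)
    m = Normal('m', mu=0, sd=10)
    s = Uniform('s', lower=0, upper=10)

    # trial-level effects
    mu = Normal('mu', mu=m, sd=s, shape=N)
    delta = Normal('delta', mu=d, sd=t, shape=N)

    # predictive effect for a new trial
    delta_new = Normal('delta_new', mu=d, sd=t)

    # data likelihoods
    r_c = Binomial('r_c', n=n_c_obs, p=invlogit(mu), observed=r_c_obs)
    r_t = Binomial('r_t', n=n_t_obs, p=invlogit(mu + delta), observed=r_t_obs)

    meta_trace = sample(2000, njobs=2)

# convergence diagnostics and a posterior summary plot
print(gelman_rubin(meta_trace))
forestplot(meta_trace, varnames=['d', 'delta_new'])
```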
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
letsgoexploring/teaching
winter2017/econ129/python/Econ129_Class_09.ipynb
mit
[ "import matplotlib.pyplot as plt\nimport numpy as np\nimport pandas as pd\n%matplotlib inline", "Class 9: The Solow growth model\nThe Solow growth model is at the core of modern theories of growth and business cycles. The Solow model is a model of exogenous growth: long-run growth arises in the model as a consequence of exogenous growth in the labor supply and total factor productivity. The Solow model, like many other macroeconomic models, is a time series model.\nThe Solow model without exogenous growth\nFor the moment, let's disregard population and total factor productivity growth and assume that equilibrium in a closed economy is described by the following four equations:\n\\begin{align}\nY_t & = A K_t^{\\alpha} \\tag{1}\\\nC_t & = (1-s)Y_t \\tag{2}\\\nY_t & = C_t + I_t \\tag{3}\\\nK_{t+1} & = I_t + ( 1- \\delta)K_t \\tag{4}\\\n\\end{align}\nEquation (1) is the production function. Equation (2) is the consumption function where $s$ denotes the exogenously given saving rate. Equation (3) is the aggregate market clearing condition. Finally, Equation (4) is the capital evolution equation specifying that capital in yeat $t+1$ is the sum of newly created capital $I_t$ and the capital stock from year $t$ that has not depreciated $(1-\\delta)K_t$.\nCombine Equations (1) through (4) to eliminate $C_t$, $I_t$, and $Y_t$ and obtain a single-variable recurrence relation for $K_{t+1}$:\n\\begin{align}\nK_{t+1} & = sAK_t^{\\alpha} + ( 1- \\delta)K_t \\tag{5}\n\\end{align}\nGiven an initial value for capital $K_0 >0$, iterate on Equation (5) to compute the value of the capital stock at some future date $T$. Furthermore, the values of consumption, output, and investment at date $T$ can also be computed using Equations (1) through (3).\nSimulation\nSimulate the Solow growth model for $t=0\\ldots 100$. 
For the simulation, assume the following values of the parameters:\n\\begin{align}\nA & = 10\\\n\\alpha & = 0.35\\\ns & = 0.15\\\n\\delta & = 0.1\n\\end{align}\nFurthermore, suppose that the initial value of capital is:\n\\begin{align}\nK_0 & = 20\n\\end{align}", "# Initialize parameters for the simulation (A, s, T, delta, alpha, K0)\n\n\n# Initialize a variable called capital as a (T+1)x1 array of zeros and set first value to K0\n\n\n# Compute all capital values by iterating over t from 0 through T\n \n\n# Print the value of capital at dates 0 and T\n\n\n# Store the simulated capital data in a pandas DataFrame called data\n\n\n# Print the first five rows of the DataFrame\n\n\n# Create columns in the DataFrame to store computed values of the other endogenous variables\n\n\n# Print the first row of the DataFrame\n\n\n# Print the last row of the DataFrame\n\n\n# Create a 2x2 grid of plots of capital, output, consumption, and investment\nfig = plt.figure(figsize=(12,8))\nax = fig.add_subplot(2,2,1)\nax.plot(data['capital'],lw=3)\nax.grid()\nax.set_title('Capital')\n", "The Solow model with exogenous population growth\nNow, let's suppose that production is a function of the supply of labor $L_t$:\n\\begin{align}\nY_t & = AK_t^{\\alpha} L_t^{1-\\alpha}\\tag{6}\n\\end{align}\nThe supply of labor grows at an exogenously determined rate $n$ and so it's value is determined recursively by a first-order difference equation:\n\\begin{align}\nL_{t+1} & = (1+n) L_t \\tag{7}\n\\end{align}\nThe rest of the economy is characterized by the same equations as before:\n\\begin{align}\nC_t & = (1-s)Y_t \\tag{8}\\\nY_t & = C_t + I_t \\tag{9}\\\nK_{t+1} & = I_t + ( 1- \\delta)K_t \\tag{10}\\\n\\end{align}\nCombine Equations (6), (8), (9), and (10) to eliminate $C_t$, $I_t$, and $Y_t$ and obtain a recurrence relation specifying $K_{t+1}$ as a funtion of $K_t$ and $L_t$:\n\\begin{align}\nK_{t+1} & = sAK_t^{\\alpha}L_t^{1-\\alpha} + ( 1- \\delta)K_t \\tag{11}\n\\end{align}\nGiven an initial values for capital and labor, Equations (7) and (11) can be iterated on to compute the values of the capital stock and labor supply at some future date $T$. Furthermore, the values of consumption, output, and investment at date $T$ can also be computed using Equations (6), (8), (9), and (10).\nSimulation\nSimulate the Solow growth model with exogenous labor growth for $t=0\\ldots 100$. 
For the simulation, assume the following values of the parameters:\n\\begin{align}\nA & = 10\\\n\\alpha & = 0.35\\\ns & = 0.15\\\n\\delta & = 0.1\\\nn & = 0.01\n\\end{align}\nFurthermore, suppose that the initial values of capital and labor are:\n\\begin{align}\nK_0 & = 20\\\nL_0 & = 1\n\\end{align}", "# Initialize parameters for the simulation (A, s, T, delta, alpha, n, K0, L0)\n\n\n\n# Initialize a variable called labor as a (T+1)x1 array of zeros and set first value to L0\n\n\n# Compute all labor values by iterating over t from 0 through T\n \n \n# Plot the simulated labor series\n\n\n# Initialize a variable called capital as a (T+1)x1 array of zeros and set first value to K0\n\n\n# Compute all capital values by iterating over t from 0 through T\n\n \n\n# Plot the simulated capital series\n\n\n# Store the simulated capital data in a pandas DataFrame called data_labor\n\n\n# Print the first five rows of the data_labor\n\n\n# Create columns in the DataFrame to store computed values of the other endogenous variables\n\n# Print the first five rows of data_labor\n\n\n# Create columns in the DataFrame to store capital per worker, output per worker, consumption per worker, and investment per worker\n\n\n# Print the first five rows of data_labor\n\n\n# Create a 2x2 grid of plots of capital, output, consumption, and investment\n\n\n# Create a 2x2 grid of plots of capital per worker, outputper worker, consumption per worker, and investment per worker\n", "An alternative approach\nSuppose that we wanted to simulate the Solow model with different parameter values so that we could compare the simulations. Since we'd be doing the same basic steps multiple times using different numbers, it would make sense to define a function so that we could avoid repetition.\nThe code below defines a function called solow_example() that simulates the Solow model with exogenous labor growth. solow_example() takes as arguments the parameters of the Solow model $A$, $\\alpha$, $\\delta$, $s$, and $n$; the initial values $K_0$ and $L_0$; and the number of simulation periods $T$. 
solow_example() returns a Pandas DataFrame with computed values for aggregate and per worker quantities.", "def solow_example(A,alpha,delta,s,n,K0,L0,T):\n '''Returns DataFrame with simulated values for a Solow model with labor growth and constant TFP'''\n \n # Initialize a variable called capital as a (T+1)x1 array of zeros and set first value to k0\n capital = np.zeros(T+1)\n capital[0] = K0\n \n # Initialize a variable called labor as a (T+1)x1 array of zeros and set first value to l0\n labor = np.zeros(T+1)\n labor[0] = L0\n\n\n # Compute all capital and labor values by iterating over t from 0 through T\n for t in np.arange(T):\n labor[t+1] = (1+n)*labor[t]\n capital[t+1] = s*A*capital[t]**alpha*labor[t]**(1-alpha) + (1-delta)*capital[t]\n \n # Store the simulated capital df in a pandas DataFrame called data\n df = pd.DataFrame({'capital':capital,'labor':labor})\n \n # Create columns in the DataFrame to store computed values of the other endogenous variables\n df['output'] = df['capital']**alpha*df['labor']**(1-alpha)\n df['consumption'] = (1-s)*df['output']\n df['investment'] = df['output'] - df['consumption']\n \n # Create columns in the DataFrame to store capital per worker, output per worker, consumption per worker, and investment per worker\n df['capital_pw'] = df['capital']/df['labor']\n df['output_pw'] = df['output']/df['labor']\n df['consumption_pw'] = df['consumption']/df['labor']\n df['investment_pw'] = df['investment']/df['labor']\n \n return df", "With solow_example() defined, we can redo the previous exercise quickly:", "# Create the DataFrame with simulated values\ndf = solow_example(A=10,alpha=0.35,delta=0.1,s=0.15,n=0.01,K0=20,L0=1,T=100)\n\n# Create a 2x2 grid of plots of the capital per worker, outputper worker, consumption per worker, and investment per worker\nfig = plt.figure(figsize=(12,8))\nax = fig.add_subplot(2,2,1)\nax.plot(df['capital_pw'],lw=3)\nax.grid()\nax.set_title('Capital per worker')\n\nax = fig.add_subplot(2,2,2)\nax.plot(df['output_pw'],lw=3)\nax.grid()\nax.set_title('Output per worker')\n\nax = fig.add_subplot(2,2,3)\nax.plot(df['consumption_pw'],lw=3)\nax.grid()\nax.set_title('Consumption per worker')\n\nax = fig.add_subplot(2,2,4)\nax.plot(df['investment_pw'],lw=3)\nax.grid()\nax.set_title('Investment per worker')", "solow_example() can be used to perform multiple simulations. For example, suppose we want to see the effect of having two different initial values of capital: $k_0 = 20$ and $k_0'=10$.", "df1 = solow_example(A=10,alpha=0.35,delta=0.1,s=0.15,n=0.01,K0=20,L0=1,T=100)\ndf2 = solow_example(A=10,alpha=0.35,delta=0.1,s=0.15,n=0.01,K0=10,L0=1,T=100)\n\n# Create a 2x2 grid of plots of the capital per worker, outputper worker, consumption per worker, and investment per worker\nfig = plt.figure(figsize=(12,8))\nax = fig.add_subplot(2,2,1)\nax.plot(df1['capital_pw'],lw=3)\nax.plot(df2['capital_pw'],lw=3)\nax.grid()\nax.set_title('Capital per worker')\n\nax = fig.add_subplot(2,2,2)\nax.plot(df1['output_pw'],lw=3)\nax.plot(df2['output_pw'],lw=3)\nax.grid()\nax.set_title('Output per worker')\n\nax = fig.add_subplot(2,2,3)\nax.plot(df1['consumption_pw'],lw=3)\nax.plot(df2['consumption_pw'],lw=3)\nax.grid()\nax.set_title('Consumption per worker')\n\nax = fig.add_subplot(2,2,4)\nax.plot(df1['investment_pw'],lw=3,label='$k_0=20$')\nax.plot(df2['investment_pw'],lw=3,label='$k_0=10$')\nax.grid()\nax.set_title('Investment per worker')\nax.legend(loc='lower right')" ]
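As one more illustration of reuse, the solow_example() function defined above can also be used to compare saving rates; the two values of s below are arbitrary choices.

```python
# Reuse solow_example() from above with two different saving rates
df_low_s = solow_example(A=10, alpha=0.35, delta=0.1, s=0.10, n=0.01, K0=20, L0=1, T=100)
df_high_s = solow_example(A=10, alpha=0.35, delta=0.1, s=0.20, n=0.01, K0=20, L0=1, T=100)

fig = plt.figure(figsize=(8, 5))
ax = fig.add_subplot(1, 1, 1)
ax.plot(df_low_s['capital_pw'], lw=3, label='$s=0.10$')
ax.plot(df_high_s['capital_pw'], lw=3, label='$s=0.20$')
ax.grid()
ax.set_xlabel('t')
ax.set_title('Capital per worker under two saving rates')
ax.legend(loc='lower right')
```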
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
jorisvandenbossche/2015-EuroScipy-pandas-tutorial
solved - 03b - Some more advanced indexing.ipynb
bsd-2-clause
[ "Advanced indexing", "%matplotlib inline\n\nimport pandas as pd\nimport numpy as np\nimport matplotlib.pyplot as plt\ntry:\n import seaborn\nexcept ImportError:\n pass\n\npd.options.display.max_rows = 10", "This dataset is borrowed from the PyCon tutorial of Brandon Rhodes (so all credit to him!). You can download these data from here: titles.csv and cast.csv and put them in the /data folder.", "cast = pd.read_csv('data/cast.csv')\ncast.head()\n\ntitles = pd.read_csv('data/titles.csv')\ntitles.head()", "Setting columns as the index\nWhy is it useful to have an index?\n\nGiving meaningful labels to your data -> easier to remember which data are where\nUnleash some powerful methods, eg with a DatetimeIndex for time series\nEasier and faster selection of data\n\nIt is this last one we are going to explore here!\nSetting the title column as the index:", "c = cast.set_index('title')\n\nc.head()", "Instead of doing:", "%%time\ncast[cast['title'] == 'Hamlet']", "we can now do:", "%%time\nc.loc['Hamlet']", "But you can also have multiple columns as the index, leading to a multi-index or hierarchical index:", "c = cast.set_index(['title', 'year'])\n\nc.head()\n\n%%time\nc.loc[('Hamlet', 2000),:]\n\nc2 = c.sort_index()\n\n%%time\nc2.loc[('Hamlet', 2000),:]" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
scjrobertson/xRange
tracking/tracking.ipynb
gpl-3.0
[ "Two Object Tracking\nSummary of notebook\n\n<b> Kalman filter: PGM implementation </b>\nNearly identical to standard implementation.\nThis section is just a basis for comparison.\n\n\n<b> Simulation of the two object tracking </b>\nIt tracks the objects, but the likelihoods seem incorrect.", "%matplotlib inline\n%load_ext autoreload\n%autoreload 2\n\nimport numpy as np\nfrom matplotlib import pylab as plt\nfrom mpl_toolkits import mplot3d\nfrom canonical_gaussian import CanonicalGaussian as CG\nfrom gaussian_mixture import GaussianMixtureModel as GMM\nfrom calc_traj import calc_traj\nfrom range_doppler import *\nfrom util import *\n\nnp.set_printoptions(precision=2)", "Target information", "names, p, v, w = load_clubs('clubs.csv')\n\ncpi = 40e-3\nT = 12\nt_sim = np.arange(0, T, cpi)\n\nt1, p1, v1 = calc_traj(p[0, :], v[0, :], w[0, :], t_sim)\nt2, p2, v2 = calc_traj(p[-1, :], v[-1, :], w[-1, :], t_sim)\n\nsensor_locations = np.array([[-10, 28.5, 1], [-15, 30.3, 3],\n [200, 30, 1.5], [220, -31, 2],\n [-30, 0, 0.5], [150, 10, 0.6]])\n\nrd_1 = range_doppler(sensor_locations, p1, v1)\npm_1 = multilateration(sensor_locations, rd_1[:, :, 1])\nvm_1 = determine_velocity(t1, pm_1, rd_1[:, :, 0])\n\nrd_2 = range_doppler(sensor_locations, p2, v2)\npm_2 = multilateration(sensor_locations, rd_2[:, :, 1])\nvm_2 = determine_velocity(t2, pm_2, rd_2[:, :, 0]) ", "The Kalman Filter Model", "N = 6\nif pm_1.shape < pm_2.shape: \n M, _ = pm_1.shape\n pm_2 = pm_2[:M]\n vm_2 = pm_2[:M]\nelse:\n M, _ = pm_2.shape\n pm_1 = pm_1[:M]\n vm_1 = vm_2[:M]\n \nprint(M)\ndt = cpi\ng = 9.81\n\nsigma_r = 2.5\nsigma_q = 0.5\nprior_var = 1", "Motion and measurement models", "A = np.identity(N)\nA[0, 3] = A[1, 4] = A[2, 5] = dt \n\nB = np.zeros((N, N))\nB[2, 2] = B[5, 5] = 1\n\nR = np.identity(N)*sigma_r\n\nC = np.identity(N)\nQ = np.identity(N)*sigma_q\n\nu = np.zeros((6, 1))\nu[2] = -0.5*g*(dt**2)\nu[5] = -g*dt", "Priors", "#Object 1\nmu0_1 = np.zeros((N, 1))\nmu0_1[:3, :] = p1[0, :].reshape(3, 1)\nmu0_1[3:, :] = v[0, :].reshape(3, 1)\n\nprec0_1 = np.linalg.inv(prior_var*np.identity(N))\nh0_1 = (prec0_1)@(mu0_1)\ng0_1 = -0.5*(mu0_1.T)@(prec0_1)@(mu0_1) -3*np.log(2*np.pi)\n\n#Object 2\nmu0_2 = np.zeros((N, 1))\nmu0_2[:3, :] = p2[0, :].reshape(3, 1)\nmu0_2[3:, :] = v2[0, :].reshape(3, 1)\n\nprec0_2 = np.linalg.inv(prior_var*np.identity(N))\nh0_2 = (prec0_2)@(mu0_2)\ng0_2 = -0.5*(mu0_2.T)@(prec0_2)@(mu0_2) -3*np.log(2*np.pi)\n\nprint(h0_1)", "Linear Kalman Filtering\nCreating the model", "z_t = np.empty((M, N))\n\nz_t[:, :3] = pm_1\nz_t[:, 3:] = vm_1\n\nR_in = np.linalg.inv(R)\nP_pred = np.bmat([[R_in, -(R_in)@(A)], [-(A.T)@(R_in), (A.T)@(R_in)@(A)]])\nM_pred = np.zeros((2*N, 1))\nM_pred[:N, :] = (B)@(u)\n\nh_pred = (P_pred)@(M_pred)\ng_pred = -0.5*(M_pred.T)@(P_pred)@(M_pred).flatten() -0.5*np.log( np.linalg.det(2*np.pi*R))\n\nQ_in = np.linalg.inv(Q)\nP_meas = np.bmat([[(C.T)@(Q_in)@(C), -(C.T)@(Q_in)], [-(Q_in)@(C), Q_in]])\n\nh_meas = np.zeros((2*N, 1))\ng_meas = -0.5*np.log( np.linalg.det(2*np.pi*Q))\n\nL, _ = z_t.shape \n\nX = np.arange(0, L)\nZ = np.arange(L-1, 2*L-1)\n\nC_X = [CG([X[0]], [N], h0_1, prec0_1, g0_1)]\nC_Z = [CG([X[0]], [N], h0_1, prec0_1, g0_1)]\n\nfor i in np.arange(1, L):\n C_X.append(CG([X[i], X[i-1]], [N, N], h_pred, P_pred, g_pred))\n C_Z.append(CG([X[i], Z[i]], [N, N], h_meas, P_meas, g_meas))", "The Kalman Filter algorithm: Gaussian belief propagation", "message_out = [C_X[0]]\nprediction = [C_X[0]]\n\nmean = np.zeros((N, L))\n\nfor i in np.arange(1, L):\n #Kalman Filter Algorithm\n 
C_Z[i].introduce_evidence([Z[i]], z_t[i, :])\n marg = (message_out[i-1]*C_X[i]).marginalize([X[i-1]])\n message_out.append(marg*C_Z[i]) \n \n mean[:, i] = (np.linalg.inv(message_out[i]._prec)@(message_out[i]._info)).reshape((N, ))\n \n #For plotting only\n prediction.append(marg)\n\np_e = mean[:3, :]\n\nfig = plt.figure(figsize=(25, 25))\nax = plt.axes(projection='3d')\n\nax.plot(p1[:, 0], p1[:, 1], p1[:, 2])\nax.plot(p_e[0, :], p_e[1, :], p_e[2, :], 'or')\nax.set_xlabel('x (m)', fontsize = '20')\nax.set_ylabel('y (m)', fontsize = '20')\nax.set_zlabel('z (m)', fontsize = '20')\nax.set_title('Kalman Filtering', fontsize = '20')\nax.set_ylim([-1, 1])\nax.legend(['Actual Trajectory', 'Estimated trajectory'])\nplt.show()\n\nD = 100\n\nt = np.linspace(0, 2*np.pi, D)\nxz = np.array([[np.cos(t)], [np.sin(t)]]).reshape((2, D))\n\ngaussians = message_out + prediction + C_Z\nellipses = []\n\nfor g in gaussians: \n g._vars = [1, 2, 3, 4]\n g._dims = [1, 1, 1, 3]\n \n c = g.marginalize([2, 4])\n \n cov = np.linalg.inv(c._prec)\n mu = (cov)@(c._info)\n \n U, S, _ = np.linalg.svd(cov)\n L = np.diag(np.sqrt(S))\n \n ellipses.append(np.dot((U)@(L), xz) + mu)\n\nfor i in np.arange(0, M):\n plt.figure(figsize= (15, 15))\n \n message_out = ellipses[i]\n prediction = ellipses[i+M]\n measurement = ellipses[i+2*M]\n \n plt.plot(p1[:, 0], p1[:, 2], 'k--', label='Trajectory')\n plt.plot(message_out[0, :], message_out[1, :], 'r', label='After measurement update')\n plt.plot(prediction[0, :], prediction[1, :], 'b', label = 'Recursive prediction')\n plt.plot(measurement[0, :], measurement[1, :], 'g', label='Measurement')\n \n plt.xlim([-3.5, 250])\n plt.ylim([-3.5, 35])\n plt.grid(True)\n \n plt.xlabel('x (m)')\n plt.ylabel('z (m)')\n plt.legend(loc='upper left')\n plt.title('x-z position for t = %d'%i)\n \n plt.savefig('images/kalman/%d.png'%i, format = 'png')\n plt.close()", "<img src=\"images/kalman/kalman_hl.gif\">\nTwo object tracking", "fig = plt.figure(figsize=(25, 25))\nax = plt.axes(projection='3d')\n\nax.plot(p1[:, 0], p1[:, 1], p1[:, 2])\nax.plot(p2[:, 0], p2[:, 1], p2[:, 2], 'or')\nax.set_xlabel('x (m)', fontsize = '20')\nax.set_ylabel('y (m)', fontsize = '20')\nax.set_zlabel('z (m)', fontsize = '20')\nax.set_title('', fontsize = '20')\nax.set_ylim([-20, 20])\nax.legend(['Target 1', 'Target 2'])\nplt.show()\n\nL = 10\n\nX_1 = np.arange(0, L).tolist()\nX_2 = np.arange(L, 2*L).tolist()\nZ_1 = np.arange(2*L, 3*L).tolist()\nZ_2 = np.arange(3*L, 4*L).tolist()\n\nz_1 = np.empty((M, N))\nz_1[:, :3] = pm_1\nz_1[:, 3:] = vm_1\n\nz_2 = np.empty((M, N))\nz_2[:, :3] = pm_2\nz_2[:, 3:] = vm_2\n\nC_X = [CG([X_1[0]], [N], h0_1, prec0_1, g0_1)*CG([X_2[0]], [N], h0_2, prec0_2, g0_2)]\n\nfor i in np.arange(1, L):\n C_X.append(CG([X_1[i], X_1[i-1]], [N, N], h_pred, P_pred, g_pred)\n *CG([X_2[i], X_2[i-1]], [N, N], h_pred, P_pred, g_pred))\n\nC_Z = [None]\n\nZ_11 = CG([X_1[1], Z_1[1]], [N, N], h_meas, P_meas, g_meas)\nZ_11.introduce_evidence([Z_1[1]], z_1[1, :])\n\nZ_22 = CG([X_2[1], Z_2[1]], [N, N], h_meas, P_meas, g_meas)\nZ_22.introduce_evidence([Z_2[1]], z_2[1, :])\n\nC_Z.append(Z_11*Z_22)\n\nfor i in np.arange(2, L):\n Z_11 = CG([X_1[i], Z_1[i]], [N, N], h_meas, P_meas, g_meas)\n Z_11.introduce_evidence([Z_1[i]], z_1[i, :])\n \n Z_22 = CG([X_2[i], Z_2[i]], [N, N], h_meas, P_meas, g_meas)\n Z_22.introduce_evidence([Z_2[i]], z_2[i, :])\n \n Z_12 = CG([X_1[i], Z_2[i]], [N, N], h_meas, P_meas, g_meas)\n Z_12.introduce_evidence([Z_2[i]] ,z_2[i, :])\n \n Z_21 = CG([X_2[i], Z_1[i]], [N, N], h_meas, P_meas, g_meas)\n 
Z_21.introduce_evidence([Z_1[i]], z_1[i, :])\n \n C_Z.append(GMM([0.5*(Z_11*Z_22), 0.5*(Z_12*Z_21)]))\n\npredict = [C_X[0]]\n\nfor i in np.arange(1, L):\n marg = (C_X[i]*predict[i-1]).marginalize([X_1[i-1], X_2[i-1]])\n predict.append(C_Z[i]*marg)\n\nD = 100\n\nt = np.linspace(0, 2*np.pi, D)\nxz = np.array([[np.cos(t)], [np.sin(t)]]).reshape((2, D))\n\nellipses = []\nnorms = []\n\ni = 0\nfor p in predict:\n \n if isinstance(p, GMM):\n mix = p._mix\n else:\n mix = [p]\n \n time_step = []\n \n for m in mix:\n m._vars = [1, 2, 3, 4]\n m._dims = [1, 1, 1, 9]\n \n c = m.marginalize([2, 4])\n \n cov = np.linalg.inv(c._prec)\n mu = (cov)@(c._info)\n \n if i == 0: \n print(cov)\n i = 1\n \n U, S, _ = np.linalg.svd(cov)\n lambda_ = np.diag(np.sqrt(S))\n \n norms.append(c._norm)\n time_step.append(np.dot((U)@(lambda_), xz) + mu)\n \n ellipses.append(time_step)\n\nfor i in np.arange(0, L):\n plt.figure(figsize= (15, 15))\n \n plt.plot(p1[1:, 0], p1[1:, 2], 'or', label='Trajectory 1')\n plt.plot(p2[1:, 0], p2[1:, 2], 'og', label='Trajectory 2')\n \n for e in ellipses[i]:\n plt.plot(e[0, :], e[1, :], 'b')\n \n plt.xlim([-3.5, 25])\n plt.ylim([-3.5, 15])\n plt.grid(True)\n \n plt.legend(loc='upper left')\n plt.xlabel('x (m)')\n plt.ylabel('z (m)')\n plt.title('x-z position for t = %d'%(i))\n \n plt.savefig('images/two_objects/%d.png'%i, format = 'png')\n plt.close()", "<img src=\"images/two_objects/two_objects_pd.gif\">" ]
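The covariance-ellipse construction that appears in both plotting loops above can be illustrated on its own. In this stand-alone sketch the covariance matrix and mean are made-up numbers, not values taken from the tracker; the unit circle is mapped through U and sqrt(S) to draw a one-standard-deviation contour of a 2-D Gaussian.

```python
import numpy as np
from matplotlib import pylab as plt

# made-up 2-D covariance and mean, purely for illustration
cov = np.array([[4.0, 1.2],
                [1.2, 1.0]])
mu = np.array([[2.0], [3.0]])

# parametrise the unit circle, then stretch and rotate it with U @ sqrt(S)
t = np.linspace(0, 2 * np.pi, 100)
circle = np.vstack([np.cos(t), np.sin(t)])

U, S, _ = np.linalg.svd(cov)
ellipse = (U @ np.diag(np.sqrt(S))) @ circle + mu

plt.plot(ellipse[0, :], ellipse[1, :])
plt.axis('equal')
plt.grid(True)
plt.show()
```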
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
fivetentaylor/rpyca
RPCA_Testing-3d.ipynb
mit
[ "Robust PCA Example\nRobust PCA is an awesome relatively new method for factoring a matrix into a low rank component and a sparse component. This enables really neat applications for outlier detection, or models that are robust to outliers.", "%matplotlib inline", "Make Some Toy Data", "import matplotlib.pyplot as plt\nfrom mpl_toolkits.mplot3d import Axes3D\nimport numpy as np\n\nx = np.random.randn(100) * 5\ny = np.random.randn(100)\nz = np.random.randn(100)\npoints = np.vstack([y,x,z])", "Add Some Outliers to Make Life Difficult", "outliers = np.tile([15,-10,10], 10).reshape((-1,3))\n\npts = np.vstack([points.T, outliers]).T", "Compute SVD on both the clean data and the outliery data", "U,s,Vt = np.linalg.svd(points)\nU_n,s_n,Vt_n = np.linalg.svd(pts)", "Just 10 outliers can really screw up our line fit!", "def randrange(n, vmin, vmax):\n return (vmax-vmin)*np.random.rand(n) + vmin\n\nfig = plt.figure()\nax = fig.add_subplot(111, projection='3d')\nn = 100 \nfor c, m, zl, zh in [('r', 'o', -50, -25), ('b', '^', -30, -5)]:\n xs = randrange(n, 23, 32) \n ys = randrange(n, 0, 100)\n zs = randrange(n, zl, zh) \n ax.scatter(xs, ys, zs, c=c, marker=m)\n\nax.set_xlabel('X Label')\nax.set_ylabel('Y Label')\nax.set_zlabel('Z Label')\n\nplt.show()", "Now the robust pca version!", "import rpca\n\nreload(rpca)\n\nimport logging\nlogger = logging.getLogger(rpca.__name__)\nlogger.setLevel(logging.INFO)", "Factor the matrix into L (low rank) and S (sparse) parts", "L,S = rpca.rpca(pts, eps=0.0000001, r=1)", "Run SVD on the Low Rank Part", "U,s,Vt = np.linalg.svd(L)", "And have a look at this!", "plt.ylim([-20,20])\nplt.xlim([-20,20])\nplt.scatter(*pts)\npts0 = np.dot(U[0].reshape((2,1)), np.array([-20,20]).reshape((1,2)))\nplt.plot(*pts0)\nplt.scatter(*L, c='red')", "Have a look at the factored components...", "plt.ylim([-20,20])\nplt.xlim([-20,20])\nplt.scatter(*L)\nplt.scatter(*S, c='red')", "It really does add back to the original matrix!", "plt.ylim([-20,20])\nplt.xlim([-20,20])\nplt.scatter(*(L+S))" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
ryan-leung/PHYS4650_Python_Tutorial
notebooks/Feb2017/Astronomy/Astropy - Load fits.ipynb
bsd-3-clause
[ "from astropy.io import fits\nfrom astropy.utils.data import download_file\n\nimage_file = download_file('http://data.astropy.org/tutorials/FITS-images/HorseHead.fits', cache=True)", "We can open the fits file by fits.open() and check the info of the fits file by .info()", "hdu_list = fits.open(image_file)\nhdu_list.info()", "We get the data by the following command", "image_data = hdu_list[0].data\nprint image_data", "We get the header by the following command", "image_header = hdu_list[0].header\nprint image_header.items", "We can get individual header items by calling it as dictionary", "print image_header['CRVAL1']\nprint image_header['CRVAL2']" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
ChadFulton/statsmodels
examples/notebooks/contrasts.ipynb
bsd-3-clause
[ "Contrasts Overview", "from __future__ import print_function\nimport numpy as np\nimport statsmodels.api as sm", "This document is based heavily on this excellent resource from UCLA http://www.ats.ucla.edu/stat/r/library/contrast_coding.htm\nA categorical variable of K categories, or levels, usually enters a regression as a sequence of K-1 dummy variables. This amounts to a linear hypothesis on the level means. That is, each test statistic for these variables amounts to testing whether the mean for that level is statistically significantly different from the mean of the base category. This dummy coding is called Treatment coding in R parlance, and we will follow this convention. There are, however, different coding methods that amount to different sets of linear hypotheses.\nIn fact, the dummy coding is not technically a contrast coding. This is because the dummy variables add to one and are not functionally independent of the model's intercept. On the other hand, a set of contrasts for a categorical variable with k levels is a set of k-1 functionally independent linear combinations of the factor level means that are also independent of the sum of the dummy variables. The dummy coding isn't wrong per se. It captures all of the coefficients, but it complicates matters when the model assumes independence of the coefficients such as in ANOVA. Linear regression models do not assume independence of the coefficients and thus dummy coding is often the only coding that is taught in this context.\nTo have a look at the contrast matrices in Patsy, we will use data from UCLA ATS. First let's load the data.\nExample Data", "import pandas as pd\nurl = 'https://stats.idre.ucla.edu/stat/data/hsb2.csv'\nhsb2 = pd.read_table(url, delimiter=\",\")\n\nhsb2.head(10)", "It will be instructive to look at the mean of the dependent variable, write, for each level of race ((1 = Hispanic, 2 = Asian, 3 = African American and 4 = Caucasian)).", "hsb2.groupby('race')['write'].mean()", "Treatment (Dummy) Coding\nDummy coding is likely the most well known coding scheme. It compares each level of the categorical variable to a base reference level. The base reference level is the value of the intercept. It is the default contrast in Patsy for unordered categorical factors. The Treatment contrast matrix for race would be", "from patsy.contrasts import Treatment\nlevels = [1,2,3,4]\ncontrast = Treatment(reference=0).code_without_intercept(levels)\nprint(contrast.matrix)", "Here we used reference=0, which implies that the first level, Hispanic, is the reference category against which the other level effects are measured. As mentioned above, the columns do not sum to zero and are thus not independent of the intercept. To be explicit, let's look at how this would encode the race variable.", "hsb2.race.head(10)\n\nprint(contrast.matrix[hsb2.race-1, :][:20])\n\nsm.categorical(hsb2.race.values)", "This is a bit of a trick, as the race category conveniently maps to zero-based indices. If it does not, this conversion happens under the hood, so this won't work in general but nonetheless is a useful exercise to fix ideas. The below illustrates the output using the three contrasts above", "from statsmodels.formula.api import ols\nmod = ols(\"write ~ C(race, Treatment)\", data=hsb2)\nres = mod.fit()\nprint(res.summary())", "We explicitly gave the contrast for race; however, since Treatment is the default, we could have omitted this.\nSimple Coding\nLike Treatment Coding, Simple Coding compares each level to a fixed reference level. 
However, with simple coding, the intercept is the grand mean of all the levels of the factors. Patsy doesn't have the Simple contrast included, but you can easily define your own contrasts. To do so, write a class that contains a code_with_intercept and a code_without_intercept method that returns a patsy.contrast.ContrastMatrix instance", "from patsy.contrasts import ContrastMatrix\n\ndef _name_levels(prefix, levels):\n return [\"[%s%s]\" % (prefix, level) for level in levels]\n\nclass Simple(object):\n def _simple_contrast(self, levels):\n nlevels = len(levels)\n contr = -1./nlevels * np.ones((nlevels, nlevels-1))\n contr[1:][np.diag_indices(nlevels-1)] = (nlevels-1.)/nlevels\n return contr\n\n def code_with_intercept(self, levels):\n contrast = np.column_stack((np.ones(len(levels)),\n self._simple_contrast(levels)))\n return ContrastMatrix(contrast, _name_levels(\"Simp.\", levels))\n\n def code_without_intercept(self, levels):\n contrast = self._simple_contrast(levels)\n return ContrastMatrix(contrast, _name_levels(\"Simp.\", levels[:-1]))\n\nhsb2.groupby('race')['write'].mean().mean()\n\ncontrast = Simple().code_without_intercept(levels)\nprint(contrast.matrix)\n\nmod = ols(\"write ~ C(race, Simple)\", data=hsb2)\nres = mod.fit()\nprint(res.summary())", "Sum (Deviation) Coding\nSum coding compares the mean of the dependent variable for a given level to the overall mean of the dependent variable over all the levels. That is, it uses contrasts between each of the first k-1 levels and level k In this example, level 1 is compared to all the others, level 2 to all the others, and level 3 to all the others.", "from patsy.contrasts import Sum\ncontrast = Sum().code_without_intercept(levels)\nprint(contrast.matrix)\n\nmod = ols(\"write ~ C(race, Sum)\", data=hsb2)\nres = mod.fit()\nprint(res.summary())", "This corresponds to a parameterization that forces all the coefficients to sum to zero. Notice that the intercept here is the grand mean where the grand mean is the mean of means of the dependent variable by each level.", "hsb2.groupby('race')['write'].mean().mean()", "Backward Difference Coding\nIn backward difference coding, the mean of the dependent variable for a level is compared with the mean of the dependent variable for the prior level. This type of coding may be useful for a nominal or an ordinal variable.", "from patsy.contrasts import Diff\ncontrast = Diff().code_without_intercept(levels)\nprint(contrast.matrix)\n\nmod = ols(\"write ~ C(race, Diff)\", data=hsb2)\nres = mod.fit()\nprint(res.summary())", "For example, here the coefficient on level 1 is the mean of write at level 2 compared with the mean at level 1. Ie.,", "res.params[\"C(race, Diff)[D.1]\"]\nhsb2.groupby('race').mean()[\"write\"][2] - \\\n hsb2.groupby('race').mean()[\"write\"][1]", "Helmert Coding\nOur version of Helmert coding is sometimes referred to as Reverse Helmert Coding. The mean of the dependent variable for a level is compared to the mean of the dependent variable over all previous levels. Hence, the name 'reverse' being sometimes applied to differentiate from forward Helmert coding. 
This comparison does not make much sense for a nominal variable such as race, but we would use the Helmert contrast like so:", "from patsy.contrasts import Helmert\ncontrast = Helmert().code_without_intercept(levels)\nprint(contrast.matrix)\n\nmod = ols(\"write ~ C(race, Helmert)\", data=hsb2)\nres = mod.fit()\nprint(res.summary())", "To illustrate, the comparison on level 4 is the mean of the dependent variable at the previous three levels taken from the mean at level 4", "grouped = hsb2.groupby('race')\ngrouped.mean()[\"write\"][4] - grouped.mean()[\"write\"][:3].mean()", "As you can see, these are only equal up to a constant. Other versions of the Helmert contrast give the actual difference in means. Regardless, the hypothesis tests are the same.", "k = 4\n1./k * (grouped.mean()[\"write\"][k] - grouped.mean()[\"write\"][:k-1].mean())\nk = 3\n1./k * (grouped.mean()[\"write\"][k] - grouped.mean()[\"write\"][:k-1].mean())", "Orthogonal Polynomial Coding\nThe coefficients taken on by polynomial coding for k=4 levels are the linear, quadratic, and cubic trends in the categorical variable. The categorical variable here is assumed to be represented by an underlying, equally spaced numeric variable. Therefore, this type of encoding is used only for ordered categorical variables with equal spacing. In general, the polynomial contrast produces polynomials of order k-1. Since race is not an ordered factor variable let's use read as an example. First we need to create an ordered categorical from read.", "hsb2['readcat'] = np.asarray(pd.cut(hsb2.read, bins=3))\nhsb2.groupby('readcat').mean()['write']\n\nfrom patsy.contrasts import Poly\nlevels = hsb2.readcat.unique().tolist()\ncontrast = Poly().code_without_intercept(levels)\nprint(contrast.matrix)\n\nmod = ols(\"write ~ C(readcat, Poly)\", data=hsb2)\nres = mod.fit()\nprint(res.summary())", "As you can see, readcat has a significant linear effect on the dependent variable write but not a significant quadratic or cubic effect." ]
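As a small numerical cross-check of the Treatment-coding interpretation given near the top of this notebook (assuming hsb2 is still loaded): each Treatment coefficient should equal the level mean of write minus the mean of the Hispanic reference level.

```python
means = hsb2.groupby('race')['write'].mean()

# differences from the reference level; these line up with the
# C(race, Treatment)[T.2], [T.3] and [T.4] coefficients reported above
print(means - means.loc[1])
```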
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
flaxandteal/python-course-lecturer-notebooks
04 - interactions.ipynb
mit
[ "<table style=\"float:left; border:none\">\n <tr style=\"border:none\">\n <td style=\"border:none\">\n <a href=\"http://bokeh.pydata.org/\"> \n <img \n src=\"http://bokeh.pydata.org/en/latest/_static/bokeh-transparent.png\" \n style=\"width:70px\"\n >\n </a> \n </td>\n <td style=\"border:none\">\n <h1>Bokeh Tutorial &mdash; Adding Interactions</h1>\n </td>\n </tr>\n</table>", "from bokeh.io import output_notebook, show\n\noutput_notebook()", "Simple Layouts\nIn order to add widgets or have multiple plots that are linked together, you must first be able to create documents that contain these separate objects. It is possible to accomplish this in your own custom templates using bokeh.embed.components. But, Bokeh also provides simple layout capability for grid plots, vplots, and hplots (than can be nested). \nAn example using gridplot is shown below:", "from bokeh.plotting import figure\nfrom bokeh.io import gridplot\n\nx = list(range(11))\ny0, y1, y2 = x, [10-i for i in x], [abs(i-5) for i in x]\n\n# create a new plot\ns1 = figure(width=250, plot_height=250)\ns1.circle(x, y0, size=10, color=\"navy\", alpha=0.5)\n\n# create another one\ns2 = figure(width=250, height=250)\ns2.triangle(x, y1, size=10, color=\"firebrick\", alpha=0.5)\n\n# create and another\ns3 = figure(width=250, height=250)\ns3.square(x, y2, size=10, color=\"olive\", alpha=0.5)\n\n# put all the plots in an HBox\np = gridplot([[s1, s2, s3]], toolbar_location=None)\n\n# show the results\nshow(p)\n\n# EXERCISE: create a gridplot of your own\n\n", "Bokeh also provides the vplot and hplot functions to arrange plot objects in vertical or horizontal layouts.", "# EXERCISE: use vplot to arrange a few plots vertically\n\n", "Linked Interactions\nIt is possible to link various interactions between different Bokeh plots. For instance, the ranges of two (or more) plots can be linked, so that when one of the plots is panned (or zoomed, or otherwise has its range changed) the other plots will update in unison. It is also possible to link selections between two plots, so that when items are selected on one plot, the corresponding items on the second plot also become selected. \nLinked panning\nLinked panning (when mulitple plots have ranges that stay in sync) is simple to spell with Bokeh. You simply share the approrpate range objects between two (or more) plots. The example below shows how to accomplish this by linking the ranges of three plots in various ways:", "plot_options = dict(width=250, plot_height=250, title=None, tools='pan')\n\n# create a new plot\ns1 = figure(**plot_options)\ns1.circle(x, y0, size=10, color=\"navy\")\n\n# create a new plot and share both ranges\ns2 = figure(x_range=s1.x_range, y_range=s1.y_range, **plot_options)\ns2.triangle(x, y1, size=10, color=\"firebrick\")\n\n# create a new plot and share only one range\ns3 = figure(x_range=s1.x_range, **plot_options)\ns3.square(x, y2, size=10, color=\"olive\")\n\np = gridplot([[s1, s2, s3]])\n\n# show the results\nshow(p)\n\n# EXERCISE: create two plots in a gridplot, and link their ranges\n\n", "Linked brushing\nLinking selections is accomplished in a similar way, by sharing data sources between plots. Note that normally with bokeh.plotting and bokeh.charts creating a default data source for simple plots is handled automatically. However to share a data source, we must create them by hand and pass them explicitly. 
This is illustrated in the example below:", "from bokeh.models import ColumnDataSource\n\nx = list(range(-20, 21))\ny0, y1 = [abs(xx) for xx in x], [xx**2 for xx in x]\n\n# create a column data source for the plots to share\nsource = ColumnDataSource(data=dict(x=x, y0=y0, y1=y1))\n\nTOOLS = \"box_select,lasso_select,help\"\n\n# create a new plot and add a renderer\nleft = figure(tools=TOOLS, width=300, height=300)\nleft.circle('x', 'y0', source=source)\n\n# create another new plot and add a renderer\nright = figure(tools=TOOLS, width=300, height=300)\nright.circle('x', 'y1', source=source)\n\np = gridplot([[left, right]])\n\nshow(p)\n\n# EXERCISE: create two plots in a gridplot, and link their data sources\n\n", "Hover Tools\nBokeh has a Hover Tool that allows additional information to be displayed in a popup whenever the user hovers over a specific glyph. Basic hover tool configuration amounts to providing a list of (name, format) tuples. The full details can be found in the User's Guide here.\nThe example below shows some basic usage of the Hover tool with a circle glyph:", "from bokeh.models import HoverTool\n\nsource = ColumnDataSource(\n data=dict(\n x=[1, 2, 3, 4, 5],\n y=[2, 5, 8, 2, 7],\n desc=['A', 'b', 'C', 'd', 'E'],\n )\n )\n\nhover = HoverTool(\n tooltips=[\n (\"index\", \"$index\"),\n (\"(x,y)\", \"($x, $y)\"),\n (\"desc\", \"@desc\"),\n ]\n )\n\np = figure(plot_width=300, plot_height=300, tools=[hover], title=\"Mouse over the dots\")\n\np.circle('x', 'y', size=20, source=source)\n\n# Also show custom hover \nfrom utils import get_custom_hover\n\nshow(gridplot([[p, get_custom_hover()]]))", "IPython Interactors\nIt is possible to use native IPython notebook interactors together with Bokeh. In the interactor update function, the push_notebook method can be used to update a data source (presumably based on the interactor widget values) to cause a plot to update.\nWarning: The current implementation of push_notebook leaks memory. It is suitable for interactive exploration but not for long-running or streaming use cases. The problem will be resolved in future releases.\nThe example below shows a \"trig function\" explorer using IPython interactors:", "import numpy as np\nfrom bokeh.models import Line\n\nx = np.linspace(0, 2*np.pi, 2000)\ny = np.sin(x)\n\nsource = ColumnDataSource(data=dict(x=x, y=y))\n\np = figure(title=\"simple line example\", plot_height=300, plot_width=600, y_range=(-5, 5))\np.line(x, y, color=\"#2222aa\", alpha=0.5, line_width=2, source=source, name=\"foo\")\n\ndef update(f, w=1, A=1, phi=0):\n if f == \"sin\": func = np.sin\n elif f == \"cos\": func = np.cos\n elif f == \"tan\": func = np.tan\n source.data['y'] = A * func(w * x + phi)\n source.push_notebook()\n\nshow(p)\n\nfrom ipywidgets import interact\ninteract(update, f=[\"sin\", \"cos\", \"tan\"], w=(0,10, 0.1), A=(0,5, 0.1), phi=(0, 10, 0.1))", "Widgets\nBokeh supports direct integration with a small basic widget set. These can be used in conjunction with a Bokeh Server, or with CustomJS models to add more interactive capability to your documents. You can see a complete list, with example code in the Adding Widgets section of the User's Guide. 
\nTo use the widgets, include them in a layout like you would a plot object:", "from bokeh.models.widgets import Slider\nfrom bokeh.io import vform\n\nslider = Slider(start=0, end=10, value=1, step=.1, title=\"foo\")\n\nshow(vform(slider))\n\n# EXERCISE: create and show a Select widget \n", "Callbacks", "from bokeh.models import TapTool, CustomJS, ColumnDataSource\n\ncallback = CustomJS(code=\"alert('hello world')\")\ntap = TapTool(callback=callback)\n\np = figure(plot_width=600, plot_height=300, tools=[tap])\n\np.circle('x', 'y', size=20, source=ColumnDataSource(data=dict(x=[1, 2, 3, 4, 5], y=[2, 5, 8, 2, 7])))\n\nshow(p)", "Lots of places to add callbacks\n\nWidgets - Button, Toggle, Dropdown, TextInput, AutocompleteInput, Select, Multiselect, Slider, (DateRangeSlider), DatePicker,\nTools - TapTool, BoxSelectTool, HoverTool,\nSelection - ColumnDataSource, AjaxDataSource, BlazeDataSource, ServerDataSource\nRanges - Range1d, DataRange1d, FactorRange\n\nCallbacks for widgets\nWidgets that have values associated can have small JavaScript actions attached to them. These actions (also referred to as \"callbacks\") are executed whenever the widget's value is changed. In order to make it easier to refer to specific Bokeh models (e.g., a data source, or a glyph) from JavaScript, the CustomJS object also accepts a dictionary of \"args\" that map names to Python Bokeh models. The corresponding JavaScript models are made available automatically to the CustomJS code. \nAn example below shows an action attached to a slider that updates a data source whenever the slider is moved:", "from bokeh.io import vform\nfrom bokeh.models import CustomJS, ColumnDataSource, Slider\n\nx = [x*0.005 for x in range(0, 200)]\ny = x\n\nsource = ColumnDataSource(data=dict(x=x, y=y))\n\nplot = figure(plot_width=400, plot_height=400)\nplot.line('x', 'y', source=source, line_width=3, line_alpha=0.6)\n\ncallback = CustomJS(args=dict(source=source), code=\"\"\"\n var data = source.get('data');\n var f = cb_obj.get('value')\n x = data['x']\n y = data['y']\n for (i = 0; i < x.length; i++) {\n y[i] = Math.pow(x[i], f)\n }\n source.trigger('change');\n\"\"\")\n\nslider = Slider(start=0.1, end=4, value=1, step=.1, title=\"power\", callback=callback)\n\nlayout = vform(slider, plot)\n\nshow(layout)", "Callbacks for selections\nIt's also possible to make JavaScript actions that execute whenever a user selection (e.g., box, point, lasso) changes. 
This is done by attaching the same kind of CustomJS object to whatever data source the selection is made on.\nThe example below is a bit more sophisticated, and demonstrates updating one glyph's data source in response to another glyph's selection:", "from random import random\n\nx = [random() for x in range(500)]\ny = [random() for y in range(500)]\ncolor = [\"navy\"] * len(x)\n\ns = ColumnDataSource(data=dict(x=x, y=y, color=color))\np = figure(plot_width=400, plot_height=400, tools=\"lasso_select\", title=\"Select Here\")\np.circle('x', 'y', color='color', size=8, source=s, alpha=0.4)\n\ns2 = ColumnDataSource(data=dict(ym=[0.5, 0.5]))\np.line(x=[0,1], y='ym', color=\"orange\", line_width=5, alpha=0.6, source=s2)\n\ns.callback = CustomJS(args=dict(s2=s2), code=\"\"\"\n var inds = cb_obj.get('selected')['1d'].indices;\n var d = cb_obj.get('data');\n var ym = 0\n \n if (inds.length == 0) { return; }\n \n for (i = 0; i < d['color'].length; i++) {\n d['color'][i] = \"navy\"\n }\n for (i = 0; i < inds.length; i++) {\n d['color'][inds[i]] = \"firebrick\"\n ym += d['y'][inds[i]]\n }\n \n ym /= inds.length\n s2.get('data')['ym'] = [ym, ym]\n \n cb_obj.trigger('change');\n s2.trigger('change');\n\"\"\")\n\nshow(p)", "More\nFor more interactions, see the User Guide - http://bokeh.pydata.org/en/latest/docs/user_guide/interaction.html" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
google/applied-machine-learning-intensive
content/06_other_models/01_k_means/colab.ipynb
apache-2.0
[ "<a href=\"https://colab.research.google.com/github/google/applied-machine-learning-intensive/blob/master/content/06_other_models/01_k_means/colab.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>\nCopyright 2020 Google LLC.", "# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.", "k-means\nk-means clustering is an unsupervised machine learning algorithm that can be used to group items into clusters.\nSo far we have only worked with supervised algorithms. Supervised algorithms have training data with labels that identify the numeric value or class for each item. These algorithms use labeled data to build a model that can be used to make predictions.\nk-means clustering is different. The training data is not labeled. Unlabeled training data is fed into the model, which attempts to find relationships in the data and create clusters based on those relationships. Once these clusters are formed, predictions can be made about which cluster new data items belong to.\nThe clusters can't easily be labeled in many cases. The clusters are \"emergent clusters\" and are created by the algorithm. They don't always map to groupings that you might expect.\nExample: Groups of Mushrooms\nLet's start by looking at a real world use case involving mushrooms. The University of California Irvine has a dataset containing various attributes of mushrooms. One of those attributes is the edibility of the mushroom: Is it edible or is it poisonous? We want to see if we can find clusters of mushroom attributes that can be used to determine if a mushroom is edible or not.\nLoad the Data\nFor this example we'll load the mushroom classification data. The dataset attributes about over 8,000 different mushrooms.\nUpload your kaggle.json file and run the code block below.", "! chmod 600 kaggle.json && (ls ~/.kaggle 2>/dev/null || mkdir ~/.kaggle) && mv kaggle.json ~/.kaggle/ && echo 'Done'", "And then use the Kaggle API to download the dataset.", "! kaggle datasets download uciml/mushroom-classification\n! ls", "Unzip the Data.", "! unzip mushroom-classification.zip\n! ls", "And finally, load the training data into a DataFrame.", "import pandas as pd\n\ndata = pd.read_csv('mushrooms.csv')\ndata.sample(n=10)", "Exploratory Data Analysis\nLet's take a closer look at the data that we'll be working with, starting with a simple describe.", "data.describe(include='all')", "It doesn't look like any columns are missing data since we see counts of 8,124 for every column.\nIt does look like all of the data is categorical. We'll need to convert it into numeric values for the model to work. Let's do it for every column except class. We aren't trying to predict class, but we do want to see if we can get pure clusters of one type of class. So we don't want it included in our training data. 
Also, it is the only feature that isn't observable without having dire consequences!", "columns = [c for c in data.columns.values if c != 'class']\nid_to_value_mappings = {}\nvalue_to_id_mappings = {}\n\nfor column in columns:\n i_to_v = sorted(data[column].unique())\n v_to_i = { v:i for i, v in enumerate(i_to_v)}\n \n\n numeric_column = column + '-id'\n data[numeric_column] = [v_to_i[v] for v in data[column]]\n\n value_to_id_mappings[column] = v_to_i\n id_to_value_mappings[numeric_column] = i_to_v\n\nnumeric_columns = id_to_value_mappings.keys()\ndata[numeric_columns].describe()", "Perform Clustering\nWe now have numeric data that a model can handle. To run k-means clustering on the data, we simply load k-means from scikit-learn and ask the model to find a specific number of clusters for us.\nNotice that we are scaling the data. The class IDs are integer values, and some columns have many more classes than others. Scaling helps make sure that columns with more classes don't have an undue influence on the model.", "from sklearn.cluster import KMeans\nfrom sklearn.preprocessing import scale\n\nmodel = KMeans(n_clusters=10)\nmodel.fit(scale(data[numeric_columns]))\n\nprint(model.inertia_)", "We asked scikit-learn to create 10 clusters for us, and then we printed out the inertia_ for the resultant clusters. Inertia is the sum of the squared distances of samples to their closest cluster center. Typically, the smaller the inertia the better.\nBut why did we choose 10 clusters? And is the inertia that we received reasonable?\nFind the Optimal Number of Clusters\nWith just one run of the algorithm, it is difficult to tell how many clusters we should have and what an appropriate inertia value is. k-means is trying to discover things about your data that you do not know. Picking a number of clusters at random isn't the best way to use k-means.\nInstead, you should experiment with a few different cluster values and measure the inertia of each. As you increase the number of clusters, your inertia should decrease.", "from sklearn.cluster import KMeans\nfrom sklearn.preprocessing import scale\nimport matplotlib.pyplot as plt\n\nclusters = list(range(5, 50, 5))\ninertias = []\n\nscaled_data = scale(data[numeric_columns])\n\nfor c in clusters:\n model = KMeans(n_clusters=c)\n model = model.fit(scaled_data)\n inertias.append(model.inertia_)\n\nplt.plot(clusters, inertias)\nplt.show()", "The resulting graph should start high and to the left and curve down as the number of clusters grows. The initial slope is steep, but begins to level off. Your optimal number of clusters is somewhere in the [\"elbow\" of the graph](https://en.wikipedia.org/wiki/Elbow_method_(clustering) as the slope levels.\nOnce you have this number, you need to then check to see if the number is reasonable for your use case. Say that the 'optimal' number of clusters for our mushroom identification is 15. Is that a reasonable number of clusters to deal with? If we have too many, we can overfit and make the model poor at generalizing. And what are the purposes of the clusters? If you are clustering mushrooms and want to find clusters that are definitely safe to eat, 15 or more clusters might be perfectly fine. If you are clustering customers for different advertising campaigns, 15 different campaigns might be more than your marketing department can handle.\nClustering the data is often just the start of your journey. Once you have clusters, you'll need to look at each group and try to determine what makes them similar. 
What patterns did the clustering find? And will that clustering be useful to you?\nExamining Clusters\nLet's say that 15 is a reasonable number of clusters. We can rebuild the model using that setting.", "from sklearn.cluster import KMeans\nfrom sklearn.preprocessing import scale\n\nmodel = KMeans(n_clusters=15)\nmodel.fit(scale(data[numeric_columns]))\n\nprint(model.inertia_)", "Now let's see if we have any 'pure' clusters. These are clusters with all-edible or all-poisonous mushrooms.", "import numpy as np\n\nfor cluster in sorted(np.unique(model.labels_)):\n num_edible = np.sum(data[model.labels_ == cluster]['class'] == 'e')\n total = np.sum(model.labels_ == cluster)\n print(cluster, num_edible / total)", "In our model we had clusters 0, 1, 6, and 10 be 100% edible. Clusters 2, 4, 7, and 12 were all poisonous. The remaining were a mix of the two.\nKnowing this, let's look at one of the all-edible clusters and see what attributes we could look for to have confidence that we have an edible mushroom.", "edible = data[model.labels_ == 1]\n\nfor column in edible.columns:\n if column.endswith('-id'):\n continue\n print(column, edible[column].unique())\n", "The mapping of the letter codes to more descriptive text can be found in the dataset description.\nExample: Classification of Digits\nClustering for data exploration purposes can lead to interesting insights in to your data, but clustering can also be used for classification purposes.\nIn the example below, we'll try to use k-means clustering to predict handwritten digits.\nLoad the Data\nWe'll load the digits dataset packaged with scikit-learn.", "from sklearn.datasets import load_digits\n\ndigits = load_digits()", "Scale the Data\nIt is good practice to scale the data to ensure that outliers don't have too big of an impact on the clustering.", "from sklearn.preprocessing import scale\n\nscaled_digits = scale(digits.data)", "Fit a Model\nWe can then create a k-means model with 10 clusters. (We know there are 10 digits from 0 through 9.)", "from sklearn.cluster import KMeans\n\nmodel = KMeans(n_clusters=10)\nmodel = model.fit(scaled_digits)", "Make Predictions\nWe can then use the model to predict which category a data point belongs to.\nIn the case below, we'll just use some of the data that we trained with for illustrative purposes. The prediction will provide a numeric value.", "cluster = model.predict([scaled_digits[0]])[0]\n\ncluster", "What is this value? Is it the predicted digit?\nNo. This number is the cluster that the model thinks the digit belongs to. 
To determine the predicted digit, we'll need to see what other digits are in the cluster and choose the most popular one for our classification.", "import numpy as np\n\nlabels = digits.target\n\ncluster_to_digit = [\n np.argmax(\n np.bincount(\n np.array(\n [labels[i] for i in range(len(model.labels_)) if model.labels_[i] == cluster]\n )\n )\n ) for cluster in range(10)\n]\n\ncluster_to_digit", "Here we can see the digit that each cluster represents.\nMeasure Model Quality\nIf we do have labeled data, as is the case with our digits data, then we can measure the quality of our model using the homogeneity score and the completeness score.", "from sklearn.metrics import homogeneity_score\nfrom sklearn.metrics import completeness_score\n\nhomogeneity = homogeneity_score(labels, model.labels_)\ncompleteness = completeness_score(labels, model.labels_)\nhomogeneity, completeness", "Exercises\nExercise 1\nLoad the iris dataset, create a k-means model with three clusters, and then find the homogeneity and completeness scores for the model. \nStudent Solution", "# Your code goes here", "Exercise 2\nLoad the iris dataset, and then create a k-means model with three clusters using only two features. (Try to find the best two features for clustering.) Create a plot of the two features.\nFor each datapoint in the chart, use a marker to encode the actual/correct species. For instance, use a triangle for Setosa, a square for Versicolour, and a circle for Virginica. Color each marker green if the predicted class matches the actual. Color each marker red if the classes don't match.\nStudent Solution", "# Your code goes here", "" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
BrownDwarf/ApJdataFrames
notebooks/Patten2006.ipynb
mit
[ "ApJdataFrames 007: Patten2006\nTitle: Spitzer IRAC Photometry of M, L, and T Dwarfs\nAuthors: Brian M Patten, John R Stauffer, Adam S Burrows, Massimo Marengo, Joseph L Hora, Kevin L Luhman, Sarah M Sonnett, Todd J Henry, Deepak Raghavan, S Thomas Megeath, James Liebert, and Giovanni G Fazio \nData is from this paper:", "%pylab inline\nimport seaborn as sns\n\nimport warnings\nwarnings.filterwarnings(\"ignore\")\n\nimport pandas as pd", "The tables define the value and error as a string:\nval (err) \nwhich is a pain in the ass because now I have to parse the strings, which always takes much longer than it should because data wrangling is hard sometimes.\nI define a function that takes a column name and a data frame and strips the output.", "def strip_parentheses(col, df):\n '''\n splits single column strings of \"value (error)\" into two columns of value and error\n \n input:\n -string name of column to split in two\n -dataframe to apply to\n \n returns dataframe\n '''\n \n out1 = df[col].str.replace(\")\",\"\").str.split(pat=\"(\")\n df_out = out1.apply(pd.Series)\n \n # Split the string on the whitespace \n base, sufx = col.split(\" \")\n df[base] = df_out[0].copy()\n df[base+\"_e\"] = df_out[1].copy()\n del df[col]\n \n return df\n ", "Table 1 - Basic data on sources", "names = [\"Name\",\"R.A. (J2000.0)\",\"Decl. (J2000.0)\",\"Spectral Type\",\"SpectralType Ref.\",\"Parallax (error)(arcsec)\",\n \"Parallax Ref.\",\"J (error)\",\"H (error)\",\"Ks (error)\",\"JHKRef.\",\"PhotSys\"]\n\ntbl1 = pd.read_csv(\"http://iopscience.iop.org/0004-637X/651/1/502/fulltext/64991.tb1.txt\", \n sep='\\t', names=names, na_values='\\ldots')\n\ncols_to_fix = [col for col in tbl1.columns.values if \"(error)\" in col]\nfor col in cols_to_fix:\n print col\n tbl1 = strip_parentheses(col, tbl1)\n\ntbl1.head()", "Table 3- IRAC photometry", "names = [\"Name\",\"Spectral Type\",\"[3.6] (error)\",\"n1\",\"[4.5] (error)\",\"n2\",\n \"[5.8] (error)\",\"n3\",\"[8.0] (error)\",\"n4\",\"[3.6]-[4.5]\",\"[4.5]-[5.8]\",\"[5.8]-[8.0]\",\"Notes\"]\n\ntbl3 = pd.read_csv(\"http://iopscience.iop.org/0004-637X/651/1/502/fulltext/64991.tb3.txt\", \n sep='\\t', names=names, na_values='\\ldots')\n\ncols_to_fix = [col for col in tbl3.columns.values if \"(error)\" in col]\ncols_to_fix\nfor col in cols_to_fix:\n print col\n tbl3 = strip_parentheses(col, tbl3)\n\ntbl3.head()\n\npd.options.display.max_columns = 50\n\ndel tbl3[\"Spectral Type\"] #This is repeated\n\npatten2006 = pd.merge(tbl1, tbl3, how=\"outer\", on=\"Name\")\npatten2006.head()", "Convert spectral type to number", "import gully_custom\n\npatten2006[\"SpT_num\"], _1, _2, _3= gully_custom.specTypePlus(patten2006[\"Spectral Type\"])", "Make a plot of mid-IR colors as a function of spectral type.", "sns.set_context(\"notebook\", font_scale=1.5)\n\nfor color in [\"[3.6]-[4.5]\", \"[4.5]-[5.8]\", \"[5.8]-[8.0]\"]:\n plt.plot(patten2006[\"SpT_num\"], patten2006[color], '.', label=color)\n \nplt.xlabel(r'Spectral Type (M0 = 0)')\nplt.ylabel(r'$[3.6]-[4.5]$')\nplt.title(\"IRAC colors as a function of spectral type\")\nplt.legend(loc='best')", "Save the cleaned data.", "patten2006.to_csv('../data/Patten2006/patten2006.csv', index=False)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
noammor/coursera-machinelearning-python
ex7/ml-ex7-pca.ipynb
mit
[ "# Exercise 7 | Principle Component Analysis", "import numpy as np\nimport matplotlib.pyplot as plt\nimport scipy.io\nimport itertools\n\n%matplotlib inline", "Part 1: Load Example Dataset\nWe start this exercise by using a small dataset that is easily to\n visualize.", "ex7data1 = scipy.io.loadmat('ex7data1.mat')\nX = ex7data1['X']\n\ndef plot_data(X, ax):\n ax.plot(X[:,0], X[:,1], 'bo')\n \nfig, ax = plt.subplots()\nplot_data(X, ax)\n\ndef normalize_features(X):\n mu = np.mean(X, 0)\n X_norm = X - mu\n sigma = np.std(X_norm, 0)\n X_norm = X_norm / sigma\n return X_norm, mu, sigma\n\nX_norm, mu, sigma = normalize_features(X)", "Part 2: Principal Component Analysis\nYou should now implement PCA, a dimension reduction technique. You\n should complete the following code.", "def pca(X):\n #PCA Run principal component analysis on the dataset X\n # [U, S] = pca(X) computes eigenvectors of the covariance matrix of X\n # Returns the eigenvectors U, the eigenvalues in S\n #\n\n m, n = X.shape \n \n # You need to return the following variables correctly.\n U = np.zeros((n, n))\n S = np.zeros(n)\n \n # ====================== YOUR CODE HERE ======================\n # Instructions: You should first compute the covariance matrix. Then, you\n # should use the \"scipy.linalg.svd\" function to compute the eigenvectors\n # and eigenvalues of the covariance matrix. \n #\n # Note: When computing the covariance matrix, remember to divide by m (the\n # number of examples).\n #\n\n \n \n # =========================================================================\n \n return U, S\n\nU, S = pca(X_norm)\nU", "Draw the eigenvectors centered at mean of data. These lines show the\n directions of maximum variations in the dataset.", "def draw_line(a, b, ax, *args):\n ax.plot([a[0], b[0]], [a[1], b[1]], *args)\n\nfig, ax = plt.subplots(figsize=(5,5))\nax.set_ylim(2, 8)\nax.set_xlim(0.5, 6.5)\nax.set_aspect('equal')\nplot_data(X, ax)\nax.plot(mu[0], mu[1])\ndraw_line(mu, mu + 1.5 * S[0] * U[0, :], ax, '-k')\ndraw_line(mu, mu + 1.5 * S[1] * U[1, :], ax, '-k')", "The top eigenvector should be [-0.707107, -0.707107].", "U[0]", "Part 3: Dimension Reduction\nYou should now implement the projection step to map the data onto the \n first k eigenvectors. The code will then plot the data in this reduced \n dimensional space. This will show you what the data looks like when \n using only the corresponding eigenvectors to reconstruct it.\nYou should complete the code in project_data.", "def project_data(X, U, K):\n #PROJECTDATA Computes the reduced data representation when projecting only \n #on to the top k eigenvectors\n # Z = projectData(X, U, K) computes the projection of \n # the normalized inputs X into the reduced dimensional space spanned by\n # the first K columns of U. It returns the projected examples in Z.\n #\n # You need to return the following variables correctly.\n Z = np.zeros((X.shape[0], K))\n \n # ====================== YOUR CODE HERE ======================\n # Instructions: Compute the projection of the data using only the top K \n # eigenvectors in U (first K columns). 
\n # For the i-th example X(i,:), the projection on to the k-th \n # eigenvector is given as follows:\n # x = X[i, :].T\n # projection_k = x.T.dot(U(:, k));\n #\n \n \n \n \n \n \n # =============================================================\n \n return Z", "Projection of the first example: (should be about 1.49631261)", "K = 1\nZ = project_data(X_norm, U, K)\nZ[0,0]\n\ndef recover_data(Z, U, K):\n #RECOVERDATA Recovers an approximation of the original data when using the \n #projected data\n # X_rec = RECOVERDATA(Z, U, K) recovers an approximation the \n # original data that has been reduced to K dimensions. It returns the\n # approximate reconstruction in X_rec.\n #\n \n # You need to return the following variables correctly.\n X_rec = np.zeros((Z.shape[0], U.shape[0]))\n \n \n # ====================== YOUR CODE HERE ======================\n # Instructions: Compute the approximation of the data by projecting back\n # onto the original space using the top K eigenvectors in U.\n #\n # For the i-th example Z(i,:), the (approximate)\n # recovered data for dimension j is given as follows:\n # v = Z(i, :)';\n # recovered_j = v' * U(j, 1:K)';\n #\n # Notice that U(j, 1:K) is a row vector.\n # \n \n \n \n \n \n # =============================================================\n \n return X_rec", "Approximation of the first example: (should be about [-1.05805279, -1.05805279])", "X_rec = recover_data(Z, U, K)\nX_rec[0]", "Draw lines connecting the projected points to the original points", "fig, ax = plt.subplots(figsize=(5,5))\nax.set_ylim(-3, 3)\nax.set_xlim(-3, 3)\nax.set_aspect('equal')\nplot_data(X_norm, ax)\nax.plot(X_rec[:,0], X_rec[:,1], 'ro')\n\nfor x_norm, x_rec in zip(X_norm, X_rec):\n draw_line(x_norm, x_rec, ax, '--k')", "Part 4: Loading and Visualizing Face Data\nWe start the exercise by first loading and visualizing the dataset.\n The following code will load the dataset into your environment, and later display the first 100 faces in the dataset.", "X = scipy.io.loadmat('ex7faces.mat')['X']\nX.shape\n\ndef display_faces(X, example_width=None):\n example_size = len(X[0])\n if example_width is None:\n example_width = int(np.sqrt(example_size))\n num_examples = len(X)\n figures_row_length = int(np.sqrt(num_examples))\n \n fig, axes = plt.subplots(nrows=figures_row_length, ncols=figures_row_length, figsize=(6,6))\n fig.subplots_adjust(wspace=0, hspace=0)\n for i, j in itertools.product(range(figures_row_length), range(figures_row_length)):\n ax = axes[i][j]\n ax.set_axis_off()\n ax.set_aspect('equal')\n example = X[i*figures_row_length + j].reshape(example_size//example_width, example_width).T\n ax.imshow(example, cmap='Greys_r')\n \ndisplay_faces(X[:100])", "Part 5: PCA on Face Data: Eigenfaces\nRun PCA and visualize the eigenvectors which are in this case eigenfaces\n We display the first 64 eigenfaces.\nBefore running PCA, it is important to first normalize X.", "X_norm, mu, sigma = normalize_features(X)\nU, S = pca(X_norm)\ndisplay_faces(U[:, :64].T)", "Part 6: Dimension Reduction for Faces\nProject images to the eigen space using the top k eigenvectors", "K = 100\nZ = project_data(X_norm, U, K)\nZ.shape", "Part 7: Visualization of Faces after PCA Dimension Reduction\nProject images to the eigen space using the top K eigen vectors and \n visualize only using those K dimensions.\n Compare to the original input.", "X_rec = recover_data(Z, U, K)\nX_rec.shape\n\ndisplay_faces(X_rec[:100])" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
mne-tools/mne-tools.github.io
0.12/_downloads/plot_eeg_erp.ipynb
bsd-3-clause
[ "%matplotlib inline", ".. _tut_erp:\nEEG processing and Event Related Potentials (ERPs)\nFor a generic introduction to the computation of ERP and ERF\nsee :ref:tut_epoching_and_averaging. Here we cover the specifics\nof EEG, namely:\n - setting the reference\n - using standard montages :func:mne.channels.Montage\n - Evoked arithmetic (e.g. differences)", "import mne\nfrom mne.datasets import sample", "Setup for reading the raw data", "data_path = sample.data_path()\nraw_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw.fif'\nevent_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw-eve.fif'\nraw = mne.io.read_raw_fif(raw_fname, add_eeg_ref=True, preload=True)", "Let's restrict the data to the EEG channels", "raw.pick_types(meg=False, eeg=True, eog=True)", "By looking at the measurement info you will see that we have now\n59 EEG channels and 1 EOG channel", "print(raw.info)", "In practice it's quite common to have some EEG channels that are actually\nEOG channels. To change a channel type you can use the\n:func:mne.io.Raw.set_channel_types method. For example\nto treat an EOG channel as EEG you can change its type using", "raw.set_channel_types(mapping={'EOG 061': 'eeg'})\nprint(raw.info)", "And to change the nameo of the EOG channel", "raw.rename_channels(mapping={'EOG 061': 'EOG'})", "Let's reset the EOG channel back to EOG type.", "raw.set_channel_types(mapping={'EOG': 'eog'})", "The EEG channels in the sample dataset already have locations.\nThese locations are available in the 'loc' of each channel description.\nFor the first channel we get", "print(raw.info['chs'][0]['loc'])", "And it's actually possible to plot the channel locations using\nthe :func:mne.io.Raw.plot_sensors method", "raw.plot_sensors()\nraw.plot_sensors('3d') # in 3D", "Setting EEG montage\nIn the case where your data don't have locations you can set them\nusing a :func:mne.channels.Montage. MNE comes with a set of default\nmontages. To read one of them do:", "montage = mne.channels.read_montage('standard_1020')\nprint(montage)", "To apply a montage on your data use the :func:mne.io.set_montage\nfunction. 
Here don't actually call this function as our demo dataset\nalready contains good EEG channel locations.\nNext we'll explore the definition of the reference.\nSetting EEG reference\nLet's first remove the reference from our Raw object.\nThis explicitly prevents MNE from adding a default EEG average reference\nrequired for source localization.", "raw_no_ref, _ = mne.io.set_eeg_reference(raw, [])", "We next define Epochs and compute an ERP for the left auditory condition.", "reject = dict(eeg=180e-6, eog=150e-6)\nevent_id, tmin, tmax = {'left/auditory': 1}, -0.2, 0.5\nevents = mne.read_events(event_fname)\nepochs_params = dict(events=events, event_id=event_id, tmin=tmin, tmax=tmax,\n reject=reject)\n\nevoked_no_ref = mne.Epochs(raw_no_ref, **epochs_params).average()\ndel raw_no_ref # save memory\n\ntitle = 'EEG Original reference'\nevoked_no_ref.plot(titles=dict(eeg=title))\nevoked_no_ref.plot_topomap(times=[0.1], size=3., title=title)", "Average reference: This is normally added by default, but can also\nbe added explicitly.", "raw_car, _ = mne.io.set_eeg_reference(raw)\nevoked_car = mne.Epochs(raw_car, **epochs_params).average()\ndel raw_car # save memory\n\ntitle = 'EEG Average reference'\nevoked_car.plot(titles=dict(eeg=title))\nevoked_car.plot_topomap(times=[0.1], size=3., title=title)", "Custom reference: Use the mean of channels EEG 001 and EEG 002 as\na reference", "raw_custom, _ = mne.io.set_eeg_reference(raw, ['EEG 001', 'EEG 002'])\nevoked_custom = mne.Epochs(raw_custom, **epochs_params).average()\ndel raw_custom # save memory\n\ntitle = 'EEG Custom reference'\nevoked_custom.plot(titles=dict(eeg=title))\nevoked_custom.plot_topomap(times=[0.1], size=3., title=title)", "Evoked arithmetics\nTrial subsets from Epochs can be selected using 'tags' separated by '/'.\nEvoked objects support basic arithmetic.\nFirst, we create an Epochs object containing 4 conditions.", "event_id = {'left/auditory': 1, 'right/auditory': 2,\n 'left/visual': 3, 'right/visual': 4}\nepochs_params = dict(events=events, event_id=event_id, tmin=tmin, tmax=tmax,\n reject=reject)\nepochs = mne.Epochs(raw, **epochs_params)\n\nprint(epochs)", "Next, we create averages of stimulation-left vs stimulation-right trials.\nWe can use basic arithmetic to, for example, construct and plot\ndifference ERPs.", "left, right = epochs[\"left\"].average(), epochs[\"right\"].average()\n\n(left - right).plot_joint() # create and plot difference ERP", "Note that by default, this is a trial-weighted average. If you have\nimbalanced trial numbers, consider either equalizing the number of events per\ncondition (using Epochs.equalize_event_counts), or the combine_evoked\nfunction.\nAs an example, first, we create individual ERPs for each condition.", "aud_l = epochs[\"auditory\", \"left\"].average()\naud_r = epochs[\"auditory\", \"right\"].average()\nvis_l = epochs[\"visual\", \"left\"].average()\nvis_r = epochs[\"visual\", \"right\"].average()\n\nall_evokeds = [aud_l, aud_r, vis_l, vis_r]\n\n# This could have been much simplified with a list comprehension:\n# all_evokeds = [epochs[cond] for cond in event_id]\n\n# Then, we construct and plot an unweighted average of left vs. 
right trials.\nmne.combine_evoked(all_evokeds, weights=(1, -1, 1, -1)).plot_joint()", "Often, it makes sense to store Evoked objects in a dictionary or a list -\neither different conditions, or different subjects.", "# If they are stored in a list, they can be easily averaged, for example,\n# for a grand average across subjects (or conditions).\ngrand_average = mne.grand_average(all_evokeds)\nmne.write_evokeds('/tmp/tmp-ave.fif', all_evokeds)\n\n# If Evokeds objects are stored in a dictionary, they can be retrieved by name.\nall_evokeds = dict((cond, epochs[cond].average()) for cond in event_id)\nprint(all_evokeds['left/auditory'])\n\n# Besides for explicit access, this can be used for example to set titles.\nfor cond in all_evokeds:\n all_evokeds[cond].plot_joint(title=cond)" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
mitdbg/modeldb
client/workflows/examples/text_classification_spacy.ipynb
mit
[ "Text Classification with spaCy\nThis walkthrough is based on this spaCy tutorial.\nTrain a convolutional neural network text classifier on the\nIMDB dataset, using the TextCategorizer component. The dataset will be loaded\nautomatically via Thinc's built-in dataset loader. The model is added to\nspacy.pipeline, and predictions are available via doc.cats.\nSet Up Environment\nThis notebook has been tested with the following package versions:\n(you may need to change pip to pip3, depending on your own Python environment)", "# Python >3.5\n!pip install verta\n!pip install spacy==2.1.6\n!python -m spacy download en", "Set Up Verta", "HOST = 'app.verta.ai'\n\nPROJECT_NAME = 'Film Review Classification'\nEXPERIMENT_NAME = 'spaCy CNN'\n\n# import os\n# os.environ['VERTA_EMAIL'] = \n# os.environ['VERTA_DEV_KEY'] = \n\nfrom verta import Client\nfrom verta.utils import ModelAPI\n\nclient = Client(HOST, use_git=False)\n\nproj = client.set_project(PROJECT_NAME)\nexpt = client.set_experiment(EXPERIMENT_NAME)\nrun = client.set_experiment_run()", "Imports", "from __future__ import print_function\n\nimport warnings\nwarnings.filterwarnings(\"ignore\", category=FutureWarning)\n\nimport random\n\nimport six\n\nimport numpy as np\nimport thinc.extra.datasets\nimport spacy\nfrom spacy.util import minibatch, compounding", "Helper Functions", "def load_data(limit=0, split=0.8):\n \"\"\"Load data from the IMDB dataset.\"\"\"\n # Partition off part of the dataset to train and test\n train_data, _ = thinc.extra.datasets.imdb()\n random.shuffle(train_data) \n train_data = train_data[-limit:]\n texts, labels = zip(*train_data)\n cats = [{\"POSITIVE\": bool(y), \"NEGATIVE\": not bool(y)} for y in labels]\n split = int(len(train_data) * split)\n return (texts[:split], cats[:split]), (texts[split:], cats[split:])\n\ndef evaluate(tokenizer, textcat, texts, cats):\n \"\"\"Evaluate with text data, calculates precision, recall and f score\"\"\"\n docs = (tokenizer(text) for text in texts)\n tp = 0.0 # True positives\n fp = 1e-8 # False positives\n fn = 1e-8 # False negatives\n tn = 0.0 # True negatives\n for i, doc in enumerate(textcat.pipe(docs)):\n gold = cats[i]\n for label, score in doc.cats.items():\n if label not in gold:\n continue\n if label == \"NEGATIVE\":\n continue\n if score >= 0.5 and gold[label] >= 0.5:\n tp += 1.0\n elif score >= 0.5 and gold[label] < 0.5:\n fp += 1.0\n elif score < 0.5 and gold[label] < 0.5:\n tn += 1\n elif score < 0.5 and gold[label] >= 0.5:\n fn += 1\n precision = tp / (tp + fp)\n recall = tp / (tp + fn)\n if (precision + recall) == 0:\n f_score = 0.0\n else:\n f_score = 2 * (precision * recall) / (precision + recall)\n return {\"textcat_p\": precision, \"textcat_r\": recall, \"textcat_f\": f_score}", "Train Model", "hyperparams = {\n 'model':'en',\n 'n_iter': 2, # epochs\n 'n_texts': 500, # num of training samples\n 'architecture': 'simple_cnn',\n 'num_samples': 1000,\n 'train_test_split': 0.8,\n 'dropout': 0.2\n}\nrun.log_hyperparameters(hyperparams)\n\n# using the basic en model\ntry:\n nlp = spacy.load(hyperparams['model']) # load existing spaCy model\nexcept OSError:\n nlp = spacy.blank(hyperparams['model']) # create blank Language class\n print(\"Created blank '{}' model\".format(hyperparams['model']))\nelse:\n print(\"Loaded model '{}'\".format(nlp))\n\n# add the text classifier to the pipeline if it doesn't exist\nif \"textcat\" not in nlp.pipe_names:\n textcat = nlp.create_pipe(\n \"textcat\",\n config={\n \"exclusive_classes\": True,\n \"architecture\": 
hyperparams['architecture'],\n }\n )\n nlp.add_pipe(textcat, last=True)\n# otherwise, get it, so we can add labels to it\nelse:\n textcat = nlp.get_pipe(\"textcat\")\n\n# add label to text classifier\n_= textcat.add_label(\"POSITIVE\")\n_= textcat.add_label(\"NEGATIVE\")\n\n# load the IMDB dataset\nprint(\"Loading IMDB data...\")\n(train_texts, train_cats), (dev_texts, dev_cats) = load_data(limit=hyperparams['num_samples'],\n split=hyperparams['train_test_split'])\nprint(\n \"Using {} examples ({} training, {} evaluation)\".format(\n hyperparams['num_samples'], len(train_texts), len(dev_texts)\n )\n)\ntrain_data = list(zip(train_texts, [{\"cats\": cats} for cats in train_cats]))\n\n# sample train data\ntrain_data[:1]\n\n# get names of other pipes to disable them during training\nother_pipes = [pipe for pipe in nlp.pipe_names if pipe != \"textcat\"]\nprint(\"other pipes:\", other_pipes)\nwith nlp.disable_pipes(*other_pipes): # only train textcat\n optimizer = nlp.begin_training()\n print(\"Training the model...\")\n print(\"{:^5}\\t{:^5}\\t{:^5}\\t{:^5}\".format(\"LOSS\", \"P\", \"R\", \"F\"))\n batch_sizes = compounding(4.0, 32.0, 1.001)\n for i in range(hyperparams['n_iter']):\n losses = {}\n # batch up the examples using spaCy's minibatch\n random.shuffle(train_data)\n batches = minibatch(train_data, size=batch_sizes)\n for batch in batches:\n texts, annotations = zip(*batch)\n nlp.update(texts, annotations, sgd=optimizer, drop=hyperparams['dropout'], losses=losses)\n with textcat.model.use_params(optimizer.averages):\n # evaluate on the dev data split off in load_data()\n scores = evaluate(nlp.tokenizer, textcat, dev_texts, dev_cats)\n print(\n \"{0:.3f}\\t{1:.3f}\\t{2:.3f}\\t{3:.3f}\".format( # print a simple table\n losses[\"textcat\"],\n scores[\"textcat_p\"],\n scores[\"textcat_r\"],\n scores[\"textcat_f\"],\n ) \n )\n run.log_observation('loss', losses['textcat'])\n run.log_observation('precision', scores['textcat_p'])\n run.log_observation('recall', scores['textcat_r'])\n run.log_observation('f_score', scores['textcat_f'])", "Log for Deployment\nCreate Wrapper Class\nVerta deployment expects a particular interface for its models.\nThey must expose a predict() function, so we'll create a thin wrapper class around our spaCy pipeline.", "class TextClassifier:\n def __init__(self, nlp):\n self.nlp = nlp\n\n def predict(self, input_list): # param must be a list/batch of inputs\n predictions = []\n for text in input_list:\n scores = self.nlp(text).cats\n if scores['POSITIVE'] > scores['NEGATIVE']:\n predictions.append(\"POSITIVE\")\n else:\n predictions.append(\"NEGATIVE\")\n \n return np.array(predictions) # response currently must be a NumPy array\n\ninput_list = [\n \"This movie was subpar at best.\",\n \"Plot didn't make sense.\"\n]\n\nmodel = TextClassifier(nlp)\nmodel.predict(input_list)", "Create Deployment Artifacts\nVerta deployment also needs a couple more details about the model.\nWhat do its inputs and outputs look like?", "from verta.utils import ModelAPI # Verta-provided utility class\n\nmodel_api = ModelAPI(\n input_list, # example inputs\n model.predict(input_list), # example outputs\n)", "What PyPI-installable packages (with version numbers) are required to deserialize and run the model?", "requirements = [\"numpy\", \"spacy\", \"thinc\"]\n\n# this could also have been a path to a requirements.txt file on disk\nrun.log_requirements(requirements)", "Log Model", "# test the trained model\ntest_text = 'The Lion King was very entertaining. 
The movie was visually spectacular.'\ndoc = nlp(test_text)\nprint(test_text)\nprint(doc.cats)\n\nrun.log_model(\n model,\n model_api=model_api,\n)", "Deployment", "run", "Click the link above to view your Experiment Run in the Verta Web App, and deploy it.\nOnce it's ready, you can make predictions against the deployed model.", "from verta._demo_utils import DeployedModel\n\ndeployed_model = DeployedModel(HOST, run.id)\n\ndeployed_model.predict([\"I would definitely watch this again!\"])", "" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
GustavoRP/IA369Z
dev/.ipynb_checkpoints/DTI_open_01-05-17_GRP-checkpoint.ipynb
gpl-3.0
[ "Openig DTI data\nThis JUPYTER notebook has a demonstration of how to open DTI in nifti format.\nimporting modules", "# import modules and libs\nimport io, os, sys, types\nimport numpy as np\n\n# image and graphic\nfrom IPython.display import Image\nfrom IPython.display import display\nimport matplotlib.pyplot as plt\n%matplotlib\n\n#import notebook as module\nsys.path.append('C:/iPython/DTIlib')\nimport DTIlib as DTI", "Loading the data\nIn this folder there are 83 subjects data in nifti format (.nii).\nThe dada for each subject is a volume of (70x256x256) voxels and composed of 3 eigenvalues, 3 eigenvectors, and FA volume.", "#subjects folder\nBASE_PATH = 'G:/DTI_DS/original'\n\n#subject number\n# subject_number = 84 # very inclined subject\nsubject_number = 1\n\nif(subject_number < 10):\n subject_dir = str(BASE_PATH)+str('/subject00')+str(subject_number)\nelse:\n subject_dir = str(BASE_PATH)+str('/subject0')+str(subject_number)\n \n\n#load DTI\nFA, evl, evt = DTI.load_fa_evl_evt(subject_dir)\nMD = DTI.Mean_Difusivity(evl)\n\n#print shapes\nprint('evt.shape =', evt.shape)\nprint('evl.shape =', evl.shape)\nprint('FA.shape =', FA.shape)\nprint('MD.shape =', MD.shape)", "Data visualization\nFA\nMD is a 3D scalar map that shows the difusion assimetry for each voxel, so each one is associated with an intensity value.\nInline image of FA in three different viels (Axial, coronal, and sagittal viels).", "# Show FA\n%matplotlib inline\n# %matplotlib notebook\nfrom matplotlib.widgets import Slider\n\nsz, sy, sx = FA.shape\n# set up figure\nfig = plt.figure(figsize=(15,15))\nxy = fig.add_subplot(1,3,1)\nplt.title(\"Axial Slice\")\nxz = fig.add_subplot(1,3,2)\nplt.title(\"Coronal Slice\")\nyz = fig.add_subplot(1,3,3)\nplt.title(\"Sagittal Slice\")\n\nframe = 0.5\nmaximo = np.max(np.abs(FA)) # normalize the FA values for better visualization\nminimo = np.min(np.abs(FA))\nxy.imshow(FA[np.floor(frame*sz),:,:], origin='lower', interpolation='nearest', cmap=\"gray\",vmin=0, vmax=maximo )\nxz.imshow(FA[:,np.floor(frame*sy),:], origin='lower', interpolation='nearest', cmap=\"gray\",vmin=0 , vmax=maximo )\nyz.imshow(FA[:,:,np.floor(frame*sx)], origin='lower', interpolation='nearest', cmap=\"gray\",vmin=0 , vmax=maximo )", "MD\nMD is a 3D scalar map that shows the mean difusion for each voxel, so each one is associated with an intensity value.\nInline image of slices of the MD in three different viels (Axial, coronal, and sagittal viels).", "# Show MD\n%matplotlib inline\n# %matplotlib notebook\nfrom matplotlib.widgets import Slider\n\nsz, sy, sx = MD.shape\n# set up figure\nfig = plt.figure(figsize=(15,15))\nxy = fig.add_subplot(1,3,1)\nplt.title(\"Axial Slice\")\nxz = fig.add_subplot(1,3,2)\nplt.title(\"Coronal Slice\")\nyz = fig.add_subplot(1,3,3)\nplt.title(\"Sagittal Slice\")\n\nframe = 0.5\nmaximo = np.max(np.abs(MD)) # normalize the MD values for better visualization\nminimo = np.min(np.abs(MD))\nxy.imshow(MD[np.floor(frame*sz),:,:], origin='lower', interpolation='nearest', cmap=\"gray\",vmin=0, vmax=maximo )\nxz.imshow(MD[:,np.floor(frame*sy),:], origin='lower', interpolation='nearest', cmap=\"gray\",vmin=0 , vmax=maximo )\nyz.imshow(MD[:,:,np.floor(frame*sx)], origin='lower', interpolation='nearest', cmap=\"gray\",vmin=0 , vmax=maximo )", "First vector (main tensor direction)\nThis is a 3D vecotr field, so each voxel is associated with a vector.\nInline image of slices of the FA in three different viels (Axial, coronal, and sagittal viels).", "# Show Vector Field\n\n%matplotlib inline\n# 
%matplotlib notebook\nfrom matplotlib.widgets import Slider\n\nevt_d = evt[0]*evt[0]\n\nnv, sz, sy, sx = evl.shape\n\n\nfig = plt.figure(figsize=(15,15))\nxy = fig.add_subplot(1,3,1)\nplt.title(\"Axial Slice\")\nplt.axis(\"off\")\nxz = fig.add_subplot(1,3,2)\nplt.title(\"Coronal Slice\")\nplt.axis(\"off\")\nyz = fig.add_subplot(1,3,3)\nplt.title(\"Sagittal Slice\")\nplt.axis(\"off\")\n\nstep_ = 1 #Subamostragem dos vetores\nmaxlen_= 32 #Tamanho do maior vetor\nrescale_ = 16 #Fator de rescala da imagem\n\n# crop = np.array([sz/3, sz*2/3, sy/3, sy*2/3, sx/3, sy*2/3]) # crop [z<, z>, y<, y>, x< x>]\ncrop = np.array([30, 40, 120, 136, 120, 136]) # crop [z<, z>, y<, y>, x< x>]\n\nframe = 0.5\n# seismic\nV1 = DTI.show_vector_field(evt_d[1,np.floor(frame*sz),:,:], evt_d[2,np.floor(frame*sz),:,:], step=step_, maxlen=maxlen_, rescale=rescale_)\nxy.imshow(V1[0,:,:], origin='lower',cmap=\"gray\")\nV2 = DTI.show_vector_field(evt_d[0,:,np.floor(frame*sy),:], evt_d[2,:,np.floor(frame*sy),:], step=step_, maxlen=maxlen_, rescale=rescale_)\nxz.imshow(V2[0,:,:], origin='lower',cmap=\"gray\")\nV3 = DTI.show_vector_field(evt_d[0,:,:,np.floor(frame*sx)], evt_d[1,:,:,np.floor(frame*sx)], step=step_, maxlen=maxlen_, rescale=rescale_)\nyz.imshow(V3[0,:,:], origin='lower',cmap=\"gray\")\nplt.xticks([])\nplt.yticks([])\n\nfig = plt.figure(figsize=(15,15))\nxy = fig.add_subplot(1,3,1)\nplt.title(\"Axial Slice (zoom)\")\nplt.axis(\"off\")\nxz = fig.add_subplot(1,3,2)\nplt.title(\"Coronal Slice (zoom)\")\nplt.axis(\"off\")\nyz = fig.add_subplot(1,3,3)\nplt.title(\"Sagittal Slice (zoom)\")\nplt.axis(\"off\")\n\nV1 = DTI.show_vector_field(evt_d[1,np.floor(frame*sz),crop[2]:crop[3],crop[4]:crop[5]], evt_d[2,np.floor(frame*sz),crop[2]:crop[3],crop[4]:crop[5]], step=step_, maxlen=maxlen_, rescale=rescale_)\nxy.imshow(V1[0,:,:], origin='lower',cmap=\"gray\")\nV2 = DTI.show_vector_field(evt_d[0,crop[0]:crop[1],np.floor(frame*sy),crop[4]:crop[5]], evt_d[2,crop[0]:crop[1],np.floor(frame*sy),crop[4]:crop[5]], step=step_, maxlen=maxlen_, rescale=rescale_)\nxz.imshow(V2[0,:,:], origin='lower',cmap=\"gray\")\nV3 = DTI.show_vector_field(evt_d[0,crop[0]:crop[1],crop[2]:crop[3],np.floor(frame*sx)], evt_d[1,crop[0]:crop[1],crop[2]:crop[3],np.floor(frame*sx)], step=step_, maxlen=maxlen_, rescale=rescale_)\nyz.imshow(V3[0,:,:], origin='lower',cmap=\"gray\")\n\n\n\n\nplt.show()" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
jseabold/statsmodels
examples/notebooks/statespace_arma_0.ipynb
bsd-3-clause
[ "Autoregressive Moving Average (ARMA): Sunspots data\nThis notebook replicates the existing ARMA notebook using the statsmodels.tsa.statespace.SARIMAX class rather than the statsmodels.tsa.ARMA class.", "%matplotlib inline\n\nimport numpy as np\nfrom scipy import stats\nimport pandas as pd\nimport matplotlib.pyplot as plt\n\nimport statsmodels.api as sm\n\nfrom statsmodels.graphics.api import qqplot", "Sunspots Data", "print(sm.datasets.sunspots.NOTE)\n\ndta = sm.datasets.sunspots.load_pandas().data\n\ndta.index = pd.Index(pd.date_range(\"1700\", end=\"2009\", freq=\"A-DEC\"))\ndel dta[\"YEAR\"]\n\ndta.plot(figsize=(12,4));\n\nfig = plt.figure(figsize=(12,8))\nax1 = fig.add_subplot(211)\nfig = sm.graphics.tsa.plot_acf(dta.values.squeeze(), lags=40, ax=ax1)\nax2 = fig.add_subplot(212)\nfig = sm.graphics.tsa.plot_pacf(dta, lags=40, ax=ax2)\n\narma_mod20 = sm.tsa.statespace.SARIMAX(dta, order=(2,0,0), trend='c').fit(disp=False)\nprint(arma_mod20.params)\n\narma_mod30 = sm.tsa.statespace.SARIMAX(dta, order=(3,0,0), trend='c').fit(disp=False)\n\nprint(arma_mod20.aic, arma_mod20.bic, arma_mod20.hqic)\n\nprint(arma_mod30.params)\n\nprint(arma_mod30.aic, arma_mod30.bic, arma_mod30.hqic)", "Does our model obey the theory?", "sm.stats.durbin_watson(arma_mod30.resid)\n\nfig = plt.figure(figsize=(12,4))\nax = fig.add_subplot(111)\nax = plt.plot(arma_mod30.resid)\n\nresid = arma_mod30.resid\n\nstats.normaltest(resid)\n\nfig = plt.figure(figsize=(12,4))\nax = fig.add_subplot(111)\nfig = qqplot(resid, line='q', ax=ax, fit=True)\n\nfig = plt.figure(figsize=(12,8))\nax1 = fig.add_subplot(211)\nfig = sm.graphics.tsa.plot_acf(resid, lags=40, ax=ax1)\nax2 = fig.add_subplot(212)\nfig = sm.graphics.tsa.plot_pacf(resid, lags=40, ax=ax2)\n\nr,q,p = sm.tsa.acf(resid, fft=True, qstat=True)\ndata = np.c_[range(1,41), r[1:], q, p]\ntable = pd.DataFrame(data, columns=['lag', \"AC\", \"Q\", \"Prob(>Q)\"])\nprint(table.set_index('lag'))", "This indicates a lack of fit.\n\n\nIn-sample dynamic prediction. How good does our model do?", "predict_sunspots = arma_mod30.predict(start='1990', end='2012', dynamic=True)\n\nfig, ax = plt.subplots(figsize=(12, 8))\ndta.loc['1950':].plot(ax=ax)\npredict_sunspots.plot(ax=ax, style='r');\n\ndef mean_forecast_err(y, yhat):\n return y.sub(yhat).mean()\n\nmean_forecast_err(dta.SUNACTIVITY, predict_sunspots)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
julienchastang/unidata-python-workshop
notebooks/AWIPS/Grid_Levels_and_Parameters.ipynb
mit
[ "This example covers the callable methods of the Python AWIPS DAF when working with gridded data. We start with a connection to an EDEX server, then query data types, then grid names, parameters, levels, and other information. Finally the gridded data is plotted for its domain using Matplotlib and Cartopy.\nDataAccessLayer.getSupportedDatatypes()\ngetSupportedDatatypes() returns a list of available data types offered by the EDEX server defined above.", "from awips.dataaccess import DataAccessLayer\nDataAccessLayer.changeEDEXHost(\"edex-cloud.unidata.ucar.edu\")\ndataTypes = DataAccessLayer.getSupportedDatatypes()\nlist(dataTypes)", "DataAccessLayer.getAvailableLocationNames()\nNow create a new data request, and set the data type to grid to request all available grids with getAvailableLocationNames()", "request = DataAccessLayer.newDataRequest()\nrequest.setDatatype(\"grid\")\navailable_grids = DataAccessLayer.getAvailableLocationNames(request)\navailable_grids.sort()\nlist(available_grids)", "DataAccessLayer.getAvailableParameters()\nAfter datatype and model name (locationName) are set, you can query all available parameters with getAvailableParameters()", "request.setLocationNames(\"RAP13\")\navailableParms = DataAccessLayer.getAvailableParameters(request)\navailableParms.sort()\nlist(availableParms)", "DataAccessLayer.getAvailableLevels()\nSelecting \"T\" for temperature.", "request.setParameters(\"T\")\navailableLevels = DataAccessLayer.getAvailableLevels(request)\nfor level in availableLevels:\n print(level)", "0.0SFC is the Surface level\nFHAG stands for Fixed Height Above Ground (in meters)\nNTAT stands for Nominal Top of the ATmosphere\nBL stands for Boundary Layer, where 0.0_30.0BL reads as 0-30 mb above ground level \nTROP is the Tropopause level\n\nrequest.setLevels()\nFor this example we will use Surface Temperature", "request.setLevels(\"2.0FHAG\")", "DataAccessLayer.getAvailableTimes()\n\ngetAvailableTimes(request, True) will return an object of run times - formatted as YYYY-MM-DD HH:MM:SS\ngetAvailableTimes(request) will return an object of all times - formatted as YYYY-MM-DD HH:MM:SS (F:ff)\ngetForecastRun(cycle, times) will return a DataTime array for a single forecast cycle.", "cycles = DataAccessLayer.getAvailableTimes(request, True)\ntimes = DataAccessLayer.getAvailableTimes(request)\nfcstRun = DataAccessLayer.getForecastRun(cycles[-1], times)", "DataAccessLayer.getGridData()\nNow that we have our request and DataTime fcstRun arrays ready, it's time to request the data array from EDEX.", "response = DataAccessLayer.getGridData(request, [fcstRun[-1]])\nfor grid in response:\n data = grid.getRawData()\n lons, lats = grid.getLatLonCoords()\n print('Time :', str(grid.getDataTime()))\n\nprint('Model:', str(grid.getLocationName()))\nprint('Parm :', str(grid.getParameter()))\nprint('Unit :', str(grid.getUnit()))\nprint(data.shape)", "Plotting with Matplotlib and Cartopy\n1. 
pcolormesh", "%matplotlib inline\nimport matplotlib.pyplot as plt\nimport matplotlib\nimport cartopy.crs as ccrs\nimport cartopy.feature as cfeature\nfrom cartopy.mpl.gridliner import LONGITUDE_FORMATTER, LATITUDE_FORMATTER\nimport numpy as np\nimport numpy.ma as ma\nfrom scipy.io import loadmat\ndef make_map(bbox, projection=ccrs.PlateCarree()):\n fig, ax = plt.subplots(figsize=(16, 9),\n subplot_kw=dict(projection=projection))\n ax.set_extent(bbox)\n ax.coastlines(resolution='50m')\n gl = ax.gridlines(draw_labels=True)\n gl.xlabels_top = gl.ylabels_right = False\n gl.xformatter = LONGITUDE_FORMATTER\n gl.yformatter = LATITUDE_FORMATTER\n return fig, ax\n\ncmap = plt.get_cmap('rainbow')\nbbox = [lons.min(), lons.max(), lats.min(), lats.max()]\nfig, ax = make_map(bbox=bbox)\ncs = ax.pcolormesh(lons, lats, data, cmap=cmap)\ncbar = fig.colorbar(cs, extend='both', shrink=0.5, orientation='horizontal')\ncbar.set_label(grid.getLocationName().decode('UTF-8') +\" \" \\\n + grid.getLevel().decode('UTF-8') + \" \" \\\n + grid.getParameter().decode('UTF-8') \\\n + \" (\" + grid.getUnit().decode('UTF-8') + \") \" \\\n + \"valid \" + str(grid.getDataTime().getRefTime()))", "2. contourf", "fig2, ax2 = make_map(bbox=bbox)\ncs2 = ax2.contourf(lons, lats, data, 80, cmap=cmap,\n vmin=data.min(), vmax=data.max())\ncbar2 = fig2.colorbar(cs2, extend='both', shrink=0.5, orientation='horizontal')\ncbar2.set_label(grid.getLocationName().decode('UTF-8') +\" \" \\\n + grid.getLevel().decode('UTF-8') + \" \" \\\n + grid.getParameter().decode('UTF-8') \\\n + \" (\" + grid.getUnit().decode('UTF-8') + \") \" \\\n + \"valid \" + str(grid.getDataTime().getRefTime()))" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
mldbai/mldb
container_files/tutorials/Using pymldb Progress Bar and Cancel Button Tutorial.ipynb
apache-2.0
[ "Using pymldb's Progress Bar and Cancel Button Tutorial\nThis tutorial showcases the use of progress bars and cancel buttons for long-running procedures with pymldb with a Jupyter notebook. This allows a user to see the progress of a procedure as well as cancel it.\nIf you have not done so already, we encourage you to go through the Using pymldb Tutorial.\nHow does it work?\nTo use this feature, you only need to slightly modify the way you execute procedures. For example, when doing an HTTP PUT, you would go from using mldb.put() to mldb.put_and_track().\nThe cancel button is displayed as soon as the procedure run id is found. The button is removed as soon as the procedure finishes either normally or with an error.\nThe progress bar library used is tqdm/tqdm. Progress bars are displayed as soon as a procedure enters the \"executing\" state. Then they are refreshed at every interval for as long as the procedure stays in the \"executing\" state. They move to a valid state (they turn green) when a step/procedure finishes normally and to a danger state (they turn red) when they finish with an error.\nIf a procedure runs too quickly, the progress bars will not be displayed because the application logic will not have time to catch the \"executing\" phase. If a procedure stays in the \"initializing\" phase for some time, the \"Cancel\" button will be visible with no progress bars as long as the \"executing\" phase is not reached.\n⚠ Disclaimers\n\nThere is a known issue where the final value of the last progress bar may not reflect the real final value of what was done in MLDB. The reason for it is that once a procedure has finished running, it no longer reports how many items it processed for each step.\nDue to XSS (cross site scripting) restrictions, the cancel button provided with the progress bars will not work if the notebook is running on a different host than mldb itself.\n\nHere we start with the obligatory lines to import pymldb and initialize the connection to MLDB.", "import pymldb\nmldb = pymldb.Connection()", "Procedure with steps\nHere we post to a procedure with multiple steps. The steps are displayed as soon as the procedure starts running and are updated accordingly.", "print mldb.post_and_track('/v1/procedures', {\n 'type' : 'mock',\n 'params' : {'durationMs' : 8000, \"refreshRateMs\" : 500}\n }, 0.5)", "Procedure with no steps\nA procedure with no inner steps will simply display its progress.\nThis one is an example where the \"initializing\" phase sticks for some time, so the \"Cancel\" button is shown alone and eventually, when the \"executing\" phase is reached, the progress bar is displayed.", "print mldb.put_and_track('/v1/procedures/embedded_imagess', {\n 'type' : 'import.text',\n 'params' : {\n 'dataFileUrl' : 'https://s3.amazonaws.com/benchm-ml--main/train-1m.csv',\n 'outputDataset' : {\n 'id' : 'embedded_images_realestate',\n 'type' : 'sparse.mutable'\n }\n }\n}, 0.1)", "Serial procedure\nWhen using post_and_track along with a serial procedure, a progress bar is displayed for each step. 
They will only take the value of 0/1 and 1/1.", "prefix = 'file://mldb/mldb_test_data/dataset-builder'\nprint mldb.post_and_track('/v1/procedures', {\n 'type' : 'serial',\n 'params' : {\n 'steps' : [\n {\n 'type' : 'mock',\n 'params' : {'durationMs' : 2000, \"refreshRateMs\" : 500}\n }, {\n 'type' : 'import.text',\n 'params' : {\n 'dataFileUrl' : prefix + '/cache/dataset_creator_embedding_realestate.csv.gz',\n 'outputDataset' : {\n 'id' : 'embedded_images_realestate',\n 'type' : 'embedding'\n },\n 'select' : '* EXCLUDING(rowName)',\n 'named' : 'rowName',\n }\n }, {\n 'type' : 'mock',\n 'params' : {'durationMs' : 2000, \"refreshRateMs\" : 500}\n }\n ]\n }\n})", "Where to next?\nCheck out the other Tutorials and Demos." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
iamtrask/polyglot
notebooks/README.ipynb
gpl-3.0
[ "polyglot\n\n\n\n\nPolyglot is a natural language pipeline that supports massive multilingual applications.\n\nFree software: GPLv3 license\nDocumentation: http://polyglot.readthedocs.org.\n\nFeatures\n\nTokenization (165 Languages)\nLanguage detection (196 Languages)\nNamed Entity Recognition (40 Languages)\nPart of Speech Tagging (16 Languages)\nSentiment Analysis (136 Languages)\nWord Embeddings (137 Languages)\nMorphological analysis (135 Languages)\nTransliteration (69 Languages)\n\nDeveloper\n\nRami Al-Rfou @ rmyeid gmail com\n\nQuick Tutorial", "import polyglot\nfrom polyglot.text import Text, Word", "Language Detection", "text = Text(\"Bonjour, Mesdames.\")\nprint(\"Language Detected: Code={}, Name={}\\n\".format(text.language.code, text.language.name))", "Tokenization", "zen = Text(\"Beautiful is better than ugly. \"\n \"Explicit is better than implicit. \"\n \"Simple is better than complex.\")\nprint(zen.words)\n\nprint(zen.sentences)", "Part of Speech Tagging", "text = Text(u\"O primeiro uso de desobediência civil em massa ocorreu em setembro de 1906.\")\n\nprint(\"{:<16}{}\".format(\"Word\", \"POS Tag\")+\"\\n\"+\"-\"*30)\nfor word, tag in text.pos_tags:\n print(u\"{:<16}{:>2}\".format(word, tag))", "Named Entity Recognition", "text = Text(u\"In Großbritannien war Gandhi mit dem westlichen Lebensstil vertraut geworden\")\nprint(text.entities)", "Polarity", "print(\"{:<16}{}\".format(\"Word\", \"Polarity\")+\"\\n\"+\"-\"*30)\nfor w in zen.words[:6]:\n print(\"{:<16}{:>2}\".format(w, w.polarity))", "Embeddings", "word = Word(\"Obama\", language=\"en\")\nprint(\"Neighbors (Synonms) of {}\".format(word)+\"\\n\"+\"-\"*30)\nfor w in word.neighbors:\n print(\"{:<16}\".format(w))\nprint(\"\\n\\nThe first 10 dimensions out the {} dimensions\\n\".format(word.vector.shape[0]))\nprint(word.vector[:10])", "Morphology", "word = Text(\"Preprocessing is an essential step.\").words[0]\nprint(word.morphemes)", "Transliteration", "from polyglot.transliteration import Transliterator\ntransliterator = Transliterator(source_lang=\"en\", target_lang=\"ru\")\nprint(transliterator.transliterate(u\"preprocessing\"))" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
QuantEcon/phd_workshops
John/numba.ipynb
bsd-3-clause
[ "Vectorization and JIT compilation in Python\nExamples for the QuantEcon 2017 PhD workshops\nJohn Stachurski", "import numpy as np\nfrom numba import vectorize, jit, float64\nfrom quantecon.util import tic, toc\nimport matplotlib.pyplot as plt", "Problem 1: A Time Series Model\nConsider the time series model\n$$ x_{t+1} = \\alpha x_t (1 - x_t) $$\nLet's set $\\alpha = 4$", "α = 4", "Here's a typical time series:", "n = 200\nx = np.empty(n)\nx[0] = 0.2\nfor t in range(n-1):\n x[t+1] = α * x[t] * (1 - x[t])\n \nplt.plot(x)\nplt.show()", "Here's a function that simulates for n periods, starting from x0, and returns only the final value:", "def quad(x0, n):\n x = x0\n for i in range(1, n):\n x = α * x * (1 - x)\n return x", "Let's see how fast this runs:", "n = 10_000_000\n\ntic()\nx = quad(0.2, n)\ntoc()", "Now let's try this in FORTRAN. \nNote --- this step is intended to be a demo and will only execute if\n\nyou have the file fastquad.f90 in your pwd\nyou have a FORTRAN compiler installed and modify the compilation code below appropriately", "!cat fastquad.f90\n\n!gfortran -O3 fastquad.f90\n\n!./a.out", "Now let's do the same thing in Python using Numba's JIT compilation:", "quad_jitted = jit(quad)\n\ntic()\nx = quad_jitted(0.2, n)\ntoc()\n\ntic()\nx = quad_jitted(0.2, n)\ntoc()", "After JIT compilation, function execution speed is about the same as FORTRAN.\nBut remember, JIT compilation for Python is still limited --- see here\nIf these limitations frustrate you, then try Julia.\nProblem 2: Brute Force Optimization\nThe problem is to maximize the function \n$$ f(x, y) = \\frac{\\cos \\left(x^2 + y^2 \\right)}{1 + x^2 + y^2} + 1$$\nusing brute force --- searching over a grid of $(x, y)$ pairs.", "def f(x, y):\n return np.cos(x**2 + y**2) / (1 + x**2 + y**2) + 1\n\n\nfrom mpl_toolkits.mplot3d.axes3d import Axes3D\nfrom matplotlib import cm\n\ngridsize = 50\ngmin, gmax = -3, 3\nxgrid = np.linspace(gmin, gmax, gridsize)\nygrid = xgrid\nx, y = np.meshgrid(xgrid, ygrid)\n\n# === plot value function === #\nfig = plt.figure(figsize=(10, 8))\nax = fig.add_subplot(111, projection='3d')\nax.plot_surface(x,\n y,\n f(x, y),\n rstride=2, cstride=2,\n cmap=cm.jet,\n alpha=0.4,\n linewidth=0.05)\n\n\nax.scatter(x, y, c='k', s=0.6)\n\nax.scatter(x, y, f(x, y), c='k', s=0.6)\n\nax.view_init(25, -57)\nax.set_zlim(-0, 2.0)\nax.set_xlim(gmin, gmax)\nax.set_ylim(gmin, gmax)\n\nplt.show()\n", "Vectorized code", "grid = np.linspace(-3, 3, 10000)\n\nx, y = np.meshgrid(grid, grid)\n\ntic()\nnp.max(f(x, y))\ntoc()", "JITTed code\nA jitted version", "@jit\ndef compute_max():\n m = -np.inf\n for x in grid:\n for y in grid:\n z = np.cos(x**2 + y**2) / (1 + x**2 + y**2) + 1\n if z > m:\n m = z\n return m\n\ncompute_max()\n\ntic()\ncompute_max()\ntoc()", "Numba for vectorization with automatic parallelization - even faster:", "@vectorize('float64(float64, float64)', target='parallel')\ndef f_par(x, y):\n return np.cos(x**2 + y**2) / (1 + x**2 + y**2) + 1\n\nx, y = np.meshgrid(grid, grid)\n\nnp.max(f_par(x, y))\n\ntic()\nnp.max(f_par(x, y))\ntoc()" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
mne-tools/mne-tools.github.io
0.17/_downloads/1b26761ba88c6441bd13afd5730965a4/plot_stats_spatio_temporal_cluster_sensors.ipynb
bsd-3-clause
[ "%matplotlib inline", "Spatiotemporal permutation F-test on full sensor data\nTests for differential evoked responses in at least\none condition using a permutation clustering test.\nThe FieldTrip neighbor templates will be used to determine\nthe adjacency between sensors. This serves as a spatial prior\nto the clustering. Spatiotemporal clusters will then\nbe visualized using custom matplotlib code.\nCaveat for the interpretation of \"significant\" clusters: see\nthe FieldTrip website_.", "# Authors: Denis Engemann <denis.engemann@gmail.com>\n# Jona Sassenhagen <jona.sassenhagen@gmail.com>\n#\n# License: BSD (3-clause)\n\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom mpl_toolkits.axes_grid1 import make_axes_locatable\nfrom mne.viz import plot_topomap\n\nimport mne\nfrom mne.stats import spatio_temporal_cluster_test\nfrom mne.datasets import sample\nfrom mne.channels import find_ch_connectivity\nfrom mne.viz import plot_compare_evokeds\n\nprint(__doc__)", "Set parameters", "data_path = sample.data_path()\nraw_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw.fif'\nevent_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw-eve.fif'\nevent_id = {'Aud/L': 1, 'Aud/R': 2, 'Vis/L': 3, 'Vis/R': 4}\ntmin = -0.2\ntmax = 0.5\n\n# Setup for reading the raw data\nraw = mne.io.read_raw_fif(raw_fname, preload=True)\nraw.filter(1, 30, fir_design='firwin')\nevents = mne.read_events(event_fname)", "Read epochs for the channel of interest", "picks = mne.pick_types(raw.info, meg='mag', eog=True)\n\nreject = dict(mag=4e-12, eog=150e-6)\nepochs = mne.Epochs(raw, events, event_id, tmin, tmax, picks=picks,\n baseline=None, reject=reject, preload=True)\n\nepochs.drop_channels(['EOG 061'])\nepochs.equalize_event_counts(event_id)\n\nX = [epochs[k].get_data() for k in event_id] # as 3D matrix\nX = [np.transpose(x, (0, 2, 1)) for x in X] # transpose for clustering", "Find the FieldTrip neighbor definition to setup sensor connectivity", "connectivity, ch_names = find_ch_connectivity(epochs.info, ch_type='mag')\n\nprint(type(connectivity)) # it's a sparse matrix!\n\nplt.imshow(connectivity.toarray(), cmap='gray', origin='lower',\n interpolation='nearest')\nplt.xlabel('{} Magnetometers'.format(len(ch_names)))\nplt.ylabel('{} Magnetometers'.format(len(ch_names)))\nplt.title('Between-sensor adjacency')", "Compute permutation statistic\nHow does it work? We use clustering to bind together features which are\nsimilar. Our features are the magnetic fields measured over our sensor\narray at different times. This reduces the multiple comparison problem.\nTo compute the actual test-statistic, we first sum all F-values in all\nclusters. We end up with one statistic for each cluster.\nThen we generate a distribution from the data by shuffling our conditions\nbetween our samples and recomputing our clusters and the test statistics.\nWe test for the significance of a given cluster by computing the probability\nof observing a cluster of that size. For more background read:\nMaris/Oostenveld (2007), \"Nonparametric statistical testing of EEG- and\nMEG-data\" Journal of Neuroscience Methods, Vol. 164, No. 1., pp. 
177-190.\ndoi:10.1016/j.jneumeth.2007.03.024", "# set cluster threshold\nthreshold = 50.0 # very high, but the test is quite sensitive on this data\n# set family-wise p-value\np_accept = 0.01\n\ncluster_stats = spatio_temporal_cluster_test(X, n_permutations=1000,\n threshold=threshold, tail=1,\n n_jobs=1, buffer_size=None,\n connectivity=connectivity)\n\nT_obs, clusters, p_values, _ = cluster_stats\ngood_cluster_inds = np.where(p_values < p_accept)[0]", "Note. The same functions work with source estimate. The only differences\nare the origin of the data, the size, and the connectivity definition.\nIt can be used for single trials or for groups of subjects.\nVisualize clusters", "# configure variables for visualization\ncolors = {\"Aud\": \"crimson\", \"Vis\": 'steelblue'}\nlinestyles = {\"L\": '-', \"R\": '--'}\n\n# get sensor positions via layout\npos = mne.find_layout(epochs.info).pos\n\n# organize data for plotting\nevokeds = {cond: epochs[cond].average() for cond in event_id}\n\n# loop over clusters\nfor i_clu, clu_idx in enumerate(good_cluster_inds):\n # unpack cluster information, get unique indices\n time_inds, space_inds = np.squeeze(clusters[clu_idx])\n ch_inds = np.unique(space_inds)\n time_inds = np.unique(time_inds)\n\n # get topography for F stat\n f_map = T_obs[time_inds, ...].mean(axis=0)\n\n # get signals at the sensors contributing to the cluster\n sig_times = epochs.times[time_inds]\n\n # create spatial mask\n mask = np.zeros((f_map.shape[0], 1), dtype=bool)\n mask[ch_inds, :] = True\n\n # initialize figure\n fig, ax_topo = plt.subplots(1, 1, figsize=(10, 3))\n\n # plot average test statistic and mark significant sensors\n image, _ = plot_topomap(f_map, pos, mask=mask, axes=ax_topo, cmap='Reds',\n vmin=np.min, vmax=np.max, show=False)\n\n # create additional axes (for ERF and colorbar)\n divider = make_axes_locatable(ax_topo)\n\n # add axes for colorbar\n ax_colorbar = divider.append_axes('right', size='5%', pad=0.05)\n plt.colorbar(image, cax=ax_colorbar)\n ax_topo.set_xlabel(\n 'Averaged F-map ({:0.3f} - {:0.3f} s)'.format(*sig_times[[0, -1]]))\n\n # add new axis for time courses and plot time courses\n ax_signals = divider.append_axes('right', size='300%', pad=1.2)\n title = 'Cluster #{0}, {1} sensor'.format(i_clu + 1, len(ch_inds))\n if len(ch_inds) > 1:\n title += \"s (mean)\"\n plot_compare_evokeds(evokeds, title=title, picks=ch_inds, axes=ax_signals,\n colors=colors, linestyles=linestyles, show=False,\n split_legend=True, truncate_yaxis='max_ticks')\n\n # plot temporal cluster extent\n ymin, ymax = ax_signals.get_ylim()\n ax_signals.fill_betweenx((ymin, ymax), sig_times[0], sig_times[-1],\n color='orange', alpha=0.3)\n\n # clean up viz\n mne.viz.tight_layout(fig=fig)\n fig.subplots_adjust(bottom=.05)\n plt.show()", "Exercises\n\nWhat is the smallest p-value you can obtain, given the finite number of\n permutations?\nuse an F distribution to compute the threshold by traditional significance\n levels. Hint: take a look at :obj:scipy.stats.f\n\nReferences" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
DaveBackus/Data_Bootcamp
Code/Lab/Airbnb.ipynb
mit
[ "import sys\nimport pandas as pd\nimport matplotlib as ml\nimport matplotlib.pyplot as plt\n\n%matplotlib inline\n\nprint(\"Python version: \", sys.version)\nprint(\"Pandas version: \", pd.__version__)\nprint(\"Matplotlib version: \", ml.__version__)", "Airbnb Data\nFirst we read in the data", "url1 = \"http://data.insideairbnb.com/united-states/\"\nurl2 = \"ny/new-york-city/2016-02-02/data/listings.csv.gz\"\nfull_df = pd.read_csv(url1+url2, compression=\"gzip\")\n\nfull_df.head()", "We don't want all data, so let's focus on a few variables.", "df = full_df[[\"id\", \"price\", \"number_of_reviews\", \"review_scores_rating\"]]\n\ndf.head()", "Need to convert prices to floats", "df.replace({'price': {'\\$': ''}}, regex=True, inplace=True)\ndf.replace({'price': {'\\,': ''}}, regex=True, inplace=True)\ndf['price'] = df['price'].astype('float64', copy=False)", "We might think that better apartments get rented more often, let's plot a scatter (or multiple boxes?) plot of the number of reviews vs the review score", "df.plot.scatter(x=\"number_of_reviews\", y=\"review_scores_rating\", figsize=(10, 8), alpha=0.2)\n\nbins = [0, 5, 10, 25, 50, 100, 350]\nboxplot_vecs = []\n\nfig, ax = plt.subplots(figsize=(10, 8))\n\nfor i in range(1, 7):\n lb = bins[i-1]\n ub = bins[i]\n foo = df[\"review_scores_rating\"][df[\"number_of_reviews\"].apply(lambda x: lb <= x <= ub)].dropna()\n boxplot_vecs.append(foo.values)\n \nax.boxplot(boxplot_vecs, labels=bins[:-1])\nplt.show()", "Better reviews also are correlated with higher prices", "df.plot.scatter(x=\"review_scores_rating\", y=\"price\", figsize=(10, 8), alpha=0.2)" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
epfl-lts2/pygsp
examples/playground.ipynb
bsd-3-clause
[ "Playing with the PyGSP\nhttps://github.com/epfl-lts2/pygsp", "%matplotlib inline\n\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom pygsp import graphs, filters\n\nplt.rcParams['figure.figsize'] = (17, 5)", "1 Example\nThe following demonstrates how to instantiate a graph and a filter, the two main objects of the package.", "G = graphs.Logo()\nG.estimate_lmax()\ng = filters.Heat(G, tau=100)", "Let's now create a graph signal: a set of three Kronecker deltas for that example. We can now look at one step of heat diffusion by filtering the deltas with the above defined filter. Note how the diffusion follows the local structure!", "DELTAS = [20, 30, 1090]\ns = np.zeros(G.N)\ns[DELTAS] = 1\ns = g.filter(s)\nG.plot(s, highlight=DELTAS, backend='matplotlib')", "2 Tutorials and examples\nTry our tutorials or examples.", "# Your code here.", "3 Playground\nTry something of your own!\nThe API reference is your friend.", "# Your code here.", "If you miss a package, you can install it with:", "%pip install numpy" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
landlab/landlab
notebooks/tutorials/hillslope_geomorphology/transport-length_hillslope_diffuser/TLHDiff_tutorial.ipynb
mit
[ "<a href=\"http://landlab.github.io\"><img style=\"float: left\" src=\"../../landlab_header.png\"></a>\nThe transport-length hillslope diffuser\n<hr>\n<small>For more Landlab tutorials, click here: <a href=\"https://landlab.readthedocs.io/en/latest/user_guide/tutorials.html\">https://landlab.readthedocs.io/en/latest/user_guide/tutorials.html</a></small>\n<hr>\n\nThis Jupyter notebook illustrates running the transport-length-model hillslope diffusion component in a simple example.\nThe Basics\nThis component uses an approach similar to the Davy and Lague (2009) equation for fluvial erosion and transport, and applies it to hillslope diffusion. The formulation and implementation were inspired by Carretier et al. (2016); see this paper and references therein for justification.\nTheory\nThe elevation $z$ of a point of the landscape (such as a grid node) changes according to:\n\\begin{equation}\n \\frac{\\partial z}{\\partial t} = -\\epsilon + D + U \\tag{1}\\label{eq:1},\n\\end{equation}\nand we define:\n\\begin{equation}\n D = \\frac{q_s}{L} \\tag{2}\\label{eq:2},\n\\end{equation}\nwhere $\\epsilon$ is the local erosion rate [L/T], $D$ the local deposition rate [L/T], $U$ the uplift (or subsidence) rate [L/T], $q_s$ the incoming sediment flux per unit width [L$^2$/T] and $L$ is the transport length.\nWe specify the erosion rate $\\epsilon$ and the transport length $L$:\n\\begin{equation}\n \\epsilon = \\kappa S \\tag{3}\\label{eq:3}\n\\end{equation}\n\\begin{equation}\n L = \\frac{dx}{1-({S}/{S_c})^2} \\tag{4}\\label{eq:4}\n\\end{equation}\nwhere $\\kappa$ [L/T] is an erodibility coefficient, $S$ is the local slope [L/L] and $S_c$ is the critical slope [L/L]. \nThus, the elevation variation results from the difference between local rates of detachment and deposition. \nThe detachment rate is proportional to the local gradient. However, the deposition rate ($q_s/L$) depends on the local slope and the critical slope:\n- when $S \\ll S_c$, most of the sediment entering a node is deposited there, this is the pure diffusion case. In this case, the sediment flux $q_s$ does not include sediment eroded from above and is thus \"local\".\n- when $S \\approx S_c$, $L$ becomes infinity and there is no redeposition on the node, the sediments are transferred further downstream. This behaviour corresponds to mass wasting, grains can travel a long distance before being deposited. In that case, the flux $q_s$ is \"non-local\" as it incorporates sediments that have both been detached locally and transited from upslope.\n- for an intermediate $S$, there is a prgogressive transition between pure creep and \"balistic\" transport of the material. This is consistent with experiments (Roering et al., 2001; Gabet and Mendoza, 2012).\nContrast with the non-linear diffusion model\nPrevious models typically use a \"non-linear\" diffusion model proposed by different authors (e.g. Andrews and Hanks, 1985; Hanks, 1999; Roering et al., 1999) and supported by $^{10}$Be-derived erosion rates (e.g. Binnie et al., 2007) or experiments (Roering et al., 2001). 
It is usually presented in the followin form:\n$ $\n\\begin{equation} \n \\frac{\\partial z}{\\partial t} = \\frac{\\partial q_s}{\\partial x} \\tag{5}\\label{eq:5}\n\\end{equation}\n$ $\n\\begin{equation}\n q_s = \\frac{\\kappa' S}{1-({S}/{S_c})^2} \\tag{6}\\label{eq:6}\n\\end{equation}\nwhere $\\kappa'$ [L$^2$/T] is a diffusion coefficient.\nThis description is thus based on the definition of a flux of transported sediment parallel to the slope:\n- when the slope is small, this flux refers to diffusion-like processes such as biogenic soil disturbance, rain splash, or diffuse runoff\n- when the slope gets closer to the specified critical slope, the flux increases dramatically, simulating on average the cumulative effect of mass wasting events.\nDespite these conceptual differences, equations ($\\ref{eq:3}$) and ($\\ref{eq:4}$) predict similar topographic evolution to the 'non-linear' diffusion equations for $\\kappa' = \\kappa dx$, as shown in the following example.\nExample 1:\nFirst, we import what we'll need:", "import numpy as np\nfrom matplotlib.pyplot import figure, plot, show, title, xlabel, ylabel\n\nfrom landlab import RasterModelGrid\nfrom landlab.components import FlowDirectorSteepest, TransportLengthHillslopeDiffuser\nfrom landlab.plot import imshow_grid\n\n# to plot figures in the notebook:\n%matplotlib inline", "Make a grid and set boundary conditions:", "mg = RasterModelGrid(\n (20, 20), xy_spacing=50.0\n) # raster grid with 20 rows, 20 columns and dx=50m\nz = np.random.rand(mg.size(\"node\")) # random noise for initial topography\nmg.add_field(\"topographic__elevation\", z, at=\"node\")\n\nmg.set_closed_boundaries_at_grid_edges(\n False, True, False, True\n) # N and S boundaries are closed, E and W are open", "Set the initial and run conditions:", "total_t = 2000000.0 # total run time (yr)\ndt = 1000.0 # time step (yr)\nnt = int(total_t // dt) # number of time steps\nuplift_rate = 0.0001 # uplift rate (m/yr)\n\nkappa = 0.001 # erodibility (m/yr)\nSc = 0.6 # critical slope", "Instantiate the components:\nThe hillslope diffusion component must be used together with a flow router/director that provides the steepest downstream slope for each node, with a D4 method (creates the field topographic__steepest_slope at nodes).", "fdir = FlowDirectorSteepest(mg)\ntl_diff = TransportLengthHillslopeDiffuser(mg, erodibility=kappa, slope_crit=Sc)", "Run the components for 2 Myr and trace an East-West cross-section of the topography every 100 kyr:", "for t in range(nt):\n fdir.run_one_step()\n tl_diff.run_one_step(dt)\n z[mg.core_nodes] += uplift_rate * dt # add the uplift\n\n # add some output to let us see we aren't hanging:\n if t % 100 == 0:\n print(t * dt)\n\n # plot east-west cross-section of topography:\n x_plot = range(0, 1000, 50)\n z_plot = z[100:120]\n figure(\"cross-section\")\n plot(x_plot, z_plot)\n\nfigure(\"cross-section\")\ntitle(\"East-West cross section\")\nxlabel(\"x (m)\")\nylabel(\"z (m)\")", "And plot final topography:", "figure(\"final topography\")\nim = imshow_grid(\n mg, \"topographic__elevation\", grid_units=[\"m\", \"m\"], var_name=\"Elevation (m)\"\n)", "This behaviour corresponds to the evolution observed using a classical non-linear diffusion model.\nExample 2:\nIn this example, we show that when the slope is steep ($S \\ge S_c$), the transport-length hillsope diffusion simulates mass wasting, with long transport distances.\nFirst, we create a grid: the western half of the grid is flat at 0 m of elevation, the eastern half is a 45-degree slope.", "# Create 
grid and topographic elevation field:\nmg2 = RasterModelGrid((20, 20), xy_spacing=50.0)\n\nz = np.zeros(mg2.number_of_nodes)\nz[mg2.node_x > 500] = mg2.node_x[mg2.node_x > 500] / 10\nmg2.add_field(\"topographic__elevation\", z, at=\"node\")\n\n# Set boundary conditions:\nmg2.set_closed_boundaries_at_grid_edges(False, True, False, True)\n\n# Show initial topography:\nim = imshow_grid(\n mg2, \"topographic__elevation\", grid_units=[\"m\", \"m\"], var_name=\"Elevation (m)\"\n)\n\n# Plot an east-west cross-section of the initial topography:\nz_plot = z[100:120]\nx_plot = range(0, 1000, 50)\nfigure(2)\nplot(x_plot, z_plot)\ntitle(\"East-West cross section\")\nxlabel(\"x (m)\")\nylabel(\"z (m)\")", "Set the run conditions:", "total_t = 1000000.0 # total run time (yr)\ndt = 1000.0 # time step (yr)\nnt = int(total_t // dt) # number of time steps", "Instantiate the components:", "fdir = FlowDirectorSteepest(mg2)\ntl_diff = TransportLengthHillslopeDiffuser(mg2, erodibility=0.001, slope_crit=0.6)", "Run for 1 Myr, plotting the cross-section regularly:", "for t in range(nt):\n fdir.run_one_step()\n tl_diff.run_one_step(dt)\n\n # add some output to let us see we aren't hanging:\n if t % 100 == 0:\n print(t * dt)\n z_plot = z[100:120]\n figure(2)\n plot(x_plot, z_plot)", "The material is diffused from the top and along the slope and it accumulates at the bottom, where the topography flattens.\nAs a comparison, the following code uses linear diffusion on the same slope:", "# Import Linear diffuser:\nfrom landlab.components import LinearDiffuser\n\n# Create grid and topographic elevation field:\nmg3 = RasterModelGrid((20, 20), xy_spacing=50.0)\nz = np.ones(mg3.number_of_nodes)\nz[mg.node_x > 500] = mg.node_x[mg.node_x > 500] / 10\nmg3.add_field(\"topographic__elevation\", z, at=\"node\")\n\n# Set boundary conditions:\nmg3.set_closed_boundaries_at_grid_edges(False, True, False, True)\n\n# Instantiate components:\nfdir = FlowDirectorSteepest(mg3)\ndiff = LinearDiffuser(mg3, linear_diffusivity=0.1)\n\n# Set run conditions:\ntotal_t = 1000000.0\ndt = 1000.0\nnt = int(total_t // dt)\n\n# Run for 1 Myr, plotting east-west cross-section regularly:\nfor t in range(nt):\n fdir.run_one_step()\n diff.run_one_step(dt)\n\n # add some output to let us see we aren't hanging:\n if t % 100 == 0:\n print(t * dt)\n z_plot = z[100:120]\n figure(2)\n plot(x_plot, z_plot)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
jluttine/bayespy
doc/source/examples/multinomial.ipynb
mit
[ "# Some setting up stuff. This cell is hidden from the Sphinx-rendered documentation.\n%load_ext tikzmagic\n%matplotlib inline\n%config InlineBackend.figure_format = 'png'\nnp.random.seed(42)", "Multinomial distribution: bags of marbles\nWritten by: Deebul Nair (2016)\nEdited by: Jaakko Luttinen (2016)\nInspired by https://probmods.org/hierarchical-models.html\nUsing multinomial distribution\nThere are several bags of coloured marbles, each bag containing different amounts of each color. Marbles are drawn at random with replacement from the bags. The goal is to predict the distribution of the marbles in each bag.\nData generation\nLet us create a dataset. First, decide the number of bags, colors and trials (i.e., draws):", "n_colors = 5 # number of possible colors\nn_bags = 3 # number of bags\nn_trials = 20 # number of draws from each bag", "Generate randomly a color distribution for each bag:", "from bayespy import nodes\nimport numpy as np\n\np_colors = nodes.Dirichlet(n_colors * [0.5], plates=(n_bags,)).random()", "The concentration parameter $\\begin{bmatrix}0.5 & \\ldots & 0.5\\end{bmatrix}$ makes the distributions very non-uniform within each bag, that is, the amount of each color can be very different. We can visualize the probability distribution of the colors in each bag:", "import bayespy.plot as bpplt\nbpplt.hinton(p_colors)\nbpplt.pyplot.title(\"Original probability distributions of colors in the bags\");", "As one can see, the color distributions aren't very uniform in any of the bags because of the small concentration parameter. Next, make the ball draws:", "marbles = nodes.Multinomial(n_trials, p_colors).random()\nprint(marbles)", "Model\nWe will use the same generative model for estimating the color distributions in the bags as we did for generating the data:\n$$\n\\theta_i \\sim \\mathrm{Dirichlet}\\left(\\begin{bmatrix} 0.5 & \\ldots & 0.5 \\end{bmatrix}\\right)\n$$\n$$\ny_i | \\theta_i \\sim \\mathrm{Multinomial}(\\theta_i)\n$$\nThe simple graphical model can be drawn as below:", "%%tikz -f svg\n\\usetikzlibrary{bayesnet}\n\\node [latent] (theta) {$\\theta$};\n\\node [below=of theta, obs] (y) {$y$};\n\\edge {theta} {y};\n\\plate {trials} {(y)} {trials};\n\\plate {bags} {(theta)(y)(trials)} {bags};", "The model is constructed equivalently to the generative model (except we don't use the nodes to draw random samples):", "theta = nodes.Dirichlet(n_colors * [0.5], plates=(n_bags,))\ny = nodes.Multinomial(n_trials, theta)", "Data is provided by using the observe method:", "y.observe(marbles)", "Performing Inference", "from bayespy.inference import VB\nQ = VB(y, theta)\nQ.update(repeat=1000)\n\nimport bayespy.plot as bpplt\nbpplt.hinton(theta)\nbpplt.pyplot.title(\"Learned distribution of colors\")\nbpplt.pyplot.show()", "Using categorical Distribution\nThe same problem can be solved with categorical distirbution. Categorical distribution is similar to the Multinomical distribution expect for the output it produces.\nMultinomial and Categorical infer the number of colors from the size of the probability vector (p_theta)\nCategorical data is in a form where the value tells the index of the color that was picked in a trial. so if n_colors=5, Categorical data could be [4, 4, 0, 1, 1, 2, 4] if the number of trials was 7. \nmultinomial data is such that you have a vector where each element tells how many times that color was picked, for instance, [3, 0, 4] if you have 7 trials.\nSo there is significant difference in Multinomial and Categorical data . 
Depending on the data you have the choice of the Distribution has to be made.\nNow we can see an example of Hierarchical model usign categorical data generator and model", "from bayespy import nodes\nimport numpy as np\n\n#The marbles drawn based on the distribution for 10 trials\n# Using same p_color distribution as in the above example\ndraw_marbles = nodes.Categorical(p_colors,\n plates=(n_trials, n_bags)).random()", "Model", "from bayespy import nodes\nimport numpy as np\n\np_theta = nodes.Dirichlet(np.ones(n_colors),\n plates=(n_bags,),\n name='p_theta')\n\nbag_model = nodes.Categorical(p_theta,\n plates=(n_trials, n_bags),\n name='bag_model')", "Inference", "bag_model.observe(draw_marbles)\n\nfrom bayespy.inference import VB\nQ = VB(bag_model, p_theta)\n\nQ.update(repeat=1000)\n\n%matplotlib inline\nimport bayespy.plot as bpplt\nbpplt.hinton(p_theta)\nbpplt.pyplot.tight_layout()\nbpplt.pyplot.title(\"Learned Distribution of colors using Categorical Distribution\")\nbpplt.pyplot.show()" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
CalPolyPat/phys202-2015-work
assignments/assignment07/AlgorithmsEx01.ipynb
mit
[ "Algorithms Exercise 1\nImports", "%matplotlib inline\nfrom matplotlib import pyplot as plt\nimport numpy as np\nimport re", "Word counting\nWrite a function tokenize that takes a string of English text returns a list of words. It should also remove stop words, which are common short words that are often removed before natural language processing. Your function should have the following logic:\n\nSplit the string into lines using splitlines.\nSplit each line into a list of words and merge the lists for each line.\nUse Python's builtin filter function to remove all punctuation.\nIf stop_words is a list, remove all occurences of the words in the list.\nIf stop_words is a space delimeted string of words, split them and remove them.\nRemove any remaining empty words.\nMake all words lowercase.", "def tokenize(s, stop_words=None, punctuation='`~!@#$%^&*()_-+={[}]|\\:;\"<,>.?/}\\t'):\n \"\"\"Split a string into a list of words, removing punctuation and stop words.\"\"\"\n if type(stop_words)==str:\n stopwords=list(stop_words.split(\" \"))\n else: \n stopwords=stop_words\n lines = s.splitlines()\n words = [re.split(\" |--|-\", line) for line in lines]\n filtwords = []\n# stopfiltwords = []\n for w in words:\n for ch in w:\n result = list(filter(lambda x:x not in punctuation, ch))\n filtwords.append(\"\".join(result))\n if stopwords != None:\n filtwords=list(filter(lambda x:x not in stopwords and x != '', filtwords))\n filtwords=[f.lower() for f in filtwords]\n return filtwords\n\nassert tokenize(\"This, is the way; that things will end\", stop_words=['the', 'is']) == \\\n ['this', 'way', 'that', 'things', 'will', 'end']\nwasteland = \"\"\"APRIL is the cruellest month, breeding\nLilacs out of the dead land, mixing\nMemory and desire, stirring\nDull roots with spring rain.\n\"\"\"\n\nassert tokenize(wasteland, stop_words='is the of and') == \\\n ['april','cruellest','month','breeding','lilacs','out','dead','land',\n 'mixing','memory','desire','stirring','dull','roots','with','spring',\n 'rain']\nassert tokenize(\"hello--world\")==['hello', 'world']", "Write a function count_words that takes a list of words and returns a dictionary where the keys in the dictionary are the unique words in the list and the values are the word counts.", "def count_words(data):\n \"\"\"Return a word count dictionary from the list of words in data.\"\"\"\n wordcount={}\n for d in data:\n if d in wordcount:\n wordcount[d] += 1\n else:\n wordcount[d] = 1\n return wordcount\n\nassert count_words(tokenize('this and the this from and a a a')) == \\\n {'a': 3, 'and': 2, 'from': 1, 'the': 1, 'this': 2}", "Write a function sort_word_counts that return a list of sorted word counts:\n\nEach element of the list should be a (word, count) tuple.\nThe list should be sorted by the word counts, with the higest counts coming first.\nTo perform this sort, look at using the sorted function with a custom key and reverse\n argument.", "def sort_word_counts(wc):\n \"\"\"Return a list of 2-tuples of (word, count), sorted by count descending.\"\"\"\n def getkey(item):\n return item[1]\n sortedwords = [(i,wc[i]) for i in wc]\n return sorted(sortedwords, key=getkey, reverse=True)\n\nassert sort_word_counts(count_words(tokenize('this and a the this this and a a a'))) == \\\n [('a', 4), ('this', 3), ('and', 2), ('the', 1)]", "Perform a word count analysis on Chapter 1 of Moby Dick, whose text can be found in the file mobydick_chapter1.txt:\n\nRead the file into a string.\nTokenize with stop words of 'the of and a to in is it that as'.\nPerform a 
word count, the sort and save the result in a variable named swc.", "f = open('mobydick_chapter1.txt', 'r')\nswc = sort_word_counts(count_words(tokenize(f.read(), stop_words='the of and a to in is it that as')))\nprint(len(swc))\n\n\nassert swc[0]==('i',43)\nassert len(swc)==849\n\n#I changed the assert to length 849 instead of 848. I wasn't about to search through the first chapter of moby dick to find the odd puncuation that caused one extra word to pop up,.", "Create a \"Cleveland Style\" dotplot of the counts of the top 50 words using Matplotlib. If you don't know what a dotplot is, you will have to do some research...", "words50 = np.array(swc)\nf=plt.figure(figsize=(25,5))\nplt.plot(np.linspace(0,50,50), words50[:50,1], 'ko')\nplt.xlim(0,50)\nplt.xticks(np.linspace(0,50,50),words50[:50,0]);\n\nassert True # use this for grading the dotplot" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
telescopeuser/workshop_blog
wechat_tool_py3_local/terminal-script-py/lesson_6_terminal_py3.ipynb
mit
[ "<img src='https://www.iss.nus.edu.sg/Sitefinity/WebsiteTemplates/ISS/App_Themes/ISS/Images/branding-iss.png' width=15% style=\"float: right;\">\n<img src='https://www.iss.nus.edu.sg/Sitefinity/WebsiteTemplates/ISS/App_Themes/ISS/Images/branding-nus.png' width=15% style=\"float: right;\">", "# import IPython.display\n# IPython.display.YouTubeVideo('TBD')", "如何使用和开发微信聊天机器人的系列教程\nA workshop to develop & use an intelligent and interactive chat-bot in WeChat\nWeChat is a popular social media app, which has more than 800 million monthly active users.\n<img src='https://www.iss.nus.edu.sg/images/default-source/About-Us/7.6.1-teaching-staff/sam-website.tmb-.png' width=8% style=\"float: right;\">\n<img src='../reference/WeChat_SamGu_QR.png' width=10% style=\"float: right;\">\nby: GU Zhan (Sam)\nOctober 2018 : Update to support Python 3 in local machine, e.g. iss-vm.\nApril 2017 ======= Scan the QR code to become trainer's friend in WeChat =====>>\n第六课:交互式虚拟助手的智能应用\nLesson 6: Interactive Conversatioinal Virtual Assistant Applications / Intelligent Process Automations\n\n虚拟员工: 贷款填表申请审批一条龙自动化流程 (Virtual Worker: When Chat-bot meets RPA-bot for mortgage loan application automation) \n虚拟员工: 文字指令交互(Conversational automation using text/message command) \n虚拟员工: 语音指令交互(Conversational automation using speech/voice command) \n虚拟员工: 多种语言交互(Conversational automation with multiple languages)\n\nUsing Google Cloud Platform's Machine Learning APIs\nFrom the same API console, choose \"Dashboard\" on the left-hand menu and \"Enable API\".\nEnable the following APIs for your project (search for them) if they are not already enabled:\n<ol>\n**<li> Google Cloud Speech API </li>**\n**<li> Google Cloud Text-to-Speech API </li>**\n**<li> Google Cloud Translation API </li>**\n</ol>\n\nFinally, because we are calling the APIs from Python (clients in many other languages are available), let's install the Python package (it's not installed by default on Datalab)", "# Copyright 2016 Google Inc. Licensed under the Apache License, Version 2.0 (the \"License\"); \n# !pip install --upgrade google-api-python-client", "<span style=\"color:blue\">Virtual Worker: When Chat-bot meets RPA-bot</span>\n虚拟员工: 贷款填表申请审批一条龙自动化流程 (Mortgage loan application automation)\nSynchronous processing when triggering RPA-Bot", "# Library/Function to use operating system's shell script command, e.g. bash, echo, cd, pwd, etc\nimport subprocess, time\n\n# Funciton to trigger RPA-Bot (TagUI script: mortgage loan application automation) from VA-Bot (python script)\n# Trigger RPA-Bot [ Synchronous ]\n# def didi_invoke_rpa_bot(rpa_bot_file, rpa_bot = 'reference/S-IPA-Workshop/TagUI-S-IPA/src/tagui'):\ndef didi_invoke_rpa_bot(rpa_bot_file, rpa_bot = '../reference/S-IPA-Workshop/TagUI-S-IPA/src/tagui'):\n\n# Invoke RPA-Bot script\n print('[ W I P ] In progress to invoke RPA-Bot using command: \\n{}'.format(\n 'bash' + ' ' + rpa_bot + ' ' + rpa_bot_file))\n start = time.time()\n return_code = subprocess.call(['bash', rpa_bot, rpa_bot_file])\n end = time.time()\n if return_code == 0:\n print('[ Sync OK ] RPA-Bot succeeded! [ Return Code : {} ]'.format(return_code))\n else:\n print('[ ERROR ] RPA-Bot failed! 
[ Return Code : {} ]'.format(return_code))\n\n return return_code, int(round(end - start, 0)) # return_code & time_spent in seconds\n\n# Uncomment below lines for an agile demo outside Chat-bot:\n# rpa_bot_file = '../reference/S-IPA-Workshop/workshop2/KIE-Loan-Application-WeChat/VA-KIE-Loan-Application.txt'\n# return_code = didi_invoke_rpa_bot(rpa_bot_file)", "Asynchronous processing when triggering RPA-Bot", "# Trigger RPA-Bot [ Asynchronous ]\n# http://docs.dask.org/en/latest/_downloads/daskcheatsheet.pdf\nfrom dask.distributed import Client\ndef didi_invoke_rpa_bot_async(rpa_bot_file):\n client = Client(processes=False)\n ipa_task = client.submit(didi_invoke_rpa_bot, rpa_bot_file)\n ipa_task.add_done_callback(didi_invoke_rpa_bot_async_upon_completion)\n return 0, 0 # Dummy return. Actual result is returned by function didi_invoke_rpa_bot_async_upon_completion(ipa_task)\n\nfrom tornado import gen \n# https://stackoverflow.com/questions/40477518/how-to-get-the-result-of-a-future-in-a-callback\n@gen.coroutine\ndef didi_invoke_rpa_bot_async_upon_completion(ipa_task):\n print(u'[ Terminal Info ] didi_invoke_rpa_bot_async(rpa_bot_file) [ upon_completion ]')\n return_code, time_spent = ipa_task.result()\n print(return_code)\n print(time_spent)\n \n # Send confirmation message upon triggering RPA-Bot \n# itchat.send(u'[ Async OK ] IPA Command completed !\\n[ Time Spent : %s seconds ]\\n %s' % (time_spent, parm_msg['Text']), parm_msg['FromUserName'])\n itchat.send(u'[ Async OK ] IPA Command completed !\\n[ Time Spent : %s seconds ]' % (time_spent), parm_msg['FromUserName']) # parm_msg['Text'] can be in-sync due to new coming message.\n# return return_code, time_spent # No return needed. No pace to hold the info\n\n# Uncomment below lines for an agile demo outside Chat-bot:\n# rpa_bot_file = '../reference/S-IPA-Workshop/workshop2/KIE-Loan-Application-WeChat/VA-KIE-Loan-Application.txt'\n# return_code = didi_invoke_rpa_bot_async(rpa_bot_file)\n\nprint('[ Start of IPA-Bot ] Continue other tasks in main program...\\n...\\n')", "<span style=\"color:blue\">Wrap RPA-Bot into Functions() for conversational virtual assistant (VA):</span>\nReuse above defined Functions().\n虚拟员工: 文字指令交互(Conversational automation using text/message command)", "parm_msg = {} # Define a global variable to hold current msg\n\n# Define \"keywords intention command -> automation action\" lookup to invoke RPA-Bot process automation functions\nparm_bot_intention_action = {\n '#apply_loan': '../reference/S-IPA-Workshop/workshop2/KIE-Loan-Application-WeChat/VA-KIE-Loan-Application.txt'\n , '#ocr_invoice': '../reference/S-IPA-Workshop/workshop2/KIE-Loan-Application-WeChat/VA-KIE-Loan-Application.txt'\n , '#check_Application': '../reference/S-IPA-Workshop/workshop2/KIE-Loan-Application-WeChat/VA-KIE-Loan-Application.txt'\n , '#hi_everyone_welcome_to_see_you_here_in_the_process_automation_course': '../reference/S-IPA-Workshop/workshop2/KIE-Loan-Application-WeChat/VA-KIE-Loan-Application.txt'\n}\n", "Retrieve rpa_bot_file based on received Chat-Bot command", "# Retrieve rpa_bot_file based on received Chat-Bot command\ndef didi_retrieve_rpa_bot_file(chat_bot_command):\n print('[ W I P ] Retrieve rpa_bot_file based on received Chat-Bot command : {} -> {}'.format(\n chat_bot_command, chat_bot_command.lower()))\n \n if chat_bot_command.lower() in parm_bot_intention_action.keys():\n return parm_bot_intention_action[chat_bot_command.lower()]\n else:\n print('[ ERROR ] Command not found!')\n return None\n\n# Uncomment below lines for an agile 
demo outside Chat-bot:\n# didi_retrieve_rpa_bot_file('#apply_loan')\n\n# Uncomment below lines for an agile demo outside Chat-bot:\n# didi_retrieve_rpa_bot_file('#Apply_Loan')\n\n# Uncomment below lines for an agile demo outside Chat-bot:\n# didi_retrieve_rpa_bot_file('#approve_loan')", "虚拟员工: 语音指令交互(Conversational automation using speech/voice command)\n<span style=\"color:blue\">Use local AI module in native forms</span> for Speech Recognition: Speech-to-Text\n导入需要用到的一些功能程序库: Local AI Module Speech-to-Text", "# Local AI Module for Speech Synthesis: Speech-to-Text\n\n# Install library into computer storage:\n# !pip install SpeechRecognition\n\n# !pip install pocketsphinx\n\n\n# Load library into computer memory:\nimport speech_recognition as sr", "IF !pip install pocketsphinx failed, THEN: sudo apt-get install python python-dev python-pip build-essential swig libpulse-dev\nhttps://stackoverflow.com/questions/36523705/python-pocketsphinx-requesterror-missing-pocketsphinx-module-ensure-that-pocke\nSupported Languages\nhttps://github.com/Uberi/speech_recognition/blob/master/reference/pocketsphinx.rst#installing-other-languages.\nBy default, SpeechRecognition's Sphinx functionality supports only US English. Additional language packs are also available:\n* English (Default support) : en-US\n* International French : fr-FR\n* Mandarin Chinese : zh-CN\n* Italian : it-IT\nUtility function to convert mp3 file to 'wav / flac' audio file type:", "# Flag to indicate the environment to run this program:\n\n# Uncomment to run the code on Google Cloud Platform\n# parm_runtime_env_GCP = True\n\n# Uncomment to run the code in local machine\nparm_runtime_env_GCP = False\n\nimport subprocess\n\n# Utility function to convert mp3 file to target GCP audio file type:\n# audio_type = ['flac', 'wav']\n# audio_file_input = msg['FileName']\n\n# Running Speech API\ndef didi_mp3_audio_conversion(audio_file_input, audio_type='flac'):\n audio_file_output = str(audio_file_input) + '.' 
+ str(audio_type)\n \n # convert mp3 file to target GCP audio file:\n\n# remove audio_file_output, if exists\n retcode = subprocess.call(['rm', audio_file_output])\n \n if parm_runtime_env_GCP: # using Datalab in Google Cloud Platform\n # GCP: use avconv to convert audio\n retcode = subprocess.call(['avconv', '-i', audio_file_input, '-ac', '1', audio_file_output])\n else: # using an iss-vm Virtual Machine, or local machine\n # VM : use ffmpeg to convert audio\n retcode = subprocess.call(['ffmpeg', '-i', audio_file_input, '-ac', '1', audio_file_output])\n \n if retcode == 0:\n print('[ O K ] Converted audio file for API: %s' % audio_file_output)\n else:\n print('[ ERROR ] Function: didi_mp3_audio_conversion() Return Code is : {}'.format(retcode))\n\n return audio_file_output # return file name string only\n\n# convertion for files not in wav or flac format:\n# AUDIO_FILE = didi_mp3_audio_conversion(\"reference/S-IPA-welcome.mp3\")\n# AUDIO_FILE = didi_mp3_audio_conversion(\"reference/S-IPA-welcome.mp3\", 'wav')\n# AUDIO_FILE = didi_mp3_audio_conversion(\"reference/text2speech.mp3\")\n# AUDIO_FILE = didi_mp3_audio_conversion(\"reference/text2speech.mp3\", 'wav')", "Calling Local AI Module: speech_recognition.Recognizer().recognize_sphinx()", "# Running Local AI Module Speech-to-Text\ndef didi_speech2text_local(AUDIO_FILE, didi_language_code='en-US'):\n # Python 2\n\n # use the audio file as the audio source\n r = sr.Recognizer()\n with sr.AudioFile(AUDIO_FILE) as source:\n audio = r.record(source) # read the entire audio file\n \n transcription = ''\n # recognize speech using Sphinx\n try:\n transcription = r.recognize_sphinx(audio, language=didi_language_code)\n print(\"[ Terminal Info ] Sphinx thinks you said : \\'{}\\'.\".format(transcription))\n except sr.UnknownValueError:\n print(\"[ Terminal Info ] Sphinx could not understand audio\")\n except sr.RequestError as e:\n print(\"[ Terminal Info ] Sphinx error; {0}\".format(e))\n \n return transcription\n\n\n# Uncomment below lines for an agile demo outside Chat-bot:\n# transcription = didi_speech2text_local(didi_mp3_audio_conversion(\"reference/S-IPA-welcome.mp3\"))\n\n# Uncomment below lines for an agile demo outside Chat-bot:\n# transcription = didi_speech2text_local(\"reference/S-IPA-welcome.mp3.flac\")", "Fuzzy match from 'transcribed audio command' to predefined 'chat_bot_command'\nAutomatically create a new lookup, by converting text-based intention command to voice-based intention command.\nExample: from '#apply_loan' to 'voice command apply loan'", "# import json # Prints the nicely formatted dictionary\n# print(json.dumps(parm_bot_intention_action, indent=4, sort_keys=True))\n\nimport re\nparm_bot_intention_action_fuzzy_match = {}\nfor intention, action in parm_bot_intention_action.items():\n# print(intention)\n intention_fuzzy_match = \" \".join(re.split('#|_', intention.replace('#', 'voice_command_')))\n# print(action)\n parm_bot_intention_action_fuzzy_match[intention_fuzzy_match] = action\n\n# print(json.dumps(parm_bot_intention_action_fuzzy_match, indent=4, sort_keys=True))\n# print(parm_bot_intention_action_fuzzy_match)", "Fuzzy match function: Compare similarity between two text strings", "# Compare similarity between two text strings\ndef did_fuzzy_match_score(string1, string2):\n print('\\n[ Inside FUNCTION ] did_fuzzy_match_score')\n string1_list = string1.lower().split() # split by space\n string2_list = string2.lower().split() # split by space \n\n print('string1_list : ', string1_list)\n print('string2_list : ', 
string2_list)\n \n # words in common\n common_words = set(string1_list)&set(string2_list)\n# print('len(common_words) : ', len(common_words))\n\n # totoal unique words\n unique_words = set(string1_list + string2_list)\n# print('len(unique_words) : ', len(unique_words))\n \n jaccard_similarity = float(len(common_words) / len(unique_words))\n\n print('jaccard_similarity : {0:.3f}'.format(jaccard_similarity))\n \n return jaccard_similarity\n\n# Uncomment below lines for an agile demo outside Chat-bot:\n# did_fuzzy_match_score('run DIDI voice command apply loan', 'voice command apply loan')", "Retrieve rpa_bot_file based on received Chat-Bot command ( fuzzy match for voice/speech2text )", "# Retrieve rpa_bot_file based on received Chat-Bot command ( fuzzy match for voice/speech2text )\ndef didi_retrieve_rpa_bot_file_fuzzy_match(speech2text_chat_bot_command, didi_confidence_threshold=0.8):\n print('\\n[ Inside FUNCTION ] didi_retrieve_rpa_bot_file_fuzzy_match')\n matched_intention = [0.0, {}] # a lis to store intention_command of highest jaccard_similarity\n\n for intention, action in parm_bot_intention_action_fuzzy_match.items():\n# print('\\nintention : ', intention)\n# print('action : ', action)\n fuzzy_match_score_current = did_fuzzy_match_score(intention, speech2text_chat_bot_command)\n# print('jaccard_similarity_score_current : ', jaccard_similarity_score_current)\n if fuzzy_match_score_current > matched_intention[0]:\n matched_intention[0] = fuzzy_match_score_current\n matched_intention[1] = {intention : action}\n# print('matched_intention : ', matched_intention)\n \n print('\\n[ Finale ] matched_intention : ', matched_intention)\n \n if matched_intention[0] < didi_confidence_threshold: # not confident enough about fuzzy matched voice command\n return None\n else: # confident enough, thus return predefined rpa_bot_file\n return str(list(matched_intention[1].values())[0]) \n\n# Uncomment below lines for an agile demo outside Chat-bot:\n# parm_voice_command_confidence_threshold = 0.6 # Control of asynchronous or synchronous processing when triggering RPA-Bot\n# action_rpa_bot_file = didi_retrieve_rpa_bot_file_fuzzy_match('run DIDI voice command apply loan', parm_voice_command_confidence_threshold)\n# print('\\n[ Process Automation ] rpa_bot_file : ', action_rpa_bot_file)", "Control Parm", "# Control of asynchronous or synchronous processing when triggering RPA-Bot\nparm_asynchronous_process = True\n\n# Control of asynchronous or synchronous processing when triggering RPA-Bot\nparm_voice_command_confidence_threshold = 0.05 # low value for demo only\n", "<span style=\"color:blue\">Start interactive conversational virtual assistant (VA):</span>\nImport ItChat, etc. 
导入需要用到的一些功能程序库:", "import itchat\nfrom itchat.content import *", "Log in using QR code image / 用微信App扫QR码图片来自动登录", "# Running in Jupyther Notebook:\n# itchat.auto_login(hotReload=True) # hotReload=True: 退出程序后暂存登陆状态。即使程序关闭,一定时间内重新开启也可以不用重新扫码。\n# or\n# itchat.auto_login(enableCmdQR=-2) # enableCmdQR=-2: Jupyter Notebook 命令行显示QR图片\n\n# Running in Terminal:\nitchat.auto_login(enableCmdQR=2) # enableCmdQR=2: 命令行显示QR图片 ", "虚拟员工: 文字指令交互(Conversational automation using text/message command)", "# Trigger RPA-Bot when command received / 如果收到[TEXT]的信息:\n@itchat.msg_register([TEXT]) # 文字\ndef didi_ipa_text_command(msg):\n global parm_msg\n parm_msg = msg\n if msg['Text'][0] == '#':\n # Retrieve rpa_bot_file based on received Chat-Bot command\n rpa_bot_file = didi_retrieve_rpa_bot_file( msg['Text'])\n \n if rpa_bot_file == None: # input command / rpa_bot_file NOT FOUND!\n print(u'[ Terminal Info ] RPA-Bot [ ERROR ] Command not found : [ %s ] %s From: %s' \n % (msg['Type'], msg['Text'], msg['FromUserName']))\n itchat.send(u'RPA-Bot [ ERROR ] Command not found : \\n[ %s ]\\n%s' % (msg['Type'], msg['Text']), msg['FromUserName'])\n else:\n print(u'[ Terminal Info ] RPA-Bot [ W I P ] Command : [ %s ] %s From: %s' \n % (msg['Type'], msg['Text'], msg['FromUserName']))\n print(u'[ Terminal Info ] RPA-Bot [ W I P ] File : %s' % (rpa_bot_file))\n \n if parm_asynchronous_process: # Don't wait for RPA-Bot completion \n # Send 'work in progress' message triggering RPA-Bot\n itchat.send(u'[ Async WIP ] IPA Command triggered: \\n[ %s ]\\n%s' % (msg['Type'], msg['Text']), msg['FromUserName'])\n \n # Trigger RPA-Bot [ Asynchronous ]\n didi_invoke_rpa_bot_async(rpa_bot_file) # No return of return_code, time_spent\n else: # Wait for RPA-Bot completion \n # Send 'work in progress' message triggering RPA-Bot\n itchat.send(u'[ Sync WIP ] IPA Command triggered: \\n[ %s ]\\n%s' % (msg['Type'], msg['Text']), msg['FromUserName'])\n \n # Trigger RPA-Bot [ Synchronously ]\n return_code, time_spent = didi_invoke_rpa_bot(rpa_bot_file)\n print(u'[ Terminal Info ] didi_invoke_rpa_bot(rpa_bot_file) [ Return Code : %s ]' % (return_code))\n \n if return_code == 0:\n # Send confirmation message upon RPA-Bot completion\n itchat.send(u'[ Sync OK ] IPA Command completed : \\n[ %s ]\\n%s\\n[ Time Spent : %s seconds ]' % (msg['Type'], msg['Text'], time_spent), msg['FromUserName'])\n else:\n # Error when running RPA-Bot task\n itchat.send(u'[ Sync ERROR] [ Return Code : %s ] IPA Command failed : \\n[ %s ]\\n%s\\n[ Time Spent : %s seconds ]' % (return_code, msg['Type'], msg['Text'], time_spent), msg['FromUserName'])\n \n else:\n print(u'[ Terminal Info ] Thank you! 谢谢亲[嘴唇]我已收到 I received: [ %s ] %s From: %s' \n % (msg['Type'], msg['Text'], msg['FromUserName']))\n itchat.send(u'Thank you! 谢谢亲[嘴唇]我已收到\\nI received:\\n[ %s ]\\n%s' % (msg['Type'], msg['Text']), msg['FromUserName'])\n", "虚拟员工: 语音指令交互(Conversational automation using speech/voice command)", "# 1. 
语音转换成消息文字 (Speech recognition: voice to text)\n\n@itchat.msg_register([RECORDING], isGroupChat=True)\n@itchat.msg_register([RECORDING])\ndef download_files(msg):\n msg.download(msg.fileName)\n print('\\nDownloaded audio file name is: %s' % msg['FileName'])\n \n \n ###########################################################################################################\n # call audio analysis Local AI Sphinx #\n ###########################################################################################################\n \n audio_analysis_reply = u'[ Audio Analysis 音频处理结果 ]\\n'\n\n # Voice to Text:\n audio_analysis_reply += u'\\n[ Voice -> Text 语音识别 ]\\n'\n response = didi_speech2text_local(didi_mp3_audio_conversion(msg['FileName']), 'en-US')\n \n rpa_bot_file = didi_retrieve_rpa_bot_file_fuzzy_match(response, parm_voice_command_confidence_threshold)\n \n if rpa_bot_file == None: # input command / rpa_bot_file NOT FOUND!\n print(u'[ Terminal Info ] Not Confident IPA Command\\n') \n audio_analysis_reply += str(response) + u'\\n( Not Confident IPA Command )\\n'\n else:\n print(u'[ Terminal Info ] RPA-Bot [ W I P ] Command : %s' % (response))\n print(u'[ Terminal Info ] RPA-Bot [ W I P ] File : %s' % (rpa_bot_file))\n \n if parm_asynchronous_process: # Don't wait for RPA-Bot completion \n # Send 'work in progress' message triggering RPA-Bot\n audio_analysis_reply += (u'[ Async WIP ] IPA Command triggered\\n')\n \n # Trigger RPA-Bot [ Asynchronous ]\n didi_invoke_rpa_bot_async(rpa_bot_file) # No return of return_code, time_spent\n else: # Wait for RPA-Bot completion \n # Send 'work in progress' message triggering RPA-Bot\n audio_analysis_reply += (u'[ Sync WIP ] IPA Command triggered\\n')\n \n # Trigger RPA-Bot [ Synchronously ]\n return_code, time_spent = didi_invoke_rpa_bot(rpa_bot_file)\n print(u'[ Terminal Info ] didi_invoke_rpa_bot(rpa_bot_file) [ Return Code : %s ]' % (return_code))\n \n if return_code == 0:\n # Send confirmation message upon RPA-Bot completion\n audio_analysis_reply += (u'[ Sync OK] [ Return Code : %s ] IPA Command completed !\\n[ Time Spent : %s seconds ]' % (return_code, time_spent))\n else:\n # Error when running RPA-Bot task\n audio_analysis_reply += (u'[ Sync ERROR] [ Return Code : %s ] IPA Command failed !\\n[ Time Spent : %s seconds ]' % (return_code, time_spent))\n \n return audio_analysis_reply", "", "itchat.run()", "", "# interupt kernel, then logout\n# itchat.logout() # 安全退出", "恭喜您!已经完成了:\n第六课:交互式虚拟助手的智能应用\nLesson 6: Interactive Conversatioinal Virtual Assistant Applications / Intelligent Process Automations\n\n虚拟员工: 贷款填表申请审批一条龙自动化流程 (Virtual Worker: When Chat-bot meets RPA-bot for mortgage loan application automation) \n虚拟员工: 文字指令交互(Conversational automation using text/message command) \n虚拟员工: 语音指令交互(Conversational automation using speech/voice command) \n虚拟员工: 多种语言交互(Conversational automation with multiple languages)\n\n<img src='../reference/WeChat_SamGu_QR.png' width=80% style=\"float: left;\">\n\n<span style=\"color:blue\">Exercise / Workshop Enhancement</span> Use Cloud AI APIs\n<span style=\"color:blue\">Install the client library</span> for 虚拟员工: 语音指令交互(Conversational automation using speech/voice command)\n[ Hints ]", "# !pip install --upgrade google-cloud-speech\n\n# Imports the Google Cloud client library\n# from google.cloud import speech\n# from google.cloud.speech import enums\n# from google.cloud.speech import types\n\n# !pip install --upgrade google-cloud-texttospeech\n\n# Imports the Google Cloud client library\n# from google.cloud import 
texttospeech", "<span style=\"color:blue\">Exercise / Workshop Enhancement</span> Use Cloud AI APIs\n<span style=\"color:blue\">Install the client library</span> for 虚拟员工: 多种语言交互(Conversational automation with multiple languages)\n[ Hints ]", "# !pip install --upgrade google-cloud-translate\n\n# Imports the Google Cloud client library\n# from google.cloud import translate", "" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
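The fuzzy voice-command matching in the notebook above reduces to a Jaccard similarity over word sets plus a confidence threshold. Below is a minimal, self-contained sketch of that idea for experimenting outside the chat-bot; the `demo_intentions` mapping and the helper names are illustrative assumptions, not the notebook's own `parm_bot_intention_action_fuzzy_match` configuration or its RPA bot files.

```python
# Minimal sketch of Jaccard-based command matching (names and mapping are assumptions).

def jaccard_similarity(string1, string2):
    """Jaccard similarity between the word sets of two strings."""
    words1, words2 = set(string1.lower().split()), set(string2.lower().split())
    union = words1 | words2
    if not union:
        return 0.0
    return len(words1 & words2) / len(union)

def match_intention(command, intentions, confidence_threshold=0.8):
    """Return the bot file of the best-matching intention, or None if not confident enough."""
    best_score, best_bot_file = 0.0, None
    for intention, bot_file in intentions.items():
        score = jaccard_similarity(intention, command)
        if score > best_score:
            best_score, best_bot_file = score, bot_file
    return best_bot_file if best_score >= confidence_threshold else None

# Hypothetical mapping, for illustration only:
demo_intentions = {'voice command apply loan': 'rpa_bot_apply_loan.bat'}

print(jaccard_similarity('run DIDI voice command apply loan', 'voice command apply loan'))  # ~0.667
print(match_intention('run DIDI voice command apply loan', demo_intentions, confidence_threshold=0.5))
```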
georgetown-analytics/yelp-classification
.ipynb_checkpoints/Mongo_Connect-checkpoint.ipynb
mit
[ "__| __|_ )\n _| ( / Amazon Linux AMI\n ___|\\___|___|", "from pymongo import MongoClient\nfrom datetime import datetime\nimport json", "You have to change this variable each time the EC2 server stops or restarts. Please email/text me to get the new IP address.", "ip = '54.236.23.221'", "Create the connection to the MongoDB server. The first argument is the IP we've supplied above and the second is the port (TCP) through which we'll be talking to the EC2 server and the MongoDB instance running inside it.", "conn = MongoClient(ip, 27017)", "Take a look at the databases available in our MongoDB instance", "conn.database_names()\n\ndb = conn.get_database('cleaned_data')", "Print the collection names", "db.collection_names()", "Let's grab a a subset reviews from the academic reviews collection. Suppose we want a random set of 5000, all from after 2010, from each city in our dataset.", "collection = db.get_collection('academic_reviews')\n\n#I cheated and just had a list of all the states. \n#You should try to find a unique list of all the states from mongoDB as an exercise.\nstates = [u'OH', u'NC', u'WI', u'IL', u'AZ', u'NV']", "First, I'm going to take a look at what one of the reviews looks like. I totally could have done something wrong earlier and the output is pure garbage. This is a good sanity check to make.", "collection.find()[0]", "Sweet, this is pretty much what we were expecting. Let's pull out the date field from this entry. We're going to filter on this in a second. Depending on its type, we're going to need to develop different strategies in constructing the logical statements that filter for the date.", "print collection.find()[0]['date']\nprint type(collection.find()[0]['date'])", "Dang it's unicode. Unicode is a pain in the ass to deal with, it's some Python specific format. Let's try converting it to a more usable Python format (datetime). We care about the relative difference between the date variable. Doing this with a string doesn't make sense to a computer so we have to transform it into a quantitative measure of time.", "string_year = collection.find()[0]['date'][0:4]\nyear = datetime.strptime(string_year, '%Y')\nyear", "Note that the datetime above is given as January-1st, 2014. We only gave it a year variable so it just defaults to the first day of that year. That's all good though, we just want stuff after 2010, we just define the beginning of 2010 to be January-1st 2010.", "threshold_year = datetime.strptime('2010', '%Y')", "Running the below code is going to take a little while. But it's essentially doing the following:\n For each review in the reviews database: \n If the review comes from one of our states: \n Check to see if the review was made after 2010: \n If it did, append it to the overall reviews dictionary. \n If it didn't, proceed to the next review.", "#reviews_dict = {}\nnum_reviews = 50000\n\nfor obj in collection.find():\n if obj['state'] == 'IL':\n try:\n if len(reviews_dict[obj['state']]) > num_reviews:\n continue\n except KeyError:\n pass\n if datetime.strptime(obj['date'][0:4], '%Y') >= threshold_year:\n del obj['_id']\n try:\n reviews_dict[obj['state']].append(obj)\n except KeyError:\n reviews_dict[obj['state']]=[obj]\n else:\n pass\n ", "So the new dictionary we created is structured with each state being a key and each entry being a list of reviews. Let's take a look at what Ohio looks like:", "reviews_dict['OH'][0:50]", "It's good practice to save whatever data you're using in a more permanent location if you plan on using it again. 
That way, we don't have to load up the EC2 server and wait for our local machines to run the above filtering process.", "with open('cleaned_reviews_states_2010.json', 'w+') as outfile:\n json.dump(reviews_dict, outfile)", "Congratulations! You just finished downloading and filtering data from MongoDB hosted on an EC2 instance." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
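The notebook above hard-codes the list of states and suggests deriving it from MongoDB as an exercise. A hedged sketch of one way to do that with PyMongo's `distinct` is below; the IP address changes whenever the EC2 instance restarts, so treat the connection details as placeholders.

```python
from pymongo import MongoClient

# Placeholder connection details -- the notebook notes the EC2 IP changes on every restart.
conn = MongoClient('54.236.23.221', 27017)
collection = conn.get_database('cleaned_data').get_collection('academic_reviews')

# distinct() asks the MongoDB server for the unique values of a field,
# so there is no need to scan every review document on the client side.
states = collection.distinct('state')
print(states)
```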
Ttl/scikit-rf
doc/source/examples/networktheory/Properties of Rectangular Waveguides.ipynb
bsd-3-clause
[ "Properties of Rectangular Waveguide\nIntroduction\nThis example demonstrates how to use scikit-rf to calculate some properties of rectangular waveguide. For more information regarding the theoretical basis for these calculations, see the References.\nObject Creation\nThis first section imports necessary modules and creates several RectangularWaveguide objects for some standard waveguide bands.", "\n%matplotlib inline\nimport skrf as rf \nrf.stylely()\n\n# imports \n\nfrom scipy.constants import mil,c\nfrom skrf.media import RectangularWaveguide, Freespace\nfrom skrf.frequency import Frequency\n\nimport matplotlib as mpl\n\n# plot formatting\nmpl.rcParams['lines.linewidth'] = 2\n\n# create frequency objects for standard bands\nf_wr5p1 = Frequency(140,220,1001, 'ghz')\nf_wr3p4 = Frequency(220,330,1001, 'ghz')\nf_wr2p2 = Frequency(330,500,1001, 'ghz')\nf_wr1p5 = Frequency(500,750,1001, 'ghz')\nf_wr1 = Frequency(750,1100,1001, 'ghz')\n\n# create rectangular waveguide objects \nwr5p1 = RectangularWaveguide(f_wr5p1.copy(), a=51*mil, b=25.5*mil, rho = 'au')\nwr3p4 = RectangularWaveguide(f_wr3p4.copy(), a=34*mil, b=17*mil, rho = 'au')\nwr2p2 = RectangularWaveguide(f_wr2p2.copy(), a=22*mil, b=11*mil, rho = 'au')\nwr1p5 = RectangularWaveguide(f_wr1p5.copy(), a=15*mil, b=7.5*mil, rho = 'au')\nwr1 = RectangularWaveguide(f_wr1.copy(), a=10*mil, b=5*mil, rho = 'au')\n\n# add names to waveguide objects for use in plot legends\nwr5p1.name = 'WR-5.1'\nwr3p4.name = 'WR-3.4'\nwr2p2.name = 'WR-2.2'\nwr1p5.name = 'WR-1.5'\nwr1.name = 'WR-1.0'\n\n# create a list to iterate through\nwg_list = [wr5p1, wr3p4,wr2p2,wr1p5,wr1]\n\n# create a freespace object too\nfreespace = Freespace(Frequency(125,1100, 1001))\nfreespace.name = 'Free Space'", "Conductor Loss", "from pylab import * \n\nfor wg in wg_list:\n wg.frequency.plot(rf.np_2_db(wg.alpha), label=wg.name )\n\nlegend() \nxlabel('Frequency(GHz)')\nylabel('Loss (dB/m)')\ntitle('Loss in Rectangular Waveguide (Au)');\nxlim(100,1300)\n\nresistivity_list = linspace(1,10,5)*1e-8 # ohm meter \nfor rho in resistivity_list:\n wg = RectangularWaveguide(f_wr1.copy(), a=10*mil, b=5*mil, \n rho = rho)\n wg.frequency.plot(rf.np_2_db(wg.alpha),label=r'$ \\rho $=%.e$ \\Omega m$'%rho )\n\nlegend() \n#ylim(.0,20)\nxlabel('Frequency(GHz)')\nylabel('Loss (dB/m)')\ntitle('Loss vs. Resistivity in\\nWR-1.0 Rectangular Waveguide');", "Phase Velocity", "for wg in wg_list:\n wg.frequency.plot(100*wg.v_p.real/c, label=wg.name )\n\nlegend() \nylim(50,200)\nxlabel('Frequency(GHz)')\nylabel('Phase Velocity (\\%c)')\ntitle('Phase Velocity in Rectangular Waveguide');\n\nfor wg in wg_list:\n plt.plot(wg.frequency.f_scaled[1:], \n 100/c*diff(wg.frequency.w)/diff(wg.beta), \n label=wg.name )\n \nlegend() \nylim(50,100)\nxlabel('Frequency(GHz)')\nylabel('Group Velocity (\\%c)')\ntitle('Group Velocity in Rectangular Waveguide');", "Propagation Constant", "for wg in wg_list+[freespace]:\n wg.frequency.plot(wg.beta, label=wg.name )\n \nlegend() \nxlabel('Frequency(GHz)')\nylabel('Propagation Constant (rad/m)')\ntitle('Propagation Constant \\nin Rectangular Waveguide');\nsemilogy();", "References\n\n[1] http://www.microwaves101.com/encyclopedia/waveguidemath.cfm\n[2] http://en.wikipedia.org/wiki/Waveguide_(electromagnetism)\n[3] R. F. Harrington, Time-Harmonic Electromagnetic Fields (IEEE Press Series on Electromagnetic Wave Theory). Wiley-IEEE Press, 2001.\n[4] http://www.ece.rutgers.edu/~orfanidi/ewa (see Chapter 9)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
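As a quick sanity check on the frequency bands chosen in the waveguide notebook above, the TE10 cutoff frequency of an air-filled rectangular guide is f_c = c / (2a), which depends only on the broad-wall dimension a. The sketch below computes it for the dimensions used there; it relies only on scipy.constants and deliberately avoids assuming any particular scikit-rf attribute.

```python
from scipy.constants import mil, c

# TE10 cutoff: f_c = c / (2 * a), with a the broad-wall dimension (given here in mils).
bands = [('WR-5.1', 51), ('WR-3.4', 34), ('WR-2.2', 22), ('WR-1.5', 15), ('WR-1.0', 10)]

for name, a_mils in bands:
    f_c = c / (2 * a_mils * mil)
    print('{}: TE10 cutoff = {:.1f} GHz'.format(name, f_c / 1e9))
```

Each operating band in the notebook sits comfortably above the corresponding cutoff (e.g. WR-1.0 cuts off near 590 GHz and is used from 750 GHz upward), which is why the propagation constants plotted there are purely propagating rather than evanescent.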
google/starthinker
colabs/email_cm_to_bigquery.ipynb
apache-2.0
[ "CM360 Report Emailed To BigQuery\nPulls a CM Report from a gMail powered email account into BigQuery.\nLicense\nCopyright 2020 Google LLC,\nLicensed under the Apache License, Version 2.0 (the \"License\");\nyou may not use this file except in compliance with the License.\nYou may obtain a copy of the License at\nhttps://www.apache.org/licenses/LICENSE-2.0\nUnless required by applicable law or agreed to in writing, software\ndistributed under the License is distributed on an \"AS IS\" BASIS,\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\nSee the License for the specific language governing permissions and\nlimitations under the License.\nDisclaimer\nThis is not an officially supported Google product. It is a reference implementation. There is absolutely NO WARRANTY provided for using this code. The code is Apache Licensed and CAN BE fully modified, white labeled, and disassembled by your team.\nThis code generated (see starthinker/scripts for possible source):\n - Command: \"python starthinker_ui/manage.py colab\"\n - Command: \"python starthinker/tools/colab.py [JSON RECIPE]\"\n1. Install Dependencies\nFirst install the libraries needed to execute recipes, this only needs to be done once, then click play.", "!pip install git+https://github.com/google/starthinker\n", "2. Set Configuration\nThis code is required to initialize the project. Fill in required fields and press play.\n\nIf the recipe uses a Google Cloud Project:\n\nSet the configuration project value to the project identifier from these instructions.\n\n\nIf the recipe has auth set to user:\n\nIf you have user credentials:\nSet the configuration user value to your user credentials JSON.\n\n\n\nIf you DO NOT have user credentials:\n\nSet the configuration client value to downloaded client credentials.\n\n\n\nIf the recipe has auth set to service:\n\nSet the configuration service value to downloaded service credentials.", "from starthinker.util.configuration import Configuration\n\n\nCONFIG = Configuration(\n project=\"\",\n client={},\n service={},\n user=\"/content/user.json\",\n verbose=True\n)\n\n", "3. Enter CM360 Report Emailed To BigQuery Recipe Parameters\n\nThe person executing this recipe must be the recipient of the email.\nSchedule a CM report to be sent to ****.\nOr set up a redirect rule to forward a report you already receive.\nThe report must be sent as an attachment.\nEnsure this recipe runs after the report is email daily.\nGive a regular expression to match the email subject.\nConfigure the destination in BigQuery to write the data.\nModify the values below for your use case, can be done multiple times, then click play.", "FIELDS = {\n 'auth_read':'user', # Credentials used for reading data.\n 'email':'', # Email address report was sent to.\n 'subject':'.*', # Regular expression to match subject. Double escape backslashes.\n 'dataset':'', # Existing dataset in BigQuery.\n 'table':'', # Name of table to be written to.\n 'is_incremental_load':False, # Append report data to table based on date column, de-duplicates.\n}\n\nprint(\"Parameters Set To: %s\" % FIELDS)\n", "4. 
Execute CM360 Report Emailed To BigQuery\nThis does NOT need to be modified unless you are changing the recipe, click play.", "from starthinker.util.configuration import execute\nfrom starthinker.util.recipe import json_set_fields\n\nTASKS = [\n {\n 'email':{\n 'auth':{'field':{'name':'auth_read','kind':'authentication','order':1,'default':'user','description':'Credentials used for reading data.'}},\n 'read':{\n 'from':'noreply-cm@google.com',\n 'to':{'field':{'name':'email','kind':'string','order':1,'default':'','description':'Email address report was sent to.'}},\n 'subject':{'field':{'name':'subject','kind':'string','order':2,'default':'.*','description':'Regular expression to match subject. Double escape backslashes.'}},\n 'attachment':'.*'\n },\n 'write':{\n 'bigquery':{\n 'dataset':{'field':{'name':'dataset','kind':'string','order':3,'default':'','description':'Existing dataset in BigQuery.'}},\n 'table':{'field':{'name':'table','kind':'string','order':4,'default':'','description':'Name of table to be written to.'}},\n 'header':True,\n 'is_incremental_load':{'field':{'name':'is_incremental_load','kind':'boolean','order':6,'default':False,'description':'Append report data to table based on date column, de-duplicates.'}}\n }\n }\n }\n }\n]\n\njson_set_fields(TASKS, FIELDS)\n\nexecute(CONFIG, TASKS, force=True)\n" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
ngcm/summer-academy-2017-basics
basics_B/ObjectOriented/solutions/Classes for Basics B.ipynb
mit
[ "import matplotlib.pyplot as plt\nimport numpy as np\n%matplotlib inline", "<font color='mediumblue'> What are classes?\n\nA way of organising your code\nData is inherently linked to the things you can do with it.\n\nPros\n\nCan do everything you can do without classes, but idea is to make it easier\nClasses encourage code reuse through a concept called \"inheritance\" - we will discuss later.\n\nCons\n\nCan make your code more complicated, and without careful thinking, harder to maintain.\nMore work for the developer.\n\n<font color='mediumblue'> Start by defining some terminology - Classes vs Objects vs Instances\n\nOften used interchangably but they are different concepts.\nA Class is like a template - you could consider the class \"Car\"\nAn object is a particular occurence of a class - so, for example, you could have \"Ford Mondeo\", \"Vauxhall Astra\", \"Lamborghini Gallardo\" be objects of type \"Car\".\nAn instance is a unique single object.\n\n<font color='mediumblue'> Where are classes used in Python? Everywhere!\nYou've been using classes all of the time, without even knowing it. Everything in Python is an object. You have some data (number, text, etc.)with some methods (or functions) which are internal to the object, and which you can use on that data. Lets look at a few examples...", "a = 10.1\n\ntype(a)", "How can I see what methods an object of type float has?", "print(dir(a)) # Show all of the methods of a\n\na.is_integer()", "<font color='midnightblue'> Aside - What do all those underscores mean?\nThey're hidden methods - we'll talk more about these later.\n<font color='mediumblue'> Creating a class\nDefine some key things:\n* self - 'self' is a special type of variable which can be used inside the class to refer to itself.\n* Methods - a function which is part of a class, and which have access to data held by a class.\n* A constructor - a special method which is called when you create an instance of a class. In Python this function must be called \"__init__\"\n* A destructor - a special method which is called when you destroy an instance of a class.\nAside: If you're a C++/Java programmer, 'self' is exactly equivalent to 'this', but functions must have self as an argument, as it is passed in implicitly as the first argument of any method call in Python.", "# Create a class by using class keyword followed by name.\nclass MyClass:\n # The 'self' variable ALWAYS needs to be the first variable given to any class method.\n def __init__(self, message):\n # Here we create a new variable inside \"self\" called \"mess\" and save the argument \"message\"\n # passed from the constructor to it.\n self.mess = message\n \n def say(self):\n print(self.mess)\n \n \n # Don't normally need to write a destructor - one is created by Python automatically. However we do it here\n # just to show you that it can be done:\n def __del__(self):\n print(\"Deleting object of type MyClass\")", "<font color='mediumblue'> Using the class\nUse the same syntax as we use to call a function, BUT the arguments get passed in to the \"__init__\" function. Note that you ignore the self object, as Python sorts this out.", "a = MyClass(\"Hello\")\nprint(a.mess)", "How do I access data stored in the class? 
with the \".\", followed by the name.", "# But, we also defined a method called \"say\" which does the same thing:\na.say()", "What happens though if we reuse the variable name 'a'?\nAside:\n* Your computer has Random Access Memory (RAM) which is used to store information.\n* Whenever, in a programming language, you tell the language to store something, you effectively create a 'box' of memory to put those values in.\n* The location of the specific 'box' is known as a 'memory address'\n* You can see the memory address of a Python object quite easily:", "print(a)\n", "So, what happens if we either choose to store something else under the name 'a', or tell Python to delete it?", "del a\na = MyClass('Hello')\na = 2", "Why bother? This can be achieved without classes very easily:", "mess = \"Hello\"\n\ndef say(mess):\n print(mess)\n \nsay(mess)", "Need a better example!\nHow about a Simulation class?\n* Write once, but can take different parameters.\n* Can include data analysis methods as well\n<font color='mediumblue'> Consider a 1-D box of some length:\nWhat information does it need to know about itself?\n* How big is the box?", "class Box:\n def __init__(self, length):\n self.length = length", "What we're going to try and do is add particles to the box, which have some properties:\n* An initial position.\n* An initial velocity\n$r(t + \\delta t) \\approx r(t) + v(t)\\delta t$", "class Particle:\n def __init__(self, r0, v0):\n \"\"\"\n r0 = initial position\n v0 = initial speed\n \"\"\"\n self.r = r0\n self.v = v0\n \n def step(self, dt, L):\n \"\"\"\n Move the particle\n dt = timestep\n L = length of the containing box\n \"\"\"\n self.r = self.r + self.v * dt\n \n if self.r >= L:\n self.r -= L\n elif self.r < 0:\n self.r += L\n ", "Let's just check this: if a Particle is in a box of length 10, has r0 = 0, v0 = 5, then after 1 step of length 3, the position should be at position 5:", "p = Particle(0, 5)\np.step(3, 10)\nprint(p.r)", "Let's add a place to store the particles to the box class, and add a method to add particles to the box:", "class Box:\n def __init__(self, length):\n self.length = length\n self.particles = []\n \n def add_particle(self, particle):\n self.particles.append(particle)", "<font color='mediumblue'> Now let's get you to do something...\nTasks (30-40 minutes):\n1) Add a method that calculates the average position of Particles in the box (Hint: you might have to think about what to do when there are no particles!)\n2) Add a method that makes all of the particles step forwards, and keep track of how much time has passed in the box class.\n3) Add a method which plots the current position of the particles in the box.\n4) Write a method that writes the current positions and velocities to a CSV file.\n5) Write a method that can load a CSV file of positions and velocities, create particles with these and then add them to the Box list of particles. 
(Hint: Look up the documentation for the module 'csv')", "import csv\n\nclass Box:\n def __init__(self, length):\n self.length = length\n self.particles = []\n self.t = 0\n \n def add_particle(self, particle):\n self.particles.append(particle)\n \n def step(self, dt):\n for particle in self.particles:\n particle.step(dt, self.length)\n \n def write(self, filename):\n f = open(filename, 'w')\n for particle in self.particles:\n f.write('{},{}\\n'.format(particle.r, particle.v))\n f.close()\n \n def plot(self):\n for particle in self.particles:\n plt.scatter(particle.r, 0)\n \n def load(self, filename):\n f = open(filename, 'r')\n csvfile = csv.reader(f)\n for position, velocity in csvfile:\n p = Particle(position, velocity)\n self.add_particle(p)\n \n\nb = Box(10)\n\n\nfor i in range(10):\n p = Particle(i/2, i/3)\n b.add_particle(p)\n \nb.write('test.csv')\n\n!cat test.csv", "<font color='mediumblue'> Class Properties\n\nProperties can be used to do interesting things\nSpecial functions as part of a class that we mark with a 'decorator' - '@property'\nLet's adjust the class Particle we used to make its data members a property of the class. We also need to write a 'setter' method to set the data members.", "class Particle:\n def __init__(self, r0, v0):\n \"\"\"\n r0 = initial position\n v0 = initial speed\n \"\"\"\n self._r = r0\n self._v = v0\n \n def step(self, dt, L):\n \"\"\"\n Move the particle\n dt = timestep\n L = length of the containing box\n \"\"\"\n self._r = self._r + self._v * dt\n \n if self._r >= L:\n self._r -= L\n elif self._r < 0:\n self._r += L\n \n @property\n def r(self):\n return self._r\n \n @r.setter\n def r(self, value):\n self._r = value\n \n @property\n def v(self):\n return self._v\n \n @v.setter\n def v(self, value):\n self._v = value", "<font color='midnightblue'> Why bother? It looks the same when we use it!\n\nWell known in programming - 'an interface is a contract'\nYou might want to at some point rewrite a large portion of the underlying data - how it is stored for example.\nIf you do this without using properties to access the data, you then need to go through all code that uses this class and change it to use the new variable names.\n\n<font color='mediumblue'> Inheritance\n\nLast part of the course on Classes, but also one of the main reasons for using classes!\nInheritance allows you to reuse parts of the code, but change some of the methods. Let's see how it might be useful...", "class SlowParticle(Particle):\n def __init__(self, r0, v0, slowing_factor):\n Particle.__init__(self, r0, v0)\n self.factor = slowing_factor\n \n def step(self, dt, L):\n \"\"\"\n Move the particle, but change so that if the particle bounces off of a wall,\n it slows down by the slowing factor\n dt = timestep\n L = length of the containing box\n \"\"\"\n self._r = self._r + self._v * dt\n \n if self._r >= L:\n self._r -= L\n self._v /= self.factor\n elif self._r < 0:\n self._r += L\n self._v /= self.factor", "Here we have inherited most of the class Particle, and just changed the method 'step' to do something differently. 
Because we kept the properties the same, we can use this class everywhere that we could use Particle - our Box class can take a mixture of Particles and SlowParticles\n\n<font color='mediumblue'> Magic Methods:\nRemember earlier, when we did:", "a = 1.0\nprint(dir(a))", "Notice that there is a method \"__add__\" - we can define these special methods to allow our class to do things that you can ordinarily do with built in types.", "class Box:\n def __init__(self, length):\n self.length = length\n self.particles = []\n self.t = 0\n \n def __add__(self, other):\n \n if self.length == other.length:\n b = Box(self.length)\n \n for p in self.particles:\n b.add_particle(p)\n \n for p in other.particles:\n b.add_particle(p)\n \n return b\n else:\n return ValueError('To add two boxes they must be of the same length')\n \n def mean_position(self):\n l = np.sum([p.r for p in self.particles])/len(self.particles)\n return l\n \n def add_particle(self, particle):\n self.particles.append(particle)\n \n def step(self, dt):\n for particle in self.particles:\n particle.step(dt, self.length)\n \n def write(self, filename):\n f = open(filename, 'w')\n for particle in self.particles:\n f.write('{},{}\\n'.format(particle.r, particle.v))\n f.close()\n \n def plot(self):\n for particle in self.particles:\n plt.scatter(particle.r, 0)\n \n def load(self, filename):\n f = open(filename, 'r')\n csvfile = csv.reader(f)\n for position, velocity in csvfile:\n p = Particle(position, velocity)\n self.add_particle(p)\n \n def __repr__(self):\n if len(self.particles) == 1:\n return 'Box containing 1 particle'\n else:\n return 'Box containing {} particles'.format(len(self.particles))", "Now we've created an 'add' method, we can, create two boxes and add these together!", "a = Box(10)\na.add_particle(Particle(10, 10))\nb = Box(10)\nb.add_particle(Particle(15, 10))\nc = a + b\nprint(a)\nprint(b)\nprint(c)", "Looks good! But hang on...", "a.mean_position(), b.mean_position(), c.mean_position()\n\na.step(0.5)\n\na.mean_position(), b.mean_position(), c.mean_position()", "Why has the mean position of particles in Box C changed? Look at the memory address of the particles:", "a.particles, c.particles", "Boxes are pointing to the SAME particles!\nIf we don't want this to happen, we need to write a 'copy' constructor for the class - a function which knows how to create an identical copy of the particle! 
\nWe can do this by using the 'deepcopy' function in the 'copy' module, and redefine the particle and slow particle classes:", "import copy\n\nclass Particle:\n def __init__(self, r0, v0):\n \"\"\"\n r0 = initial position\n v0 = initial speed\n \"\"\"\n self.r = r0\n self.v = v0\n \n def step(self, dt, L):\n \"\"\"\n Move the particle\n dt = timestep\n L = length of the containing box\n \"\"\"\n self.r = self.r + self.v * dt\n \n if self.r >= L:\n self.r -= L\n elif self.r < 0:\n self.r += L\n \n def copy(self):\n return copy.deepcopy(self)", "Then, we should change the Box class's 'add' method, to use this copy operation rather than just append the child particles of the existing boxes:", "class Box:\n def __init__(self, length):\n self.length = length\n self.particles = []\n self.t = 0\n \n def __add__(self, other):\n \n if self.length == other.length:\n b = Box(self.length)\n \n for p in self.particles:\n b.add_particle(p)\n \n for p in other.particles:\n b.add_particle(p)\n \n return b\n else:\n return ValueError('To add two boxes they must be of the same length')\n \n def mean_position(self):\n l = np.sum([p.r for p in self.particles])/len(self.particles)\n return l\n \n def add_particle(self, particle):\n self.particles.append(particle.copy())\n \n def step(self, dt):\n for particle in self.particles:\n particle.step(dt, self.length)\n \n def write(self, filename):\n f = open(filename, 'w')\n for particle in self.particles:\n f.write('{},{}\\n'.format(particle.r, particle.v))\n f.close()\n \n def plot(self):\n for particle in self.particles:\n plt.scatter(particle.r, 0)\n \n def load(self, filename):\n f = open(filename, 'r')\n csvfile = csv.reader(f)\n for position, velocity, ptype in csvfile:\n p = Particle(position, velocity)\n self.add_particle(p)\n \n def __repr__(self):\n if len(self.particles) == 1:\n return 'Box containing 1 particle'\n else:\n return 'Box containing {} particles'.format(len(self.particles))\n\na = Box(10)\na.add_particle(Particle(10, 10))\nb = Box(10)\nb.add_particle(Particle(15, 10))\nc = a + b\nprint(a)\nprint(b)\nprint(c)\n" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
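The aliasing issue demonstrated in the Classes notebook above (two boxes silently sharing the same Particle objects) is general Python behaviour: appending an object to two lists stores two references to one object. Below is a small, self-contained illustration of the difference between sharing a reference and taking a deep copy; it uses only the standard library and a hypothetical `Thing` class, independent of the Particle/Box code.

```python
import copy

class Thing:
    def __init__(self, value):
        self.value = value

original = Thing(1)

alias = original                      # another name for the *same* object in memory
duplicate = copy.deepcopy(original)   # an independent copy with its own memory

original.value = 99
print(alias.value)      # 99 -- the alias sees the change
print(duplicate.value)  # 1  -- the deep copy is unaffected
```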
fluxcapacitor/source.ml
jupyterhub.ml/notebooks/train_deploy/python3/python3_zscore/04_PredictModel.ipynb
apache-2.0
[ "Predict with Model\nView Config", "%%bash \n\npio init-model \\\n --model-server-url http://prediction-python3.community.pipeline.io \\\n --model-type python3 \\\n --model-namespace default \\\n --model-name python3_zscore \\\n --model-version v1 \\\n --model-path .", "Predict with Model (CLI)", "%%bash\n\npio predict \\\n --model-test-request-path ./data/test_request.json", "Predict with Model under Mini-Load (CLI)\nThis is a mini load test to provide instant feedback on relative performance.", "%%bash\n\npio predict_many \\\n --model-test-request-path ./data/test_request.json \\\n --num-iterations 5", "Predict with Model (REST)\nSetup Prediction Inputs", "import requests\n\nmodel_type = 'python3'\nmodel_namespace = 'default'\nmodel_name = 'python3_zscore'\nmodel_version = 'v1'\n\ndeploy_url = 'http://prediction-%s.community.pipeline.io/api/v1/model/predict/%s/%s/%s/%s' % (model_type, model_type, model_namespace, model_name, model_version)\nprint(deploy_url)\nwith open('./data/test_request.json', 'rb') as fh:\n model_input_binary = fh.read()\n\nresponse = requests.post(url=deploy_url,\n data=model_input_binary,\n timeout=30)\n\nprint(\"Success!\\n\\n%s\" % response.text)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
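When calling the prediction endpoint shown in the notebook above, it can help to check the HTTP status before trying to use the body. The sketch below is generic `requests` usage; the URL is the one the notebook builds from its model parameters and may no longer be live, and the payload path is the notebook's own test file.

```python
import requests

# Placeholders -- in the notebook these come from deploy_url and the test_request.json payload.
deploy_url = 'http://prediction-python3.community.pipeline.io/api/v1/model/predict/python3/default/python3_zscore/v1'
with open('./data/test_request.json', 'rb') as fh:
    model_input_binary = fh.read()

response = requests.post(url=deploy_url, data=model_input_binary, timeout=30)
response.raise_for_status()   # fail loudly on a non-2xx status instead of printing an error page

try:
    print(response.json())    # parse JSON if the server returned it
except ValueError:
    print(response.text)      # otherwise fall back to the raw body
```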
thomasantony/CarND-Projects
Exercises/Term1/TensorFlow-Tutorials/04_Save_Restore.ipynb
mit
[ "TensorFlow Tutorial #04\nSave & Restore\nby Magnus Erik Hvass Pedersen\n/ GitHub / Videos on YouTube\nIntroduction\nThis tutorial demonstrates how to save and restore the variables of a Neural Network. During optimization we save the variables of the neural network whenever its classification accuracy has improved on the validation-set. The optimization is aborted when there has been no improvement for 1000 iterations. We then reload the variables that performed best on the validation-set.\nThis strategy is called Early Stopping. It is used to avoid overfitting of the neural network. This occurs when the neural network is being trained for too long so it starts to learn the noise of the training-set, which causes the neural network to mis-classify new images.\nOverfitting is not really a problem for the neural network used in this tutorial on the MNIST data-set for recognizing hand-written digits. But this tutorial demonstrates the idea of Early Stopping.\nThis builds on the previous tutorials, so you should have a basic understanding of TensorFlow and the add-on package Pretty Tensor. A lot of the source-code and text in this tutorial is similar to the previous tutorials and may be read quickly if you have recently read the previous tutorials.\nFlowchart\nThe following chart shows roughly how the data flows in the Convolutional Neural Network that is implemented below. The network has two convolutional layers and two fully-connected layers, with the last layer being used for the final classification of the input images. See Tutorial #02 for a more detailed description of this network and convolution in general.", "from IPython.display import Image\nImage('images/02_network_flowchart.png')", "Imports", "%matplotlib inline\nimport matplotlib.pyplot as plt\nimport tensorflow as tf\nimport numpy as np\nfrom sklearn.metrics import confusion_matrix\nimport time\nfrom datetime import timedelta\nimport math\nimport os\n\n# Use PrettyTensor to simplify Neural Network construction.\nimport prettytensor as pt", "This was developed using Python 3.5.2 (Anaconda) and TensorFlow version:", "tf.__version__", "Load Data\nThe MNIST data-set is about 12 MB and will be downloaded automatically if it is not located in the given path.", "from tensorflow.examples.tutorials.mnist import input_data\ndata = input_data.read_data_sets('data/MNIST/', one_hot=True)", "The MNIST data-set has now been loaded and consists of 70,000 images and associated labels (i.e. classifications of the images). The data-set is split into 3 mutually exclusive sub-sets. We will only use the training and test-sets in this tutorial.", "print(\"Size of:\")\nprint(\"- Training-set:\\t\\t{}\".format(len(data.train.labels)))\nprint(\"- Test-set:\\t\\t{}\".format(len(data.test.labels)))\nprint(\"- Validation-set:\\t{}\".format(len(data.validation.labels)))", "The class-labels are One-Hot encoded, which means that each label is a vector with 10 elements, all of which are zero except for one element. The index of this one element is the class-number, that is, the digit shown in the associated image. We also need the class-numbers as integers for the test- and validation-sets, so we calculate them now.", "data.test.cls = np.argmax(data.test.labels, axis=1)\ndata.validation.cls = np.argmax(data.validation.labels, axis=1)", "Data Dimensions\nThe data dimensions are used in several places in the source-code below. 
They are defined once so we can use these variables instead of numbers throughout the source-code below.", "# We know that MNIST images are 28 pixels in each dimension.\nimg_size = 28\n\n# Images are stored in one-dimensional arrays of this length.\nimg_size_flat = img_size * img_size\n\n# Tuple with height and width of images used to reshape arrays.\nimg_shape = (img_size, img_size)\n\n# Number of colour channels for the images: 1 channel for gray-scale.\nnum_channels = 1\n\n# Number of classes, one class for each of 10 digits.\nnum_classes = 10", "Helper-function for plotting images\nFunction used to plot 9 images in a 3x3 grid, and writing the true and predicted classes below each image.", "def plot_images(images, cls_true, cls_pred=None):\n assert len(images) == len(cls_true) == 9\n \n # Create figure with 3x3 sub-plots.\n fig, axes = plt.subplots(3, 3)\n fig.subplots_adjust(hspace=0.3, wspace=0.3)\n\n for i, ax in enumerate(axes.flat):\n # Plot image.\n ax.imshow(images[i].reshape(img_shape), cmap='binary')\n\n # Show true and predicted classes.\n if cls_pred is None:\n xlabel = \"True: {0}\".format(cls_true[i])\n else:\n xlabel = \"True: {0}, Pred: {1}\".format(cls_true[i], cls_pred[i])\n\n # Show the classes as the label on the x-axis.\n ax.set_xlabel(xlabel)\n \n # Remove ticks from the plot.\n ax.set_xticks([])\n ax.set_yticks([])\n \n # Ensure the plot is shown correctly with multiple plots\n # in a single Notebook cell.\n plt.show()", "Plot a few images to see if data is correct", "# Get the first images from the test-set.\nimages = data.test.images[0:9]\n\n# Get the true classes for those images.\ncls_true = data.test.cls[0:9]\n\n# Plot the images and labels using our helper-function above.\nplot_images(images=images, cls_true=cls_true)", "TensorFlow Graph\nThe entire purpose of TensorFlow is to have a so-called computational graph that can be executed much more efficiently than if the same calculations were to be performed directly in Python. TensorFlow can be more efficient than NumPy because TensorFlow knows the entire computation graph that must be executed, while NumPy only knows the computation of a single mathematical operation at a time.\nTensorFlow can also automatically calculate the gradients that are needed to optimize the variables of the graph so as to make the model perform better. This is because the graph is a combination of simple mathematical expressions so the gradient of the entire graph can be calculated using the chain-rule for derivatives.\nTensorFlow can also take advantage of multi-core CPUs as well as GPUs - and Google has even built special chips just for TensorFlow which are called TPUs (Tensor Processing Units) and are even faster than GPUs.\nA TensorFlow graph consists of the following parts which will be detailed below:\n\nPlaceholder variables used for inputting data to the graph.\nVariables that are going to be optimized so as to make the convolutional network perform better.\nThe mathematical formulas for the convolutional network.\nA loss measure that can be used to guide the optimization of the variables.\nAn optimization method which updates the variables.\n\nIn addition, the TensorFlow graph may also contain various debugging statements e.g. for logging data to be displayed using TensorBoard, which is not covered in this tutorial.\nPlaceholder variables\nPlaceholder variables serve as the input to the TensorFlow computational graph that we may change each time we execute the graph. 
We call this feeding the placeholder variables and it is demonstrated further below.\nFirst we define the placeholder variable for the input images. This allows us to change the images that are input to the TensorFlow graph. This is a so-called tensor, which just means that it is a multi-dimensional array. The data-type is set to float32 and the shape is set to [None, img_size_flat], where None means that the tensor may hold an arbitrary number of images with each image being a vector of length img_size_flat.", "x = tf.placeholder(tf.float32, shape=[None, img_size_flat], name='x')", "The convolutional layers expect x to be encoded as a 4-dim tensor so we have to reshape it so its shape is instead [num_images, img_height, img_width, num_channels]. Note that img_height == img_width == img_size and num_images can be inferred automatically by using -1 for the size of the first dimension. So the reshape operation is:", "x_image = tf.reshape(x, [-1, img_size, img_size, num_channels])", "Next we have the placeholder variable for the true labels associated with the images that were input in the placeholder variable x. The shape of this placeholder variable is [None, num_classes] which means it may hold an arbitrary number of labels and each label is a vector of length num_classes which is 10 in this case.", "y_true = tf.placeholder(tf.float32, shape=[None, 10], name='y_true')", "We could also have a placeholder variable for the class-number, but we will instead calculate it using argmax. Note that this is a TensorFlow operator so nothing is calculated at this point.", "y_true_cls = tf.argmax(y_true, dimension=1)", "Neural Network\nThis section implements the Convolutional Neural Network using Pretty Tensor, which is much simpler than a direct implementation in TensorFlow, see Tutorial #03.\nThe basic idea is to wrap the input tensor x_image in a Pretty Tensor object which has helper-functions for adding new computational layers so as to create an entire neural network. Pretty Tensor takes care of the variable allocation, etc.", "x_pretty = pt.wrap(x_image)", "Now that we have wrapped the input image in a Pretty Tensor object, we can add the convolutional and fully-connected layers in just a few lines of source-code.\nNote that pt.defaults_scope(activation_fn=tf.nn.relu) makes activation_fn=tf.nn.relu an argument for each of the layers constructed inside the with-block, so that Rectified Linear Units (ReLU) are used for each of these layers. The defaults_scope makes it easy to change arguments for all of the layers.", "with pt.defaults_scope(activation_fn=tf.nn.relu):\n y_pred, loss = x_pretty.\\\n conv2d(kernel=5, depth=16, name='layer_conv1').\\\n max_pool(kernel=2, stride=2).\\\n conv2d(kernel=5, depth=36, name='layer_conv2').\\\n max_pool(kernel=2, stride=2).\\\n flatten().\\\n fully_connected(size=128, name='layer_fc1').\\\n softmax_classifier(class_count=10, labels=y_true)", "Getting the Weights\nFurther below, we want to plot the weights of the neural network. When the network is constructed using Pretty Tensor, all the variables of the layers are created indirectly by Pretty Tensor. We therefore have to retrieve the variables from TensorFlow.\nWe used the names layer_conv1 and layer_conv2 for the two convolutional layers. These are also called variable scopes (not to be confused with defaults_scope as described above). 
Pretty Tensor automatically gives names to the variables it creates for each layer, so we can retrieve the weights for a layer using the layer's scope-name and the variable-name.\nThe implementation is somewhat awkward because we have to use the TensorFlow function get_variable() which was designed for another purpose; either creating a new variable or re-using an existing variable. The easiest thing is to make the following helper-function.", "def get_weights_variable(layer_name):\n # Retrieve an existing variable named 'weights' in the scope\n # with the given layer_name.\n # This is awkward because the TensorFlow function was\n # really intended for another purpose.\n\n with tf.variable_scope(layer_name, reuse=True):\n variable = tf.get_variable('weights')\n\n return variable", "Using this helper-function we can retrieve the variables. These are TensorFlow objects. In order to get the contents of the variables, you must do something like: contents = session.run(weights_conv1) as demonstrated further below.", "weights_conv1 = get_weights_variable(layer_name='layer_conv1')\nweights_conv2 = get_weights_variable(layer_name='layer_conv2')", "Optimization Method\nPretty Tensor gave us the predicted class-label (y_pred) as well as a loss-measure that must be minimized, so as to improve the ability of the neural network to classify the input images.\nIt is unclear from the documentation for Pretty Tensor whether the loss-measure is cross-entropy or something else. But we now use the AdamOptimizer to minimize the loss.\nNote that optimization is not performed at this point. In fact, nothing is calculated at all, we just add the optimizer-object to the TensorFlow graph for later execution.", "optimizer = tf.train.AdamOptimizer(learning_rate=1e-4).minimize(loss)", "Performance Measures\nWe need a few more performance measures to display the progress to the user.\nFirst we calculate the predicted class number from the output of the neural network y_pred, which is a vector with 10 elements. The class number is the index of the largest element.", "y_pred_cls = tf.argmax(y_pred, dimension=1)", "Then we create a vector of booleans telling us whether the predicted class equals the true class of each image.", "correct_prediction = tf.equal(y_pred_cls, y_true_cls)", "The classification accuracy is calculated by first type-casting the vector of booleans to floats, so that False becomes 0 and True becomes 1, and then taking the average of these numbers.", "accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))", "Saver\nIn order to save the variables of the neural network, we now create a so-called Saver-object which is used for storing and retrieving all the variables of the TensorFlow graph. 
Nothing is actually saved at this point, which will be done further below in the optimize()-function.", "saver = tf.train.Saver()", "The saved files are often called checkpoints because they may be written at regular intervals during optimization.\nThis is the directory used for saving and retrieving the data.", "save_dir = 'checkpoints/'", "Create the directory if it does not exist.", "if not os.path.exists(save_dir):\n os.makedirs(save_dir)", "This is the path for the checkpoint-file.", "save_path = save_dir + 'best_validation'", "TensorFlow Run\nCreate TensorFlow session\nOnce the TensorFlow graph has been created, we have to create a TensorFlow session which is used to execute the graph.", "session = tf.Session()", "Initialize variables\nThe variables for weights and biases must be initialized before we start optimizing them. We make a simple wrapper-function for this, because we will call it again below.", "def init_variables():\n session.run(tf.initialize_all_variables())", "Execute the function now to initialize the variables.", "init_variables()", "Helper-function to perform optimization iterations\nThere are 55,000 images in the training-set. It takes a long time to calculate the gradient of the model using all these images. We therefore only use a small batch of images in each iteration of the optimizer.\nIf your computer crashes or becomes very slow because you run out of RAM, then you may try and lower this number, but you may then need to perform more optimization iterations.", "train_batch_size = 64", "The classification accuracy for the validation-set will be calculated for every 100 iterations of the optimization function below. The optimization will be stopped if the validation accuracy has not been improved in 1000 iterations. We need a few variables to keep track of this.", "# Best validation accuracy seen so far.\nbest_validation_accuracy = 0.0\n\n# Iteration-number for last improvement to validation accuracy.\nlast_improvement = 0\n\n# Stop optimization if no improvement found in this many iterations.\nrequire_improvement = 1000", "Function for performing a number of optimization iterations so as to gradually improve the variables of the network layers. In each iteration, a new batch of data is selected from the training-set and then TensorFlow executes the optimizer using those training samples. 
The progress is printed every 100 iterations where the validation accuracy is also calculated and saved to a file if it is an improvement.", "# Counter for total number of iterations performed so far.\ntotal_iterations = 0\n\ndef optimize(num_iterations):\n # Ensure we update the global variables rather than local copies.\n global total_iterations\n global best_validation_accuracy\n global last_improvement\n\n # Start-time used for printing time-usage below.\n start_time = time.time()\n\n for i in range(num_iterations):\n\n # Increase the total number of iterations performed.\n # It is easier to update it in each iteration because\n # we need this number several times in the following.\n total_iterations += 1\n\n # Get a batch of training examples.\n # x_batch now holds a batch of images and\n # y_true_batch are the true labels for those images.\n x_batch, y_true_batch = data.train.next_batch(train_batch_size)\n\n # Put the batch into a dict with the proper names\n # for placeholder variables in the TensorFlow graph.\n feed_dict_train = {x: x_batch,\n y_true: y_true_batch}\n\n # Run the optimizer using this batch of training data.\n # TensorFlow assigns the variables in feed_dict_train\n # to the placeholder variables and then runs the optimizer.\n session.run(optimizer, feed_dict=feed_dict_train)\n\n # Print status every 100 iterations and after last iteration.\n if (total_iterations % 100 == 0) or (i == (num_iterations - 1)):\n\n # Calculate the accuracy on the training-batch.\n acc_train = session.run(accuracy, feed_dict=feed_dict_train)\n\n # Calculate the accuracy on the validation-set.\n # The function returns 2 values but we only need the first.\n acc_validation, _ = validation_accuracy()\n\n # If validation accuracy is an improvement over best-known.\n if acc_validation > best_validation_accuracy:\n # Update the best-known validation accuracy.\n best_validation_accuracy = acc_validation\n \n # Set the iteration for the last improvement to current.\n last_improvement = total_iterations\n\n # Save all variables of the TensorFlow graph to file.\n saver.save(sess=session, save_path=save_path)\n\n # A string to be printed below, shows improvement found.\n improved_str = '*'\n else:\n # An empty string to be printed below.\n # Shows that no improvement was found.\n improved_str = ''\n \n # Status-message for printing.\n msg = \"Iter: {0:>6}, Train-Batch Accuracy: {1:>6.1%}, Validation Acc: {2:>6.1%} {3}\"\n\n # Print it.\n print(msg.format(i + 1, acc_train, acc_validation, improved_str))\n\n # If no improvement found in the required number of iterations.\n if total_iterations - last_improvement > require_improvement:\n print(\"No improvement found in a while, stopping optimization.\")\n\n # Break out from the for-loop.\n break\n\n # Ending time.\n end_time = time.time()\n\n # Difference between start and end-times.\n time_dif = end_time - start_time\n\n # Print the time-usage.\n print(\"Time usage: \" + str(timedelta(seconds=int(round(time_dif)))))", "Helper-function to plot example errors\nFunction for plotting examples of images from the test-set that have been mis-classified.", "def plot_example_errors(cls_pred, correct):\n # This function is called from print_test_accuracy() below.\n\n # cls_pred is an array of the predicted class-number for\n # all images in the test-set.\n\n # correct is a boolean array whether the predicted class\n # is equal to the true class for each image in the test-set.\n\n # Negate the boolean array.\n incorrect = (correct == False)\n \n # Get the images 
from the test-set that have been\n # incorrectly classified.\n images = data.test.images[incorrect]\n \n # Get the predicted classes for those images.\n cls_pred = cls_pred[incorrect]\n\n # Get the true classes for those images.\n cls_true = data.test.cls[incorrect]\n \n # Plot the first 9 images.\n plot_images(images=images[0:9],\n cls_true=cls_true[0:9],\n cls_pred=cls_pred[0:9])", "Helper-function to plot confusion matrix", "def plot_confusion_matrix(cls_pred):\n # This is called from print_test_accuracy() below.\n\n # cls_pred is an array of the predicted class-number for\n # all images in the test-set.\n\n # Get the true classifications for the test-set.\n cls_true = data.test.cls\n \n # Get the confusion matrix using sklearn.\n cm = confusion_matrix(y_true=cls_true,\n y_pred=cls_pred)\n\n # Print the confusion matrix as text.\n print(cm)\n\n # Plot the confusion matrix as an image.\n plt.matshow(cm)\n\n # Make various adjustments to the plot.\n plt.colorbar()\n tick_marks = np.arange(num_classes)\n plt.xticks(tick_marks, range(num_classes))\n plt.yticks(tick_marks, range(num_classes))\n plt.xlabel('Predicted')\n plt.ylabel('True')\n\n # Ensure the plot is shown correctly with multiple plots\n # in a single Notebook cell.\n plt.show()", "Helper-functions for calculating classifications\nThis function calculates the predicted classes of images and also returns a boolean array whether the classification of each image is correct.\nThe calculation is done in batches because it might use too much RAM otherwise. If your computer crashes then you can try and lower the batch-size.", "# Split the data-set in batches of this size to limit RAM usage.\nbatch_size = 256\n\ndef predict_cls(images, labels, cls_true):\n # Number of images.\n num_images = len(images)\n\n # Allocate an array for the predicted classes which\n # will be calculated in batches and filled into this array.\n cls_pred = np.zeros(shape=num_images, dtype=np.int)\n\n # Now calculate the predicted classes for the batches.\n # We will just iterate through all the batches.\n # There might be a more clever and Pythonic way of doing this.\n\n # The starting index for the next batch is denoted i.\n i = 0\n\n while i < num_images:\n # The ending index for the next batch is denoted j.\n j = min(i + batch_size, num_images)\n\n # Create a feed-dict with the images and labels\n # between index i and j.\n feed_dict = {x: images[i:j, :],\n y_true: labels[i:j, :]}\n\n # Calculate the predicted class using TensorFlow.\n cls_pred[i:j] = session.run(y_pred_cls, feed_dict=feed_dict)\n\n # Set the start-index for the next batch to the\n # end-index of the current batch.\n i = j\n\n # Create a boolean array whether each image is correctly classified.\n correct = (cls_true == cls_pred)\n\n return correct, cls_pred", "Calculate the predicted class for the test-set.", "def predict_cls_test():\n return predict_cls(images = data.test.images,\n labels = data.test.labels,\n cls_true = data.test.cls)", "Calculate the predicted class for the validation-set.", "def predict_cls_validation():\n return predict_cls(images = data.validation.images,\n labels = data.validation.labels,\n cls_true = data.validation.cls)", "Helper-functions for the classification accuracy\nThis function calculates the classification accuracy given a boolean array whether each image was correctly classified. E.g. 
cls_accuracy([True, True, False, False, False]) = 2/5 = 0.4", "def cls_accuracy(correct):\n # Calculate the number of correctly classified images.\n # When summing a boolean array, False means 0 and True means 1.\n correct_sum = correct.sum()\n\n # Classification accuracy is the number of correctly classified\n # images divided by the total number of images in the test-set.\n acc = float(correct_sum) / len(correct)\n\n return acc, correct_sum", "Calculate the classification accuracy on the validation-set.", "def validation_accuracy():\n # Get the array of booleans whether the classifications are correct\n # for the validation-set.\n # The function returns two values but we only need the first.\n correct, _ = predict_cls_validation()\n \n # Calculate the classification accuracy and return it.\n return cls_accuracy(correct)", "Helper-function for showing the performance\nFunction for printing the classification accuracy on the test-set.\nIt takes a while to compute the classification for all the images in the test-set, that's why the results are re-used by calling the above functions directly from this function, so the classifications don't have to be recalculated by each function.", "def print_test_accuracy(show_example_errors=False,\n show_confusion_matrix=False):\n\n # For all the images in the test-set,\n # calculate the predicted classes and whether they are correct.\n correct, cls_pred = predict_cls_test()\n\n # Classification accuracy and the number of correct classifications.\n acc, num_correct = cls_accuracy(correct)\n \n # Number of images being classified.\n num_images = len(correct)\n\n # Print the accuracy.\n msg = \"Accuracy on Test-Set: {0:.1%} ({1} / {2})\"\n print(msg.format(acc, num_correct, num_images))\n\n # Plot some examples of mis-classifications, if desired.\n if show_example_errors:\n print(\"Example errors:\")\n plot_example_errors(cls_pred=cls_pred, correct=correct)\n\n # Plot the confusion matrix, if desired.\n if show_confusion_matrix:\n print(\"Confusion Matrix:\")\n plot_confusion_matrix(cls_pred=cls_pred)", "Helper-function for plotting convolutional weights", "def plot_conv_weights(weights, input_channel=0):\n # Assume weights are TensorFlow ops for 4-dim variables\n # e.g. weights_conv1 or weights_conv2.\n\n # Retrieve the values of the weight-variables from TensorFlow.\n # A feed-dict is not necessary because nothing is calculated.\n w = session.run(weights)\n\n # Print mean and standard deviation.\n print(\"Mean: {0:.5f}, Stdev: {1:.5f}\".format(w.mean(), w.std()))\n \n # Get the lowest and highest values for the weights.\n # This is used to correct the colour intensity across\n # the images so they can be compared with each other.\n w_min = np.min(w)\n w_max = np.max(w)\n\n # Number of filters used in the conv. layer.\n num_filters = w.shape[3]\n\n # Number of grids to plot.\n # Rounded-up, square-root of the number of filters.\n num_grids = math.ceil(math.sqrt(num_filters))\n \n # Create figure with a grid of sub-plots.\n fig, axes = plt.subplots(num_grids, num_grids)\n\n # Plot all the filter-weights.\n for i, ax in enumerate(axes.flat):\n # Only plot the valid filter-weights.\n if i<num_filters:\n # Get the weights for the i'th filter of the input channel.\n # The format of this 4-dim tensor is determined by the\n # TensorFlow API. 
See Tutorial #02 for more details.\n img = w[:, :, input_channel, i]\n\n # Plot image.\n ax.imshow(img, vmin=w_min, vmax=w_max,\n interpolation='nearest', cmap='seismic')\n \n # Remove ticks from the plot.\n ax.set_xticks([])\n ax.set_yticks([])\n \n # Ensure the plot is shown correctly with multiple plots\n # in a single Notebook cell.\n plt.show()", "Performance before any optimization\nThe accuracy on the test-set is very low because the model variables have only been initialized and not optimized at all, so it just classifies the images randomly.", "print_test_accuracy()", "The convolutional weights are random, but it can be difficult to see any difference from the optimized weights that are shown below. The mean and standard deviation is shown so we can see whether there is a difference.", "plot_conv_weights(weights=weights_conv1)", "Perform 10,000 optimization iterations\nWe now perform 10,000 optimization iterations and abort the optimization if no improvement is found on the validation-set in 1000 iterations.\nAn asterisk * is shown if the classification accuracy on the validation-set is an improvement.", "optimize(num_iterations=10000)\n\nprint_test_accuracy(show_example_errors=True,\n show_confusion_matrix=True)", "The convolutional weights have now been optimized. Compare these to the random weights shown above. They appear to be almost identical. In fact, I first thought there was a bug in the program because the weights look identical before and after optimization.\nBut try and save the images and compare them side-by-side (you can just right-click the image to save it). You will notice very small differences before and after optimization.\nThe mean and standard deviation has also changed slightly, so the optimized weights must be different.", "plot_conv_weights(weights=weights_conv1)", "Initialize Variables Again\nRe-initialize all the variables of the neural network with random values.", "init_variables()", "This means the neural network classifies the images completely randomly again, so the classification accuracy is very poor because it is like random guesses.", "print_test_accuracy()", "The convolutional weights should now be different from the weights shown above.", "plot_conv_weights(weights=weights_conv1)", "Restore Best Variables\nRe-load all the variables that were saved to file during optimization.", "saver.restore(sess=session, save_path=save_path)", "The classification accuracy is high again when using the variables that were previously saved.\nNote that the classification accuracy may be slightly higher or lower than that reported above, because the variables in the file were chosen to maximize the classification accuracy on the validation-set, but the optimization actually continued for another 1000 iterations after saving those variables, so we are reporting the results for two slightly different sets of variables. 
Sometimes this leads to slightly better or worse performance on the test-set.", "print_test_accuracy(show_example_errors=True,\n show_confusion_matrix=True)", "The convolutional weights should be nearly identical to those shown above, although not completely identical because the weights shown above had 1000 optimization iterations more.", "plot_conv_weights(weights=weights_conv1)", "Close TensorFlow Session\nWe are now done using TensorFlow, so we close the session to release its resources.", "# This has been commented out in case you want to modify and experiment\n# with the Notebook without having to restart it.\n# session.close()", "Conclusion\nThis tutorial showed how to save and retrieve the variables of a neural network in TensorFlow. This can be used in different ways. For example, if you want to use a neural network for recognizing images then you only have to train the network once and you can then deploy the finished network on other computers.\nAnother use of checkpoints is if you have a very large neural network and data-set, then you may want to save checkpoints at regular intervals in case the computer crashes, so you can continue the optimization at a recent checkpoint instead of having to restart the optimization from the beginning.\nThis tutorial also showed how to use the validation-set for so-called Early Stopping, where the optimization was aborted if it did not regularly improve the validation error. This is useful if the neural network starts to overfit and learn the noise of the training-set; although it was not really an issue with the convolutional network and MNIST data-set used in this tutorial.\nAn interesting observation was that the convolutional weights (or filters) changed very little from the optimization, even though the performance of the network went from random guesses to near-perfect classification. It seems strange that the random weights were almost good enough. Why do you think this happens?\nExercises\nThese are a few suggestions for exercises that may help improve your skills with TensorFlow. It is important to get hands-on experience with TensorFlow in order to learn how to use it properly.\nYou may want to backup this Notebook before making any changes.\n\nOptimization is stopped after 1000 iterations without improvement. Is this enough? Can you think of a better way to do Early Stopping? Try and implement it.\nIf the checkpoint file already exists then load it instead of doing the optimization.\nSave a new checkpoint for every 100 optimization iterations. Retrieve the latest using saver.latest_checkpoint(). Why would you want to save multiple checkpionts instead of just the most recent?\nTry and change the neural network, e.g. by adding another layer. What happens when you reload the variables from a different network?\nPlot the weights for the 2nd convolutional layer before and after optimization using the function plot_conv_weights(). 
Are they almost identical as well?\nWhy do you think the optimized convolutional weights are almost the same as the random initialization?\nRemake the program yourself without looking too much at this source-code.\nExplain to a friend how the program works.\n\nLicense (MIT)\nCopyright (c) 2016 by Magnus Erik Hvass Pedersen\nPermission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the \"Software\"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:\nThe above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.\nTHE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE." ]
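One of the exercises above asks for saving a checkpoint every 100 optimization iterations and then retrieving the latest one. Below is a minimal, untested sketch of how that could be done, assuming the TF1-style session, graph variables and training loop used in this tutorial; the names checkpoint_dir, periodic_saver and optimize_with_periodic_checkpoints are invented here for illustration, and the actual training step is only indicated in comments.

```python
import tensorflow as tf

# Hypothetical directory for the periodic checkpoints.
checkpoint_dir = 'checkpoints/'

# Keep only the 5 most recent checkpoints on disk.
periodic_saver = tf.train.Saver(max_to_keep=5)

def optimize_with_periodic_checkpoints(num_iterations, save_every=100):
    for i in range(num_iterations):
        # One optimization step would go here, as in the tutorial's
        # optimize() function, e.g.:
        # x_batch, y_true_batch = data.train.next_batch(train_batch_size)
        # session.run(optimizer, feed_dict={x: x_batch, y_true: y_true_batch})

        # Write a checkpoint every `save_every` iterations.
        if (i + 1) % save_every == 0:
            periodic_saver.save(sess=session,
                                save_path=checkpoint_dir + 'periodic_cnn',
                                global_step=i + 1)

# Later, restore the most recent checkpoint, if any was written.
latest = tf.train.latest_checkpoint(checkpoint_dir)
if latest is not None:
    periodic_saver.restore(sess=session, save_path=latest)
```

Note that the exercise text mentions saver.latest_checkpoint(); in the TensorFlow 1.x API this lookup is typically done with the module-level function tf.train.latest_checkpoint(), as sketched above.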
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
carthach/essentia
src/examples/tutorial/example_discontinuitydetector.ipynb
agpl-3.0
[ "DiscontinuityDetector use example\nThis algorithm uses LPC and some heuristics to detect discontinuities in anaudio signal. [1].\n References:\n [1] Mühlbauer, R. (2010). Automatic Audio Defect Detection.", "import essentia.standard as es\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom IPython.display import Audio \nfrom essentia import array as esarr\nplt.rcParams[\"figure.figsize\"] =(12,9)\n\ndef compute(x, frame_size=1024, hop_size=512, **kwargs):\n discontinuityDetector = es.DiscontinuityDetector(frameSize=frame_size,\n hopSize=hop_size, \n **kwargs)\n locs = []\n amps = []\n for idx, frame in enumerate(es.FrameGenerator(x, frameSize=frame_size,\n hopSize=hop_size, startFromZero=True)):\n frame_locs, frame_ampls = discontinuityDetector(frame)\n\n for l in frame_locs:\n locs.append((l + hop_size * idx) / 44100.)\n for a in frame_ampls:\n amps.append(a)\n\n return locs, amps", "Generating some discontinuities examples\nLet's start by degrading some audio files with some discontinuities. Discontinuities are generally occasioned by hardware issues in the process of recording or copying. Let's simulate this by removing a random number of samples from the input audio file.", " def testRegression(self, frameSize=512, hopSize=256):\n fs = 44100\n\n audio = MonoLoader(filename=join(testdata.audio_dir,\n 'recorded/cat_purrrr.wav'),\n sampleRate=fs)()\n\n originalLen = len(audio)\n startJump = originalLen / 4\n groundTruth = [startJump / float(fs)]\n\n # make sure that the artificial jump produces a prominent discontinuity\n if audio[startJump] > 0:\n end = next(idx for idx, i in enumerate(audio[startJump:]) if i < -.3)\n else:\n end = next(idx for idx, i in enumerate(audio[startJump:]) if i > .3)\n\n endJump = startJump + end\n audio = esarr(np.hstack([audio[:startJump], audio[endJump:]]))\n\n frameList = []\n discontinuityDetector = self.InitDiscontinuityDetector(\n frameSize=frameSize, hopSize=hopSize,\n detectionThreshold=10)\n\n for idx, frame in enumerate(FrameGenerator(\n audio, frameSize=frameSize,\n hopSize=hopSize, startFromZero=True)):\n locs, _ = discontinuityDetector(frame)\n if not len(locs) == 0:\n for loc in locs:\n frameList.append((idx * hopSize + loc) / float(fs))\n\n self.assertAlmostEqualVector(frameList, groundTruth, 1e-7)\n\nfs = 44100.\n\naudio_dir = '../../audio/'\naudio = es.MonoLoader(filename='{}/{}'.format(audio_dir,\n 'recorded/vignesh.wav'),\n sampleRate=fs)()\n\noriginalLen = len(audio)\nstartJumps = np.array([originalLen / 4, originalLen / 2])\ngroundTruth = startJumps / float(fs)\n\nfor startJump in startJumps:\n # make sure that the artificial jump produces a prominent discontinuity\n if audio[startJump] > 0:\n end = next(idx for idx, i in enumerate(audio[startJump:]) if i < -.3)\n else:\n end = next(idx for idx, i in enumerate(audio[startJump:]) if i > .3)\n\n endJump = startJump + end\n audio = esarr(np.hstack([audio[:startJump], audio[endJump:]]))\n\n\nfor point in groundTruth:\n l1 = plt.axvline(point, color='g', alpha=.5)\n\ntimes = np.linspace(0, len(audio) / fs, len(audio))\nplt.plot(times, audio)\nplt.title('Signal with artificial clicks of different amplitudes')\nl1.set_label('Click locations')\nplt.legend()\n", "Lets listen to the clip to have an idea on how audible the discontinuities are", "Audio(audio, rate=fs)", "The algorithm\nThis algorithm outputs the starts and ends timestapms of the clicks. 
The following plots show how the algorithm performs in the previous examples.", "locs, amps = compute(audio)\n\nfig, ax = plt.subplots(len(groundTruth))\nplt.subplots_adjust(hspace=.4)\nfor idx, point in enumerate(groundTruth):\n l1 = ax[idx].axvline(locs[idx], color='r', alpha=.5)\n l2 = ax[idx].axvline(point, color='g', alpha=.5)\n ax[idx].plot(times, audio)\n ax[idx].set_xlim([point-.001, point+.001])\n ax[idx].set_title('Click located at {:.2f}s'.format(point))\n \n \n fig.legend((l1, l2), ('Detected discontinuity', 'Ground truth'), 'upper right')", "The parameters\nThis is an explanation of the most relevant parameters of the algorithm.\n\n\ndetectionThreshold. This parameter controls the detection sensitivity of the algorithm.\n\n\nkernelSize. A scalar giving the size of the median filter window. The window has to be as small as possible to improve the whitening of the signal, but big enough to skip peaky outliers from the prediction error signal.\n\n\norder. The order of the LPC. As a rule of thumb, use 2 coefficients for each formant in the input signal. However, it was empirically found that modelling more than 5 formants did not improve click detection on music.\n\n\nsilenceThreshold. It makes no sense to process silent frames, as even if there are events looking like discontinuities they can't be heard.\n\n\nsubFrameSize. It was found that frames that are partially silent are prone to false detections. This is because the audio is modelled as an autoregressive process in which discontinuities are easily detected as peaks in the prediction error. However, if the autoregressive assumption is no longer true, unexpected events can produce error peaks. Thus, a subframe window is used to mask out the silent part of the frame so it does not interfere with the autoregressive parameter estimation.\n\n\nenergyThreshold. Threshold in dB to detect silent subframes.\nA short parameter sweep over detectionThreshold is sketched below." ]
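A quick illustration of the detectionThreshold parameter described above: the sketch below simply re-runs the compute() helper defined earlier in this notebook on the same degraded audio with a few different threshold values (everything else keeps its defaults; the exact detections will depend on the audio used).

```python
# Sketch: sweep the detection threshold and report what is found.
# Reuses the `compute` helper and the degraded `audio` array from above.
for threshold in [5, 10, 20, 30]:
    locs, amps = compute(audio, detectionThreshold=threshold)
    print('detectionThreshold={:>2}: {} detection(s) at {}'.format(
        threshold, len(locs), ['{:.2f}s'.format(l) for l in locs]))
```

Lower thresholds make the detector more sensitive (more detections, possibly including false positives), while higher thresholds make it more conservative.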
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
esa-as/2016-ml-contest
HouMath/Face_classification_HouMath_XGB_01.ipynb
apache-2.0
[ "In this notebook, we mainly utilize extreme gradient boost to improve the prediction model originially proposed in TLE 2016 November machine learning tuotrial. Extreme gradient boost can be viewed as an enhanced version of gradient boost by using a more regularized model formalization to control over-fitting, and XGB usually performs better. Applications of XGB can be found in many Kaggle competitions. Some recommended tutorrials can be found\nOur work will be orginized in the follwing order:\n•Background\n•Exploratory Data Analysis\n•Data Prepration and Model Selection\n•Final Results\nBackground\nThe dataset we will use comes from a class excercise from The University of Kansas on Neural Networks and Fuzzy Systems. This exercise is based on a consortium project to use machine learning techniques to create a reservoir model of the largest gas fields in North America, the Hugoton and Panoma Fields. For more info on the origin of the data, see Bohling and Dubois (2003) and Dubois et al. (2007).\nThe dataset we will use is log data from nine wells that have been labeled with a facies type based on oberservation of core. We will use this log data to train a classifier to predict facies types.\nThis data is from the Council Grove gas reservoir in Southwest Kansas. The Panoma Council Grove Field is predominantly a carbonate gas reservoir encompassing 2700 square miles in Southwestern Kansas. This dataset is from nine wells (with 4149 examples), consisting of a set of seven predictor variables and a rock facies (class) for each example vector and validation (test) data (830 examples from two wells) having the same seven predictor variables in the feature vector. Facies are based on examination of cores from nine wells taken vertically at half-foot intervals. Predictor variables include five from wireline log measurements and two geologic constraining variables that are derived from geologic knowledge. These are essentially continuous variables sampled at a half-foot sample rate.\nThe seven predictor variables are:\n•Five wire line log curves include gamma ray (GR), resistivity logging (ILD_log10), photoelectric effect (PE), neutron-density porosity difference and average neutron-density porosity (DeltaPHI and PHIND). Note, some wells do not have PE.\n•Two geologic constraining variables: nonmarine-marine indicator (NM_M) and relative position (RELPOS)\nThe nine discrete facies (classes of rocks) are:\n1.Nonmarine sandstone\n2.Nonmarine coarse siltstone \n3.Nonmarine fine siltstone \n4.Marine siltstone and shale \n5.Mudstone (limestone)\n6.Wackestone (limestone)\n7.Dolomite\n8.Packstone-grainstone (limestone)\n9.Phylloid-algal bafflestone (limestone)\nThese facies aren't discrete, and gradually blend into one another. Some have neighboring facies that are rather close. Mislabeling within these neighboring facies can be expected to occur. The following table lists the facies, their abbreviated labels and their approximate neighbors.\nFacies/ Label/ Adjacent Facies\n1 SS 2 \n2 CSiS 1,3 \n3 FSiS 2 \n4 SiSh 5 \n5 MS 4,6 \n6 WS 5,7 \n7 D 6,8 \n8 PS 6,7,9 \n9 BS 7,8 \nThe first thing we notice for this data is that it seems that neighboring facies are not symmetric, for example, the adjacent facies for 9 could be 7, yet the adjacent facies for 7 couldn't be 9. We already contacted the authors regarding this. \nExprolatory Data Analysis\nAfter the background intorduction, we start to import the pandas library for some basic data analysis and manipulation. 
The matplotblib and seaborn are imported for data vislization.", "%matplotlib inline\nimport pandas as pd\nfrom pandas.tools.plotting import scatter_matrix\nimport matplotlib.pyplot as plt\nimport matplotlib as mpl\nimport seaborn as sns\nimport matplotlib.colors as colors\n\nfilename = '../facies_vectors.csv'\ntraining_data = pd.read_csv(filename)\ntraining_data\n\ntraining_data['Well Name'] = training_data['Well Name'].astype('category')\ntraining_data['Formation'] = training_data['Formation'].astype('category')\ntraining_data.info()\n\nfacies_colors = ['#F4D03F', '#F5B041','#DC7633','#6E2C00','#1B4F72',\n '#2E86C1', '#AED6F1', '#A569BD', '#196F3D']\n\nfacies_labels = ['SS', 'CSiS', 'FSiS', 'SiSh', 'MS','WS', 'D','PS', 'BS']\n\nfacies_counts = training_data['Facies'].value_counts().sort_index()\nfacies_counts.index = facies_labels\nfacies_counts.plot(kind='bar',color=facies_colors,title='Distribution of Training Data by Facies')\n\nsns.heatmap(training_data.corr(), vmax=1.0, square=True)\n\ntraining_data.describe()", "Data Preparation and Model Selection\nNow we are ready to test the XGB approach, along the way confusion matrix and f1_score are imported as metric for classification, as well as GridSearchCV, which is an excellent tool for parameter optimization.", "import xgboost as xgb\nimport numpy as np\nfrom sklearn.metrics import confusion_matrix, f1_score\nfrom classification_utilities import display_cm, display_adj_cm\nfrom sklearn.model_selection import GridSearchCV\n\nX_train = training_data.drop(['Facies', 'Well Name','Formation','Depth'], axis = 1 ) \nY_train = training_data['Facies' ] - 1\ndtrain = xgb.DMatrix(X_train, Y_train)", "The accuracy function and accuracy_adjacent function are defined in teh following to quatify the prediction correctness.", "def accuracy(conf):\n total_correct = 0.\n nb_classes = conf.shape[0]\n for i in np.arange(0,nb_classes):\n total_correct += conf[i][i]\n acc = total_correct/sum(sum(conf))\n return acc\n\nadjacent_facies = np.array([[1], [0,2], [1], [4], [3,5], [4,6,7], [5,7], [5,6,8], [6,7]])\n\ndef accuracy_adjacent(conf, adjacent_facies):\n nb_classes = conf.shape[0]\n total_correct = 0.\n for i in np.arange(0,nb_classes):\n total_correct += conf[i][i]\n for j in adjacent_facies[i]:\n total_correct += conf[i][j]\n return total_correct / sum(sum(conf))", "Initial model", "# Proposed Initial Model\nxgb1 = xgb.XGBClassifier( learning_rate =0.1, n_estimators=200, max_depth=5,\n min_child_weight=1, gamma=0, subsample=0.6,\n colsample_bytree=0.6, reg_alpha=0, reg_lambda=1, objective='multi:softmax',\n nthread=4, scale_pos_weight=1, seed=100)\n\n\n#Fit the algorithm on the data\nxgb1.fit(X_train, Y_train,eval_metric='merror')\n\n#Predict training set:\npredictions = xgb1.predict(X_train)\n \n#Print model report\n\n# Confusion Matrix\nconf = confusion_matrix(Y_train, predictions)\n\n# Print Results\nprint (\"\\nModel Report\")\nprint (\"-Accuracy: %.6f\" % ( accuracy(conf) ))\nprint (\"-Adjacent Accuracy: %.6f\" % ( accuracy_adjacent(conf, adjacent_facies) ))\n\nprint (\"\\nConfusion Matrix\")\ndisplay_cm(conf, facies_labels, display_metrics=True, hide_zeros=True)\n\n# Print Feature Importance\nfeat_imp = pd.Series(xgb1.booster().get_fscore()).sort_values(ascending=False)\nfeat_imp.plot(kind='bar', title='Feature Importances')\nplt.ylabel('Feature Importance Score')\n\n# Cross Validation parameters\ncv_folds = 10\nrounds = 100\n\nxgb_param_1 = xgb1.get_xgb_params()\nxgb_param_1['num_class'] = 9\n\n# Perform cross-validation\ncvresult1 = 
xgb.cv(xgb_param_1, dtrain, num_boost_round=xgb_param_1['n_estimators'], \n stratified = True, nfold=cv_folds, metrics='merror', early_stopping_rounds=rounds)\n\nprint (\"\\nCross Validation Training Report Summary\")\nprint (cvresult1.head())\nprint (cvresult1.tail())", "The typical range for learning rate is around 0.01~0.2, so we vary ther learning rate a bit and at the same time, scan over the number of boosted trees to fit. This will take a little bit of time to finish.", "print(\"Parameter optimization\")\ngrid_search1 = GridSearchCV(xgb1,{'learning_rate':[0.05,0.01,0.1,0.2] , 'n_estimators':[200,400,600,800]},\n scoring='accuracy' , n_jobs = 4)\ngrid_search1.fit(X_train,Y_train)\nprint(\"Best Set of Parameters\")\ngrid_search1.grid_scores_, grid_search1.best_params_, grid_search1.best_score_", "It seems that we need to adjust the learning rate and make it smaller, which could help to reduce overfitting in my opinion. The number of boosted trees to fit also requires to be updated.", "# Proposed Model with optimized learning rate and number of boosted trees to fit\nxgb2 = xgb.XGBClassifier( learning_rate =0.01, n_estimators=400, max_depth=5,\n min_child_weight=1, gamma=0, subsample=0.6,\n colsample_bytree=0.6, reg_alpha=0, reg_lambda=1, objective='multi:softmax',\n nthread=4, scale_pos_weight=1, seed=100)\n\n#Fit the algorithm on the data\nxgb2.fit(X_train, Y_train,eval_metric='merror')\n\n#Predict training set:\npredictions = xgb2.predict(X_train)\n \n#Print model report\n\n# Confusion Matrix\nconf = confusion_matrix(Y_train, predictions )\n\n# Print Results\nprint (\"\\nModel Report\")\nprint (\"-Accuracy: %.6f\" % ( accuracy(conf) ))\nprint (\"-Adjacent Accuracy: %.6f\" % ( accuracy_adjacent(conf, adjacent_facies) ))\n\n# Confusion Matrix\nprint (\"\\nConfusion Matrix\")\ndisplay_cm(conf, facies_labels, display_metrics=True, hide_zeros=True)\n\n# Print Feature Importance\nfeat_imp = pd.Series(xgb2.booster().get_fscore()).sort_values(ascending=False)\nfeat_imp.plot(kind='bar', title='Feature Importances')\nplt.ylabel('Feature Importance Score')\n\n# Cross Validation parameters\ncv_folds = 10\nrounds = 100\n\nxgb_param_2 = xgb2.get_xgb_params()\nxgb_param_2['num_class'] = 9\n\n# Perform cross-validation\ncvresult2 = xgb.cv(xgb_param_2, dtrain, num_boost_round=xgb_param_2['n_estimators'], \n stratified = True, nfold=cv_folds, metrics='merror', early_stopping_rounds=rounds)\n\nprint (\"\\nCross Validation Training Report Summary\")\nprint (cvresult2.head())\nprint (cvresult2.tail())\n\nprint(\"Parameter optimization\")\ngrid_search2 = GridSearchCV(xgb2,{'reg_alpha':[0, 0.05, 0.1, 0.2, 0.5, 1, 2, 5, 10], 'reg_lambda':[0, 0.05, 0.1, 0.2, 0.5, 1, 2, 5, 10] },\n scoring='accuracy' , n_jobs = 4)\ngrid_search2.fit(X_train,Y_train)\nprint(\"Best Set of Parameters\")\ngrid_search2.grid_scores_, grid_search2.best_params_, grid_search2.best_score_\n\n# Proposed Model with optimized regularization \nxgb3 = xgb.XGBClassifier( learning_rate =0.01, n_estimators=400, max_depth=5,\n min_child_weight=1, gamma=0, subsample=0.6,\n colsample_bytree=0.6, reg_alpha=0.1, reg_lambda=0.5, objective='multi:softmax',\n nthread=4, scale_pos_weight=1, seed=100)\n\n#Fit the algorithm on the data\nxgb3.fit(X_train, Y_train,eval_metric='merror')\n\n#Predict training set:\npredictions = xgb3.predict(X_train)\n \n#Print model report\n\n# Confusion Matrix\nconf = confusion_matrix(Y_train, predictions )\n\n# Print Results\nprint (\"\\nModel Report\")\nprint (\"-Accuracy: %.6f\" % ( accuracy(conf) ))\nprint (\"-Adjacent 
Accuracy: %.6f\" % ( accuracy_adjacent(conf, adjacent_facies) ))\n\n# Confusion Matrix\nprint (\"\\nConfusion Matrix\")\ndisplay_cm(conf, facies_labels, display_metrics=True, hide_zeros=True)\n\n# Print Feature Importance\nfeat_imp = pd.Series(xgb3.booster().get_fscore()).sort_values(ascending=False)\nfeat_imp.plot(kind='bar', title='Feature Importances')\nplt.ylabel('Feature Importance Score')\n\nprint(\"Parameter optimization\")\ngrid_search3 = GridSearchCV(xgb3,{'max_depth':[2, 5, 8], 'gamma':[0, 1], 'subsample':[0.4, 0.6, 0.8],'colsample_bytree':[0.4, 0.6, 0.8] },\n scoring='accuracy' , n_jobs = 4)\ngrid_search3.fit(X_train,Y_train)\nprint(\"Best Set of Parameters\")\ngrid_search3.grid_scores_, grid_search3.best_params_, grid_search3.best_score_\n\n# Load data \nfilename = '../facies_vectors.csv'\ndata = pd.read_csv(filename)\n\n# Change to category data type\ndata['Well Name'] = data['Well Name'].astype('category')\ndata['Formation'] = data['Formation'].astype('category')\n\n# Leave one well out for cross validation \nwell_names = data['Well Name'].unique()\nf1=[]\nfor i in range(len(well_names)):\n \n # Split data for training and testing\n X_train = data.drop(['Facies', 'Formation','Depth'], axis = 1 ) \n Y_train = data['Facies' ] - 1\n \n train_X = X_train[X_train['Well Name'] != well_names[i] ]\n train_Y = Y_train[X_train['Well Name'] != well_names[i] ]\n test_X = X_train[X_train['Well Name'] == well_names[i] ]\n test_Y = Y_train[X_train['Well Name'] == well_names[i] ]\n\n train_X = train_X.drop(['Well Name'], axis = 1 ) \n test_X = test_X.drop(['Well Name'], axis = 1 )\n\n # Final recommended model based on the extensive parameters search\n model_final = xgb.XGBClassifier( learning_rate =0.01, n_estimators=400, max_depth=5,\n min_child_weight=1, gamma=0, subsample=0.6, reg_alpha=0.1, reg_lambda=0.5,\n colsample_bytree=0.6, objective='multi:softmax',\n nthread=4, scale_pos_weight=1, seed=100)\n\n # Train the model based on training data\n model_final.fit( train_X , train_Y , eval_metric = 'merror' )\n\n\n # Predict on the test set\n predictions = model_final.predict(test_X)\n\n # Print report\n print (\"\\n------------------------------------------------------\")\n print (\"Validation on the leaving out well \" + well_names[i])\n conf = confusion_matrix( test_Y, predictions, labels = np.arange(9) )\n print (\"\\nModel Report\")\n print (\"-Accuracy: %.6f\" % ( accuracy(conf) ))\n print (\"-Adjacent Accuracy: %.6f\" % ( accuracy_adjacent(conf, adjacent_facies) ))\n print (\"-F1 Score: %.6f\" % ( f1_score ( test_Y , predictions , labels = np.arange(9), average = 'weighted' ) ))\n f1.append(f1_score ( test_Y , predictions , labels = np.arange(9), average = 'weighted' ))\n facies_labels = ['SS', 'CSiS', 'FSiS', 'SiSh', 'MS',\n 'WS', 'D','PS', 'BS']\n print (\"\\nConfusion Matrix Results\")\n from classification_utilities import display_cm, display_adj_cm\n display_cm(conf, facies_labels,display_metrics=True, hide_zeros=True)\n \nprint (\"\\n------------------------------------------------------\")\nprint (\"Final Results\")\nprint (\"-Average F1 Score: %6f\" % (sum(f1)/(1.0*len(f1))))\n\n# Load test data\ntest_data = pd.read_csv('../validation_data_nofacies.csv')\ntest_data['Well Name'] = test_data['Well Name'].astype('category')\nX_test = test_data.drop(['Formation', 'Well Name', 'Depth'], axis=1)\n# Predict facies of unclassified data\nY_predicted = model_final.predict(X_test)\ntest_data['Facies'] = Y_predicted + 1\n# Store the 
prediction\ntest_data.to_csv('Prediction1.csv')\n\ntest_data", "Future work, make more customerized objective function. Also, we could use RandomizedSearchCV instead of GridSearchCV to avoild potential local minimal trap and further improve the test results." ]
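Regarding the RandomizedSearchCV idea mentioned in the closing note, a hedged sketch of what that could look like is given below. It reuses the xgb3 estimator from the parameter search above and assumes a purely numeric feature matrix and label vector such as the X_train/Y_train used for the earlier grid searches; the distributions and n_iter are illustrative choices only.

```python
from scipy.stats import randint, uniform
from sklearn.model_selection import RandomizedSearchCV

# Sample 20 random parameter combinations instead of evaluating a full grid.
param_distributions = {
    'max_depth': randint(2, 9),             # integers 2..8
    'subsample': uniform(0.4, 0.4),         # floats in [0.4, 0.8]
    'colsample_bytree': uniform(0.4, 0.4),  # floats in [0.4, 0.8]
}

random_search = RandomizedSearchCV(xgb3, param_distributions,
                                   n_iter=20, scoring='accuracy',
                                   n_jobs=4, random_state=100)
random_search.fit(X_train, Y_train)

print(random_search.best_params_, random_search.best_score_)
```

Random search covers a wider range of values for the same compute budget, which can help avoid the local-minimum trap mentioned above.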
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
Olsthoorn/TransientGroundwaterFlow
exercises_notebooks/TheisWellFunction.ipynb
gpl-3.0
[ "Investigating the character of the Theis well function\nIntroduction\nIn the previous section the Theis well function was introduced. The function, which is in fact the function known as exponential integral by mathematicians, proved available in the standard library of Ptyhon module scipy.special. We modified it a little to make it match the Theis well function exactly and gave it the name \"W\" like it has in groundwater hydrology books. Then we used it in some examples.\nIn this chapter we will investigate the Theis well funchtion character a more accurately.\nInstead of looking for the function in the available library we could have computed the function ourselfs, for instance by numerical integration.\n$$ W(u) = \\intop_u^{-\\infty} \\frac {e^{-y}} y dy \\approx \\sum_0^N \\frac {e^{-y_i}} {y_i} \\Delta y_i $$\nwhere $y_0 = u_0$ and $N$ has a a sufficiently large value.", "import scipy.special as sp\n\nimport numpy as np\nfrom scipy.special import expi\n\ndef W(u): return -expi(-u)\n\ndef W1(u):\n \"\"\"Returns Theis' well function axpproximation by numerical intergration\n \n Works only for scalar u\n \"\"\"\n if not np.isscalar(u):\n raise ValueError(\"\",\"u must be a scalar\")\n \n LOG10INF = 2 # sufficient as exp(-100) is in the order of 1e-50\n y = np.logspace(np.log10(u), LOG10INF, 1000) # we use thousand intermediate values\n ym = 0.5 * (y[:-1] + y[1:])\n Dy = np.diff(y)\n w = np.sum( np.exp(-ym) / ym * Dy )\n return w", "Try it out", "U = 4 * 10** -np.arange(11.) # generates values 4, 4e-1, 4e-2 .. 4e-10\nprint(\"{:>10s} {:>10s} {:>10s}\".format('u ', 'W(u)','W1(u) '))\nfor u in U:\n print(\"{0:10.1e} {1:10.4e} {2:10.4e}\".format(u, W(u), W1(u)))", "Is seems that our numerical integration is a fair approximation to four significant digits, but not better, even when computed with 1000 steps as we did. So it is relatively easy to create one's own numerically computed value of an analytical expression like the exponential integral\nTheis well function as a power series\nThe theis well function can be expressed also as a power series. This expression has certain advanages as it gives insight into the behavior of its character and allows important simplifications and deductions.\n$$ W(u) = -0.5773 - \\ln(u) + u - \\frac {u^2} {2 . 2!} + \\frac {u^3} {3 . 3!} - \\frac {u^4} {4 . 4!} + ... $$\nThis series too can be readily numerially comptuted by first defining a function for it. The sum will be computed in a loop. To prevent having to compute faculties, it is easiest to compute each successive term from the previous one.\nSo to get from term m to term n+1:\n$$ \\frac {u^{n+1}} {(n+1) . (n+1)!} = \\frac {u^n} { n . n!} \\times \\frac {u \\, n} {(n+1)^2} $$\nThis series is implemented below.", "def W2(u):\n \"\"\"Returns Theis well function computed as a power series\"\"\"\n tol = 1e-5\n w = -0.5772 -np.log(u) + u\n a = u\n for n in range(1, 100):\n a = -a * u * n / (n+1)**2 # new term (next term)\n w += a\n if np.all(a) < tol:\n return w", "Compare the three methods of computing the well function.", "U = 4.0 * 10** -np.arange(11.) # generates values 4, 4e-1, 4e-2 .. 
4e-10\nprint(\"{:>10s} {:>10s} {:>10s} {:>10s}\".format('u ', 'W(u) ','W1(u) ', 'W2(u) '))\nfor u in U:\n print(\"{0:10.1e} {1:10.4e} {2:10.4e} {2:10.4e}\".format(u, W(u), W1(u), W2(u)))", "We see that all three methods yiedld the same results.\nNext we show the well function as it shown in groundwater hydrology books.", "u = np.logspace(-7, 1, 71)\n\nimport matplotlib.pylab as plt\nfig1= plt.figure()\nax1 = fig1.add_subplot(111)\nax1.set(xlabel='1/u', ylabel='W(u)', title='Theis type curve versus u', yscale='log', xscale='log')\nax1.grid(True)\nax1.plot(u, W(u), 'b', label='-expi(-u)')\n#ax1.plot(u, W1(u), 'rx', label='integal') # works only for scalars\nax1.plot(u, W2(u), 'g+', label='power series')\nax1.legend(loc='best')\nplt.show()", "The curve W(u) versus u runs counter intuitively which and is, therefore, confusing. Therefore, it generally presented as W(u) versus 1/u instead as shown below", "fig2 = plt.figure()\nax2 = fig2.add_subplot(111)\nax2.set(xlabel='1/u', ylabel='W(u)', title='Theis type curve versus 1/u', yscale='log', xscale='log')\nax2.grid(True)\nax2.plot(1/u, W(u))\nplt.show()", "Now W(u) resembles the actual drawdown, which increases with time.\nThe reason that this is so, becomes clear from the fact that\n$$ u = \\frac {r^2 S} {4 kD t} $$\nand that\n$$ \\frac 1 u = \\frac {4 kDt} {r^2 S} = \\frac {4 kD} S \\frac t {r^2} $$\nwhich shows that $\\frac 1 u$ increases with time, so that the values of $\\frac 1 u$ on the $\\frac 1 u$ axis are propotional with time and so the drawdown, i.e., the well function $W(u)$ increases with $\\frac 1 u$, which is less confusing.\nThe graph of $W(u)$ versus $\\frac 1 u$ is called the Theis type curve. It's vertical axis is proportional to the drawdown and its horizontal axis proportional to time.\nThe same curve is shown below but now on linear vertical scale and a logarithmic horizontal scale. The vertical scale was reversed (see values on y-axis) to obtain a curve that illustrates the decline of groundwater head with time caused by the extraction. This way of presending is probably least confusing when reading the curve.", "fig2 = plt.figure()\nax2 = fig2.add_subplot(111)\nax2.set(xlabel='1/u', ylabel='W(u)', title='Theis type curve versus 1/u', yscale='linear', xscale='log')\nax2.grid(True)\nax2.plot(1/u, W(u))\nax2.invert_yaxis()\nplt.show()", "Logarithmic approximaion of the Theis type curve\nWe see that after some time, the drawdown is linear when only the time-axis is logarithmic. This suggests that a logarithmic approximation of time-drawdown curve is accurate after some time.\nThat this is indeed the case can be deduced from the power series description of the type curve:\n$$ W(u) = -0.5773 - \\ln(u) + u - \\frac {u^2} {2 . 2!} + \\frac {u^3} {3 . 3!} - \\frac {u^4} {4 . 4!} + ... $$\nIt is clear that all terms to the right of u will be smaller than u when $u<1$. Hence when u is so small that it can be neglected relative to $\\ln(u)$, then all the terms to the right of $\\ln(u)$ can be neglected. Therefore we have the following spproximation\n$$ W(u) \\approx -0.5772 -\\ln(u) + O(u) $$\nfor\n$$ -\\ln(u)>>u \\,\\,\\,\\rightarrow \\,\\,\\, \\ln(u)<<-u \\,\\,\\, \\rightarrow \\,\\,\\, u<<e^{-u} \\, \\approx \\,1 $$\nwhich is practically the case for $u<0.01$, as can also be seen in the graph for $1/u = 10^2 $. 
From the graph one may conclude that even for 1/u>10 or u<0.1, the logarithmic type curve is straight and therefore can be accurately computed using a logarithmic approximation of the type curve.\nBelow, the error between the full Theis curve $W(u)$ and the approximation $Wa(u) = -0.5772 - \ln(u)$ is computed and shown. This reveals that at $u=0.1$ the error is 5.4% and at $u=0.01$ it has come down to only 0.2%.", "U = np.logspace(-2, 0, 21)\n\nWa = lambda u : -0.5772 - np.log(u)\n\nprint(\"{:>12s} {:>12s} {:>12s} {:>12s}\".format('u','W(u)','Wa(u)','1-Wa(u)/W(u)'))\nprint(\"{:>12s} {:>12s} {:>12s} {:>12s}\".format(' ',' ',' ','the error'))\nfor u in U:\n print(\"{:12.3g} {:12.3g} {:12.3g} {:12.1%}\".format(u, W(u), Wa(u), 1-Wa(u)/W(u)))\n\nU = np.logspace(-7, 1, 81)\nfig = plt.figure()\nax = fig.add_subplot(111)\nax.set(xlabel='1/u', ylabel='W(u)', title='Theis type curve and its logarithmic approximation', yscale='linear', xscale='log')\nax.grid(True)\nax.plot(1/U, W(U), 'b', linewidth = 2., label='Theis type curve')\nax.plot(1/U, Wa(U), 'r', linewidth = 0.25, label='log approximation')\nax.invert_yaxis()\nplt.legend(loc='best')\nplt.show()", "Hence, in any practical situation, the logarithmic approximation is accurate enough when $u<0.01$.\nThe approximation of the Theis type curve can now be elaborated:\n$$ Wa(u) \approx -0.5772 - \ln(u) = \ln(e^{-0.5772}) - \ln(u) = \ln(0.5615) - \ln(u) = \ln \frac {0.5615} {u} $$\nBecause $u = \frac {r^2 S} {4 kD t}$ we have, with $4 \times 0.5615 \approx 2.25$,\n$$ W(u) \approx \ln \frac {2.25 kD t} {r^2 S} $$\nand so the drawdown approximation becomes\n$$ s \approx \frac Q {4 \pi kD} \ln \frac {2.25 kD t} {r^2 S} $$\nThe condition u<0.1 can be translated to $\frac {r^2 S} {4 kD t} < 0.1$ or\n$$\frac t {r^2} > 2.5 \frac {S} {kD}$$\nRadius of influence\nThe previous logarithmic drawdown type curve versus $1/u$ can be seen as an image of the drawdown for a fixed distance and varying time. This is because $1/u$ is proportional to the real time. On the other hand, the drawdown type curve versus u may be regarded as the drawdown at a fixed time for varying distance. This follows from comparing\n$$ W(u)\approx \ln \frac {2.25 kD t} { r^2 S} \,\,\,\, versus\,\,\,\, \ln u = \ln \frac {r^2 S} {4 kD t} = 2 \ln \left( \sqrt{\frac {S} {4 kD t}}\, r\right) $$\nThat is, the horizontal axis is proportional to $r$ on a log scale. The plot reveals this:", "U = np.logspace(-7, 1, 81)\nfig = plt.figure()\nax = fig.add_subplot(111)\nax.set(xlabel='u', ylabel='W(u)', title='Theis type curve and its logarithmic approximation', yscale='linear', xscale='log')\nax.grid(True)\nax.plot(U, W(U), 'b', linewidth = 2., label='Theis type curve')\nax.plot(U, Wa(U), 'r', linewidth = 0.25, label='log approximation')\nax.invert_yaxis()\nplt.legend(loc='best')\nplt.show()", "This shows that the radius of influence is limited. We can now approximate this radius of influence by saying that the radius is where the approximated Theis curve, that is, the straight red line in the graph, intersects zero drawdown, i.e. $W(u) = 0$.\nHence, for the radius of influence, R, we have\n$$ \ln \frac {2.25 kD t} {R^2 S} = 0 $$\nimplying that\n$$ \frac {2.25 kD t } { R^2 S } = 1 $$\n$$ R =\sqrt { \frac {2.25 kD t} S} $$\nwith R the radius of influence. 
Computing the radius of influence is an easy way to determine how far out the drawdown affects the groundwater heads.\nPumping test\nIntroduction\nBelow are the data given that were obtained from a pumping test carried out on the site \"Oude Korendijk\" south of Rotterdam in the Netherlands (See Kruseman and De Ridder, p56, 59). The piezometers are all open at 20 m below ground surface. The groundwater head is shallow, within a m from ground surface. The first18 m below ground surface consist of clay,peat and clayey fine sand. These layers form a practially impermeable confining unit. Below this, between 18 and25 m below ground surface are 7 m of sand an some gravel, that form the aquifer. Fine sandy and clayey sediments thereunder from the base of the aquifer, which is considered impermeable.\nPiezometers wer installed at 30, 90 and 215 m from the well, open at 20 m below ground surface. The well has its screen installed over the whole thickness of the aquifer. We consider the aquifer as confined with no leakage. But we should look with a critical eye that the drawdown curves to verify to what extent this assumption holds true.\nThe drawdown data for the three piezometers is given below. The first column is time after the start of the pump in minutes; the second column is the drawdown in m.\nThe well extracts 788 m3/d\nThe objective of the pumping test is to determine the properties kD and S of the aquifer.\nThe data:", "# t[min], s[m]\nH30 = [ [0.0, 0.0],\n [0.1, 0.04],\n [0.25, 0.08],\n [0.50, 0.13],\n [0.70, 0.18],\n [1.00, 0.23],\n [1.40, 0.28],\n [1.90, 0.33],\n [2.33, 0.36],\n [2.80, 0.39],\n [3.36, 0.42],\n [4.00, 0.45],\n [5.35, 0.50],\n [6.80, 0.54],\n [8.30, 0.57],\n [8.70, 0.58],\n [10.0, 0.60],\n [13.1, 0.64]]\n\n# t[min], s[m]\nH90= [[0.0, 0.0],\n [1.5, 0.015],\n [2.0, 0.021],\n [2.16, 0.23],\n [2.66, 0.044],\n [3.00, 0.054],\n [3.50, 0.075],\n [4.00, 0.090],\n [4.33, 0.104],\n [5.50, 0.133],\n [6.0, 0.154],\n [7.5, 0.178],\n [9.0, 0.206],\n [13.0, 0.250],\n [15.0, 0.275],\n [18.0, 0.305],\n [25.0, 0.348],\n [30.0, 0.364]]\n\n# t[min], s[m]\nH215=[[0.0, 0.0],\n [66.0, 0.089],\n [127., 0.138],\n [185., 0.165],\n [251., 0.186]]", "To work out the test:\n\n\nShow the drawdown data on half-log time scale for the three piezometers.\n\n\nWhat do you expect these curves to look like? Do the drawdown lines become parallel after an initial time?\n\n\nUse the simplified drawdown formula to interpret the test.\n\nLook where the simplified drawdown curves become zero.\nDetermine the drawdown increase per log-cycle of time.\n4 From this information determine the transmissivity and the storage coefficient.\n\n\n\nShow the drawdown $s$ versus time on a double log graph for all three piezometers\n\nShow the drawdown $s$ versus t for all three piezometers\n\nShow the drawdown $s$ versus $ t/r^2 $, also for all three piezometers.\n\n\nWhat is the difference between the last two graphs (versus $t/r^2$ instead of versus $t$) ?\n\n\nMatch the computed drawdown by plotting it on the same graph and adapting the transmissivity and the storage coefficient.\n\n\nFor your analysis, write the computed drawdown as follows:\n$$ s = A\\times W(u \\times B) $$\nThen adapting A willshift the entire graph vertically, while adaptin B will shift it horizontally. 
This makes it easy to lay the computed curve on the data points.\n\n\nBy adapting A and B, determine their numerical values.\n\n\nDetermine the transmissivity kD and the storage coefficient S from A and B.\n\n\nHint:\nHaving determined A and B, compute kD and S by matching this formula with the true Theis drawdown, which is\n$$ s = \frac Q {4 \pi kD} W(\frac {r^2 S} { 4 kD t}) $$\nThat is, to make both equations equal, set\n$$ A = \frac Q {4 \pi kD} $$\nand set\n$$ \frac 1 {u\, B} = \frac {4 kD} S \frac t {r^2} $$\nExercises\n\nConsider an aquifer with constant transmissivity kD = 900 m2/d and phreatic storage coefficient S, with a well that extracts Q = 2400 m3/d.\n Use distances between 1 and 1000 m\n Use times between 0.01 and 100 days\nShow the drawdown as a function of time for different distances, where the drawdown is on a linear scale and the time on a logarithmic scale.\n Plot both the full Theis drawdown and the approximation.\nShow the drawdown as a function of distance for different times, where the drawdown is on a linear scale and the distance on a logarithmic scale.\n Plot both the full Theis drawdown and the approximation.\nAlso show the radius of influence for the different times on the drawdown versus distance curve. Just show them as red dots at $s=0$ and $r$ equal to the radius of influence. (A rough sketch for this first exercise is given below.)\nA well is installed in a desert area where there are no fixed head boundaries. The well is 60 m deep and the aquifer is 80 m thick. The top of the well screen is at 40 m below ground surface and the phreatic water level at 15 m. Test pumping has revealed that $kD = 600$ m2/d and the specific yield $Sy = 0.25$.\n As a first approximation, ignore the fact that the aquifer thickness gets less because of the falling water table caused by pumping.\n 1. How much can we pump until the head drops to halfway down the screen in 3 years?\n 1. How much if it were 30 years?\n 1. How much could we pump over the same period had we drilled three of the same wells on one line at 250 m mutual distance?\nAt an historic site the excavations need to be carried out in the dry. The water levels have gone up over the last 40 years due to a large reservoir that had been created in the river by building a dam downstream. The water level needs to be lowered by 10 m over a square area with sides of 30 m. The transmissivity is around 1200 m2/d according to test pumping and the specific yield about $Sy = 24\%$.\nDetermine the pump capacity if 4 wells are used in the corners of the excavation, which needs to be dry only two weeks after they are installed.\nThe excavations will last for 1 year. What will be the required pump capacity after this year?" ]
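A rough sketch for the first exercise above, reusing the W function and the numpy/matplotlib imports from this notebook. The storage coefficient value S = 0.25 is only an illustrative assumption, since the exercise mentions S without giving a number.

```python
# Sketch for the first exercise: Theis drawdown and its log approximation
# as a function of time for several distances.
kD, S, Q = 900., 0.25, 2400.          # kD [m2/d], S [-] (assumed), Q [m3/d]
r = np.array([1., 10., 100., 1000.])  # distances [m]
t = np.logspace(-2, 2, 41)            # times [d]

fig, ax = plt.subplots()
ax.set(xlabel='t [d]', ylabel='drawdown [m]', xscale='log',
       title='Theis drawdown vs time for several distances')
ax.grid(True)
for ri in r:
    u = ri**2 * S / (4 * kD * t)
    s_theis = Q / (4 * np.pi * kD) * W(u)
    # The log approximation is only accurate where u < 0.01 (see above).
    s_approx = Q / (4 * np.pi * kD) * np.log(2.25 * kD * t / (ri**2 * S))
    ax.plot(t, s_theis, label='Theis, r = {:.0f} m'.format(ri))
    ax.plot(t, s_approx, '--', label='approx, r = {:.0f} m'.format(ri))
ax.invert_yaxis()
ax.legend(loc='best', fontsize='small')
plt.show()
```

The drawdown-versus-distance plots and the radius-of-influence dots asked for in the exercise follow the same pattern, with $R=\sqrt{2.25\,kD\,t/S}$ plotted at $s=0$.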
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
GoogleCloudPlatform/analytics-componentized-patterns
retail/recommendation-system/bqml-scann/05_deploy_lookup_and_scann_caip.ipynb
apache-2.0
[ "Part 5: Deploy the solution to AI Platform Prediction\nThis notebook is the fifth of five notebooks that guide you through running the Real-time Item-to-item Recommendation with BigQuery ML Matrix Factorization and ScaNN solution.\nUse this notebook to complete the following tasks:\n\nDeploy the embedding lookup model to AI Platform Prediction. \nDeploy the ScaNN matching service to AI Platform Prediction by using a custom container. The ScaNN matching service is an application that wraps the ANN index model and provides additional functionality, like mapping item IDs to item embeddings.\nOptionally, export and deploy the matrix factorization model to AI Platform for exact matching.\n\nBefore starting this notebook, you must run the 04_build_embeddings_scann notebook to build an approximate nearest neighbor (ANN) index for the item embeddings.\nSetup\nImport the required libraries, configure the environment variables, and authenticate your GCP account.\nImport libraries", "import numpy as np\nimport tensorflow as tf", "Configure GCP environment settings\nUpdate the following variables to reflect the values for your GCP environment:\n\nPROJECT_ID: The ID of the Google Cloud project you are using to implement this solution.\nPROJECT_NUMBER: The number of the Google Cloud project you are using to implement this solution. You can find this in the Project info card on the project dashboard page.\nBUCKET: The name of the Cloud Storage bucket you created to use with this solution. The BUCKET value should be just the bucket name, so myBucket rather than gs://myBucket.\nREGION: The region to use for the AI Platform Prediction job.", "PROJECT_ID = 'yourProject' # Change to your project.\nPROJECT_NUMBER = 'yourProjectNumber' # Change to your project number\nBUCKET = 'yourBucketName' # Change to the bucket you created.\nREGION = 'yourPredictionRegion' # Change to your AI Platform Prediction region.\nARTIFACTS_REPOSITORY_NAME = 'ml-serving'\n\nEMBEDDNIG_LOOKUP_MODEL_OUTPUT_DIR = f'gs://{BUCKET}/bqml/embedding_lookup_model'\nEMBEDDNIG_LOOKUP_MODEL_NAME = 'item_embedding_lookup'\nEMBEDDNIG_LOOKUP_MODEL_VERSION = 'v1'\n\nINDEX_DIR = f'gs://{BUCKET}/bqml/scann_index'\nSCANN_MODEL_NAME = 'index_server'\nSCANN_MODEL_VERSION = 'v1'\n\nKIND = 'song'\n\n!gcloud config set project $PROJECT_ID", "Authenticate your GCP account\nThis is required if you run the notebook in Colab. 
If you use an AI Platform notebook, you should already be authenticated.", "try:\n from google.colab import auth\n auth.authenticate_user()\n print(\"Colab user is authenticated.\")\nexcept: pass", "Deploy the embedding lookup model to AI Platform Prediction\nCreate the embedding lookup model resource in AI Platform:", "!gcloud ai-platform models create {EMBEDDNIG_LOOKUP_MODEL_NAME} --region={REGION}", "Next, deploy the model:", "!gcloud ai-platform versions create {EMBEDDNIG_LOOKUP_MODEL_VERSION} \\\n --region={REGION} \\\n --model={EMBEDDNIG_LOOKUP_MODEL_NAME} \\\n --origin={EMBEDDNIG_LOOKUP_MODEL_OUTPUT_DIR} \\\n --runtime-version=2.2 \\\n --framework=TensorFlow \\\n --python-version=3.7 \\\n --machine-type=n1-standard-2\n\nprint(\"The model version is deployed to AI Platform Prediction.\")", "Once the model is deployed, you can verify it in the AI Platform console.\nTest the deployed embedding lookup AI Platform Prediction model\nSet the AI Platform Prediction API information:", "import googleapiclient.discovery\nfrom google.api_core.client_options import ClientOptions\n\napi_endpoint = f'https://{REGION}-ml.googleapis.com'\nclient_options = ClientOptions(api_endpoint=api_endpoint)\nservice = googleapiclient.discovery.build(\n serviceName='ml', version='v1', client_options=client_options)", "Run the caip_embedding_lookup method to retrieve item embeddings. This method accepts item IDs, calls the embedding lookup model in AI Platform Prediction, and returns the appropriate embedding vectors.", "def caip_embedding_lookup(input_items):\n request_body = {'instances': input_items}\n service_name = f'projects/{PROJECT_ID}/models/{EMBEDDNIG_LOOKUP_MODEL_NAME}/versions/{EMBEDDNIG_LOOKUP_MODEL_VERSION}'\n print(f'Calling : {service_name}')\n response = service.projects().predict(\n name=service_name, body=request_body).execute()\n\n if 'error' in response:\n raise RuntimeError(response['error'])\n\n return response['predictions']", "Test the caip_embedding_lookup method with three item IDs:", "input_items = ['2114406', '2114402 2120788', 'abc123']\n\nembeddings = caip_embedding_lookup(input_items)\nprint(f'Embeddings retrieved: {len(embeddings)}')\nfor idx, embedding in enumerate(embeddings):\n print(f'{input_items[idx]}: {embedding[:5]}')", "ScaNN matching service\nThe ScaNN matching service performs the following steps:\n\nReceives one or more item IDs from the client.\nCalls the embedding lookup model to fetch the embedding vectors of those item IDs.\nUses these embedding vectors to query the ANN index to find approximate nearest neighbor embedding vectors.\nMaps the approximate nearest neighbors embedding vectors to their corresponding item IDs.\nSends the item IDs back to the client.\n\nWhen the client receives the item IDs of the matches, the song title and artist information is fetched from Datastore in real-time to be displayed and served to the client application.\nNote: In practice, recommendation systems combine matches (from one or more indices) with user-provided filtering clauses (like where price <= value and colour =red), as well as other item metadata (like item categories, popularity, and recency) to ensure recommendation freshness and diversity. In addition, ranking is commonly applied after generating the matches to decide the order in which they are served to the user. \nScaNN matching service implementation\nThe ScaNN matching service is implemented as a Flask application that runs on a gunicorn web server. 
This application is implemented in the main.py module.\nThe ScaNN matching service application works as follows:\n\nUses environmental variables to set configuration information, such as the Google Cloud location of the ScaNN index to load.\nLoads the ScaNN index as the ScaNNMatcher object is initiated.\n\nAs required by AI Platform Prediction, exposes two HTTP endpoints:\n\nhealth: a GET method to which AI Platform Prediction sends health checks.\npredict: a POST method to which AI Platform Prediction forwards prediction requests.\n\nThe predict method expects JSON requests in the form {\"instances\":[{\"query\": \"item123\", \"show\": 10}]}, where query represents the item ID to retrieve matches for, and show represents the number of matches to retrieve.\nThe predict method works as follows:\n1. Validates the received request object.\n1. Extracts the `query` and `show` values from the request object.\n1. Calls `embedding_lookup.lookup` with the given query item ID to get its embedding vector from the embedding lookup model.\n1. Calls `scann_matcher.match` with the query item embedding vector to retrieve its approximate nearest neighbor item IDs from the ANN Index.\n\nThe list of matching item IDs are put into JSON format and returned as the response of the predict method.\n\n\nDeploy the ScaNN matching service to AI Platform Prediction\nPackage the ScaNN matching service application in a custom container and deploy it to AI Platform Prediction.\nCreate an Artifact Registry for the Docker container image", "!gcloud beta artifacts repositories create {ARTIFACTS_REPOSITORY_NAME} \\\n --location={REGION} \\\n --repository-format=docker\n\n!gcloud beta auth configure-docker {REGION}-docker.pkg.dev --quiet", "Use Cloud Build to build the Docker container image\nThe container runs the gunicorn HTTP web server and executes the Flask app variable defined in the main.py module.\nThe container image to deploy to AI Platform Prediction is defined in a Dockerfile, as shown in the following code snippet:\n```\nFROM python:3.8-slim\nCOPY requirements.txt .\nRUN pip install -r requirements.txt\nCOPY . ./\nARG PORT\nENV PORT=$PORT\nCMD exec gunicorn --bind :$PORT main:app --workers=1 --threads 8 --timeout 1800\n```\nBuild the container image by using Cloud Build and specifying the cloudbuild.yaml file:", "IMAGE_URL = f'{REGION}-docker.pkg.dev/{PROJECT_ID}/{ARTIFACTS_REPOSITORY_NAME}/{SCANN_MODEL_NAME}:{SCANN_MODEL_VERSION}'\nPORT=5001\n\nSUBSTITUTIONS = ''\nSUBSTITUTIONS += f'_IMAGE_URL={IMAGE_URL},'\nSUBSTITUTIONS += f'_PORT={PORT}'\n\n!gcloud builds submit --config=index_server/cloudbuild.yaml \\\n --substitutions={SUBSTITUTIONS} \\\n --timeout=1h", "Run the following command to verify the container image has been built:", "repository_id = f'{REGION}-docker.pkg.dev/{PROJECT_ID}/{ARTIFACTS_REPOSITORY_NAME}'\n\n!gcloud beta artifacts docker images list {repository_id}", "Create a service account for AI Platform Prediction\nCreate a service account to run the custom container. 
This is required in cases where you want to grant specific permissions to the service account.", "SERVICE_ACCOUNT_NAME = 'caip-serving'\nSERVICE_ACCOUNT_EMAIL = f'{SERVICE_ACCOUNT_NAME}@{PROJECT_ID}.iam.gserviceaccount.com'\n!gcloud iam service-accounts create {SERVICE_ACCOUNT_NAME} \\\n --description=\"Service account for AI Platform Prediction to access cloud resources.\" ", "Grant the Cloud ML Engine (AI Platform) service account the iam.serviceAccountAdmin privilege, and grant the caip-serving service account the privileges required by the ScaNN matching service, which are storage.objectViewer and ml.developer.", "!gcloud projects describe {PROJECT_ID} --format=\"value(projectNumber)\"\n\n!gcloud projects add-iam-policy-binding {PROJECT_ID} \\\n --role=roles/iam.serviceAccountAdmin \\\n --member=serviceAccount:service-{PROJECT_NUMBER}@cloud-ml.google.com.iam.gserviceaccount.com\n\n!gcloud projects add-iam-policy-binding {PROJECT_ID} \\\n --role=roles/storage.objectViewer \\\n --member=serviceAccount:{SERVICE_ACCOUNT_EMAIL}\n \n!gcloud projects add-iam-policy-binding {PROJECT_ID} \\\n --role=roles/ml.developer \\\n --member=serviceAccount:{SERVICE_ACCOUNT_EMAIL}", "Deploy the custom container to AI Platform Prediction\nCreate the ANN index model resource in AI Platform:", "!gcloud ai-platform models create {SCANN_MODEL_NAME} --region={REGION}", "Deploy the custom container to AI Platform prediction. Note that you use the env-vars parameter to pass environmental variables to the Flask application in the container.", "HEALTH_ROUTE=f'/v1/models/{SCANN_MODEL_NAME}/versions/{SCANN_MODEL_VERSION}'\nPREDICT_ROUTE=f'/v1/models/{SCANN_MODEL_NAME}/versions/{SCANN_MODEL_VERSION}:predict'\n\nENV_VARIABLES = f'PROJECT_ID={PROJECT_ID},'\nENV_VARIABLES += f'REGION={REGION},'\nENV_VARIABLES += f'INDEX_DIR={INDEX_DIR},'\nENV_VARIABLES += f'EMBEDDNIG_LOOKUP_MODEL_NAME={EMBEDDNIG_LOOKUP_MODEL_NAME},'\nENV_VARIABLES += f'EMBEDDNIG_LOOKUP_MODEL_VERSION={EMBEDDNIG_LOOKUP_MODEL_VERSION}'\n\n!gcloud beta ai-platform versions create {SCANN_MODEL_VERSION} \\\n --region={REGION} \\\n --model={SCANN_MODEL_NAME} \\\n --image={IMAGE_URL} \\\n --ports={PORT} \\\n --predict-route={PREDICT_ROUTE} \\\n --health-route={HEALTH_ROUTE} \\\n --machine-type=n1-standard-4 \\\n --env-vars={ENV_VARIABLES} \\\n --service-account={SERVICE_ACCOUNT_EMAIL}\n\nprint(\"The model version is deployed to AI Platform Prediction.\")", "Test the Deployed ScaNN Index Service\nAfter deploying the custom container, test it by running the caip_scann_match method. This method accepts the parameter query_items, whose value is converted into a space-separated string of item IDs and treated as a single query. 
That is, a single embedding vector is retrieved from the embedding lookup model, and similar item IDs are retrieved from the ScaNN index given this embedding vector.", "from google.cloud import datastore\nimport requests\nclient = datastore.Client(PROJECT_ID)\n\ndef caip_scann_match(query_items, show=10):\n request_body = {\n 'instances': [{\n 'query':' '.join(query_items), \n 'show':show\n }]\n }\n \n service_name = f'projects/{PROJECT_ID}/models/{SCANN_MODEL_NAME}/versions/{SCANN_MODEL_VERSION}'\n print(f'Calling: {service_name}') \n response = service.projects().predict(\n name=service_name, body=request_body).execute()\n\n if 'error' in response:\n raise RuntimeError(response['error'])\n\n match_tokens = response['predictions']\n keys = [client.key(KIND, int(key)) for key in match_tokens]\n items = client.get_multi(keys)\n return items\n", "Call the caip_scann_match method with five item IDs and request five match items for each:", "songs = {\n '2120788': 'Limp Bizkit: My Way',\n '1086322': 'Jacques Brel: Ne Me Quitte Pas',\n '833391': 'Ricky Martin: Livin\\' la Vida Loca',\n '1579481': 'Dr. Dre: The Next Episode',\n '2954929': 'Black Sabbath: Iron Man'\n}\n\nfor item_Id, desc in songs.items():\n print(desc)\n print(\"==================\")\n similar_items = caip_scann_match([item_Id], 5)\n for similar_item in similar_items:\n print(f'- {similar_item[\"artist\"]}: {similar_item[\"track_title\"]}')\n print()", "(Optional) Deploy the matrix factorization model to AI Platform Prediction\nOptionally, you can deploy the matrix factorization model in order to perform exact item matching. The model takes Item1_Id as an input and outputs the top 50 recommended item2_Ids.\nExact matching returns better results, but takes significantly longer than approximate nearest neighbor matching. 
You might want to use exact item matching in cases where you are working with a very small data set and where latency isn't a primary concern.\nExport the model from BigQuery ML to Cloud Storage as a SavedModel", "BQ_DATASET_NAME = 'recommendations'\nBQML_MODEL_NAME = 'item_matching_model'\nBQML_MODEL_VERSION = 'v1' \nBQML_MODEL_OUTPUT_DIR = f'gs://{BUCKET}/bqml/item_matching_model'\n\n!bq --quiet extract -m {BQ_DATASET_NAME}.{BQML_MODEL_NAME} {BQML_MODEL_OUTPUT_DIR}\n\n!saved_model_cli show --dir {BQML_MODEL_OUTPUT_DIR} --tag_set serve --signature_def serving_default", "Deploy the exact matching model to AI Platform Prediction", "!gcloud ai-platform models create {BQML_MODEL_NAME} --region={REGION}\n\n!gcloud ai-platform versions create {BQML_MODEL_VERSION} \\\n --region={REGION} \\\n --model={BQML_MODEL_NAME} \\\n --origin={BQML_MODEL_OUTPUT_DIR} \\\n --runtime-version=2.2 \\\n --framework=TensorFlow \\\n --python-version=3.7 \\\n --machine-type=n1-standard-2\n\nprint(\"The model version is deployed to AI Platform Prediction.\")\n\ndef caip_bqml_matching(input_items, show):\n request_body = {'instances': input_items}\n service_name = f'projects/{PROJECT_ID}/models/{BQML_MODEL_NAME}/versions/{BQML_MODEL_VERSION}'\n print(f'Calling : {service_name}')\n response = service.projects().predict(\n name=service_name, body=request_body).execute()\n\n if 'error' in response:\n raise RuntimeError(response['error'])\n\n match_tokens = response['predictions'][0][\"predicted_item2_Id\"][:show]\n keys = [client.key(KIND, int(key)) for key in match_tokens]\n items = client.get_multi(keys)\n return items\n\nfor item_Id, desc in songs.items():\n print(desc)\n print(\"==================\")\n similar_items = caip_bqml_matching([int(item_Id)], 5)\n for similar_item in similar_items:\n print(f'- {similar_item[\"artist\"]}: {similar_item[\"track_title\"]}')\n print()", "License\nCopyright 2020 Google LLC\nLicensed under the Apache License, Version 2.0 (the \"License\");\nyou may not use this file except in compliance with the License. You may obtain a copy of the License at: http://www.apache.org/licenses/LICENSE-2.0\nUnless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. \nSee the License for the specific language governing permissions and limitations under the License.\nThis is not an official Google product but sample code provided for an educational purpose" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
vravishankar/Jupyter-Books
pandas/01.Pandas - Series Object.ipynb
mit
[ "Pandas\nPandas is a high-performance python library that provides a comprehensive set of data structures for manipulating tabular data, providing high-performance indexing, automatic alignment, reshaping, grouping, joining and statistical analyses capabilities.\nThe two primary data structures in pandas are the Series and the DataFrame objects.\nSeries Object\nThe Series object is the fundamental building block of pandas. A Series represents an one-dimensional array based on the NumPy ndarray but with a labeled index that significantly helps to access the elements.\nA Series always has an index even if one is not specified, by default pandas will create an index that consists of sequential integers starting from zero. Access to elements is not by integer position but using values in the index referred as Labels.\nImporting pandas into the application is simple. It is common to import both pandas and numpy with their objects mapped into the pd and np namespaces respectively.", "import numpy as np\nimport pandas as pd\npd.__version__\n\nnp.__version__\n\n# set some options to control output display\npd.set_option('display.notebook_repr_html',False)\npd.set_option('display.max_columns',10)\npd.set_option('display.max_rows',10)", "Creating Series\nA Series can be created and initialised by passing either a scalar value, a NumPy nd array, a Python list or a Python Dict as the data parameter of the Series constructor.", "# create one item series\ns1 = pd.Series(1)\ns1", "'0' is the index and '1' is the value. The data type (dtype) is also shown. We can also retrieve the value using the associated index.", "# get value with label 0\ns1[0]\n\n# create from list\ns2 = pd.Series([1,2,3,4,5])\ns2\n\n# get the values in the series\ns2.values\n\n# get the index of the series\ns2.index", "Creating Series with named index\nPandas will create different index types based on the type of data identified in the index parameter. These different index types are optimized to perform indexing operations for that specific data type. To specify the index at the time of creation of the Series, use the index parameter of the constructor.", "# explicitly create an index\n# index is alpha, not an integer\ns3 = pd.Series([1,2,3], index=['a','b','c'])\ns3\n\ns3.index", "Please note the type of the index items. It is not string but 'object'.", "# look up by label value and not object position\ns3['b']\n\n# position also works\ns3[2]\n\n# create series from an existing index\n# scalar value will be copied at each index label\ns4 = pd.Series(2,index=s2.index)\ns4", "It is a common practice to initialize the Series objects using NumPy ndarrays, and with various NumPy functions that create arrays. The following code creates a Series from five normally distributed values:", "np.random.seed(123456)\npd.Series(np.random.randn(5))\n\n# 0 through 9\npd.Series(np.linspace(0,9,10))\n\n# o through 8\npd.Series(np.arange(0,9))", "A Series can also be created from a Python dictionary. 
The keys of the dictionary are used as the index lables for the Series:", "s6 = pd.Series({'a':1,'b':2,'c':3,'d':4})\ns6", "Size, Shape, Count and Uniqueness of Values", "# example series which also contains a NaN\ns = pd.Series([0,1,1,2,3,4,5,6,7,np.NaN])\ns\n\n# length of the Series\nlen(s)\n\ns.size\n\n# shape is a tuple with one value\ns.shape\n\n# number of values not part of NaN can be found using count() method\ns.count()\n\n# all unique values\ns.unique()\n\n# count of non-NaN values, returned max to min order\ns.value_counts()", "Peeking at data with heads, tails and take\npandas provides the .head() and .tail() methods to examine just the first few or last records in a Series. By default, these return the first five or last rows respectively, but you can use the n parameter or just pass an integer to specify the number of rows:", "# first five\ns.head()\n\n# first three\ns.head(3)\n\n# last five\ns.tail()\n\n# last 2\ns.tail(n=2) # equivalent to s.tail(2)", "The .take() method will return the rows in a series that correspond to the zero-based positions specified in a list:", "# only take specific items\ns.take([0,3,9])", "Looking up values in Series\nValues in a Series object can be retrieved using the [] operator and passing either a single index label or a list of index labels.", "# single item lookup\ns3['a']\n\n# lookup by position since index is not an integer\ns3[2]\n\n# multiple items\ns3[['a','c']]\n\n# series with an integer index but not starting with 0\ns5 = pd.Series([1,2,3], index =[11,12,13])\ns5[12] # by value as value passed and index are both integer", "To alleviate the potential confusion in determining the label-based lookups versus position-based lookups, index based lookup can be enforced using the .loc[] accessor:", "# force lookup by index label\ns5.loc[12]", "Lookup by position can be enforced using the iloc[] accessor:", "# force lookup by position or location\ns5.iloc[1]\n\n# multiple items by index label\ns5.loc[[12,10]]\n\n# multiple items by position or location\ns5.iloc[[1,2]]", "If a location / position passed to .iloc[] in a list is out of bounds, an exception will be thrown. This is different than with .loc[], which if passed a label that does not exist, will return NaN as the value for that label:", "s5.loc[[12,-1,15]]", "A Series also has a property .ix that can be used to look up items either by label or by zero-based array position.", "s3\n\n# label based lookup\ns3.ix[['a','b']]\n\n# position based lookup\ns3.ix[[1,2]]", "This can become complicated if the indexes are integers and you pass a list of integers to ix. Since they are of the same type, the lookup will be by index label instead of position:", "# this looks by label and not position\n# note that 1,2 have NaN as those labels do not exist in the index\ns5.ix[[1,2,10,11]]", "Alignment via index labels\nA fundamental difference between a NumPy ndarray and a pandas Series is the ability of a Series to automatically align data from another Series based on label values before performing an operation.", "s6 = pd.Series([1,2,3,4], index=['a','b','c','d'])\ns6\n\ns7 = pd.Series([4,3,2,1], index=['d','c','b','a'])\ns7\n\ns6 + s7", "This is a very different result that what it would have been if it were two pure NumPy arrays being added. 
A NumPy ndarray would add the items in identical positions of each array resulting in different values.", "a1 = np.array([1,2,3,4,5])\na2 = np.array([5,4,3,2,1])\na1 + a2", "The process of adding two Series objects differs from the process of addition of arrays as it first aligns data based on index label values instead of simply applying the operation to elements in the same position. This becomes significantly powerful when using pandas Series to combine data based on labels instead of having to first order the data manually.\nArithmetic Operations\nArithemetic Operations <pre>(+,-,*,/)</pre> can be applied either to a Series or between 2 Series objects", "# multiply all values in s3 by 2\ns3 * 2\n\n# scalar series using the s3's index\n# not efficient as it will no use vectorisation\nt = pd.Series(2,s3.index)\ns3 * t", "To reinforce the point that alignment is being performed when applying arithmetic operations across two Series objects, look at the following two Series as examples:", "# we will add this to s9\ns8 = pd.Series({'a':1,'b':2,'c':3,'d':5})\ns8\n\ns9 = pd.Series({'b':6,'c':7,'d':9,'e':10})\ns9\n\n# NaN's result for a and e demonstrates alignment\ns8 + s9\n\ns10 = pd.Series([1.0,2.0,3.0],index=['a','a','b'])\ns10\n\ns11 = pd.Series([4.0,5.0,6.0], index=['a','a','c'])\ns11\n\n# will result in four 'a' index labels\ns10 + s11", "The reason for the above result is that during alignment, pandas actually performs a cartesian product of the sets of all the unique index labels in both Series objects, and then applies the specified operation on all items in the products.\nTo explain why there are four 'a' index values s10 contains two 'a' labels and s11 also contains two 'a' labels. Every combination of 'a' labels in each will be calculated resulting in four 'a' labels. There is one 'b' label from s10 and one 'c' label from s11. Since there is no matching label for either in the other Series object, they only result in a sing row in the resulting Series object.\nEach combination of values for 'a' in both Series are computed, resulting in the four values: 1+4,1+5,2+4 and 2+5.\nSo remember that an index can have duplicate labels, and during alignment this will result in a number of index labels equivalent to the products of the number of the labels in each Series.\nThe special case of Not-A-Number (NaN)\npandas mathematical operators and functions handle NaN in a special manner (compared to NumPy ndarray) that does not break the computations. pandas is lenient with missing data assuming that it is a common situation.", "nda = np.array([1,2,3,4,5])\nnda.mean()\n\n# mean of numpy array values with a NaN\nnda = np.array([1,2,3,4,np.NaN])\nnda.mean()\n\n# Series object ignores NaN values - does not get factored\ns = pd.Series(nda)\ns.mean()\n\n# handle NaN values like Numpy\ns.mean(skipna=False)", "Boolean selection\nItems in a Series can be selected, based on the value instead of index labels, via the utilization of a Boolean selection.", "# which rows have values that are > 5\ns = pd.Series(np.arange(0,10))\ns > 5\n\n# select rows where values are > 5\n# overloading the Series object [] operator\nlogicalResults = s > 5\ns[logicalResults]\n\n# a little shorter version\ns[s > 5]\n\n# using & operator\ns[(s>5)&(s<9)]\n\n# using | operator\ns[(s > 3) | (s < 5)]\n\n# are all items >= 0?\n(s >=0).all()\n\n# are any items < 2\ns[s < 2].any()", "The result of these logical expressions is a Boolean selection, a Series of True and False values. 
The .sum() method of a Series, when given a series of Boolean values, will treat True as 1 and False as 0. The following demonstrates using this to determine the number of items in a Series that satisfy a given expression:", "(s < 2).sum()", "Reindexing a Series\nReindexing in pandas is a process that makes the data in a Series or DataFrame match a given set of labels.\nThis process of performing a reindex includes the following steps:\n1. Reordering existing data to match a set of labels.\n2. Inserting NaN markers where no data exists for a label.\n3. Possibly, filling missing data for a label using some type of logic", "# sample series of five items\ns = pd.Series(np.random.randn(5))\ns\n\n# change the index\ns.index = ['a','b','c','d','e']\ns\n\n# concat copies index values verbatim\n# potentially making duplicates\nnp.random.seed(123456)\ns1 = pd.Series(np.random.randn(3))\ns2 = pd.Series(np.random.randn(3))\ncombined = pd.concat([s1,s2])\ncombined\n\n# reset the index\ncombined.index = np.arange(0,len(combined))\ncombined", "Greater flexibility in creating a new index is provided using the .reindex() method. An example of the flexibility of .reindex() over assigning the .index property directly is that the list provided to .reindex() can be of a different length than the number of rows in the Series:", "np.random.seed(123456)\ns1 = pd.Series(np.random.randn(4),['a','b','c','d'])\n# reindex with different number of labels\n# results in dropped rows and/or NaN's\ns2 = s1.reindex(['a','c','g'])\ns2", "There are several things here that are important to point out about .reindex() method.\n\nFirst is that the result of .reindex() method is a new Series. This new Series has an index with labels that are provided as parameter to reindex().\nFor each item in the given parameter list, if the original Series contains that label, then the value is assigned to that label.\nIf that label does not exist in the original Series, pandas assigns a NaN value.\nRows in the Series without a label specified in the parameter of .reindex() is not included in the result.\n\nTo demonstrate that the result of .reindex() is a new Series object, changing a value in s2 does not change the values in s1:", "# s2 is a different series than s1\ns2['a'] = 0\ns2\n\n# this did not modify s1\ns1", "Reindex is also useful when you want to align two Series to perform an operation on matching elements from each series; however, for some reason, the two Series has index labels that will not initially align.", "# different types for the same values of labels causes big issue\ns1 = pd.Series([0,1,2],index=[0,1,2])\ns2 = pd.Series([3,4,5],index=['0','1','2'])\ns1 + s2", "The reason why this happens in pandas are as follows:\n\npandas first tries to align by the indexes and finds no matches, so it copies the index labels from the first series and tries to append the indexes from the second Series.\nHowever, since they are different type, it defaults back to zero-based integer sequence that results in duplicate values.\nFinally, all values are NaN because the operation tries to add the item in the first Series with the integer label 0, which has a value of 0, but can't find the item in the other series and therefore the result in NaN.", "# reindex by casting the label types and we will get the desired result\ns2.index = s2.index.values.astype(int)\ns1 + s2", "The default action of inserting NaN as a missing value during reindexing can be changed by using the fill_value parameter of the method.", "# fill with 0 instead on NaN\ns2 = 
s.copy()\ns2.reindex(['a','f'],fill_value=0)", "When performing a reindex on ordered data such as a time series, it is possible to perform interpolation or filling of values. The following example demonstrates forward filling, often referred to as \"last known value\".", "# create example to demonstrate fills\ns3 = pd.Series(['red','green','blue'],index=[0,3,5])\ns3\n\n# forward fill using ffill method\ns3.reindex(np.arange(0,7), method='ffill')\n\n# backward fill using bfill method\ns3.reindex(np.arange(0,7),method='bfill')", "Modifying a Series in-place\nThere are several ways that an existing Series can be modified in-place having either its values changed or having rows added or deleted.\nA new item can be added to a Series by assigning a value to an index label that does not already exist.", "np.random.seed(123456)\ns = pd.Series(np.random.randn(3),index=['a','b','c'])\ns\n\n# change a value in the Series\n# this done in-place\n# a new Series is not returned that has a modified value\ns['d'] = 100\ns\n\n# value at a specific index label can be changed by assignment:\ns['d'] = -100\ns", "Items can be removed from a Series using the del() function and passing the index label(s) to be removed.", "del(s['a'])\ns", "Slicing a Series", "# a series to use for slicing\n# using index labels not starting at 0 to demonstrate\n# position based slicing\n\ns = pd.Series(np.arange(100,110),index=np.arange(10,20))\ns\n\n# items at position 0,2,4\ns[0:6:2]\n\n# equivalent to\ns.iloc[[0,2,4]]\n\n# first five by slicing, same as .head(5)\ns[:5]\n\n# fourth position to the end\ns[4:]\n\n# every other item in the first five positions\ns[:5:2]\n\n# every other item starting at the fourth position\ns[4::2]\n\n# reverse the series\ns[::-1]\n\n# every other starting at position 4, in reverse\ns[4::-2]\n\n# :-2 which means positions 0 through (10-2) which is [8]\ns[:-2]\n\n# last 3 items\n# equivalent to tail(3)\ns[-3:]\n\n# equivalent to s.tail(4).head(3)\ns[-4:-1]", "An important thing to keep in mind when using slicing, is that the result of the slice is actually a view into the original Series. Modification of values through the result of the slice will modify the original Series.", "# preserve s\n# slice with first 2 rows\ncopy = s.copy()\nslice = copy[:2]\nslice", "Now the assignment of a value to an element of a slice will change the value in the original Series:", "slice[11] = 1000\ncopy", "Slicing can be performed on Series objects with a non-integer index.", "# used to demonstrate the next two slices\ns = pd.Series(np.arange(0,5),index=['a','b','c','d','e'])\ns\n\n# slicing with integer values will extract items based on position:\ns[1:3]\n\n# with non-integer index, it is also possible to slice with values in the same type of the index:\ns['b':'d']" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
ramseylab/networkscompbio
class10_closeness_python3_template.ipynb
apache-2.0
[ "CS446/546 - Class Session 10 - Closeness centrality\nIn this class session we are going to scatter-plot the harmonic-mean closeness centralities\nof the vertices in the gene regulatory network (which we will obtain from Pathway Commons) with the vertices' degree centralities. We will get the geodesic path distances using igraph, which will use BFS for this graph.\nWe are going to use pandas, igraph, numpy, and timeit", "import pandas\nimport igraph\nimport numpy\nimport timeit", "Load in the SIF file for Pathway Commons, using pandas.read_csv and specifying the three column names species1, interaction_type, and species2:", "sif_data = pandas.read_csv(\"shared/pathway_commons.sif\",\n sep=\"\\t\", names=[\"species1\",\"interaction_type\",\"species2\"])", "Subset the data frame to include only rows for which the interaction_type column contains the string controls-expression-of; subset columns to include only columns species1 and species2 using the [ operator and the list [\"species1\",\"species2\"]; and eliminate redundant edges in the edge-list using the drop_duplicates method.", "interac_grn = sif_data[sif_data.interaction_type == \"controls-expression-of\"]\ninterac_grn_unique = interac_grn[[\"species1\",\"species2\"]].drop_duplicates()", "Create an undirected graph in igraph, from the dataframe edge-list, using Graph.TupleList and specifying directed=False. Print out the graph summary using the summary instance method.", "grn_igraph = igraph.Graph.TupleList(interac_grn_unique.values.tolist(), directed=False)\ngrn_igraph.summary()", "For one vertex at a time (iterating over the vertex sequence grn_igraph.vs), compute that vertex's harmonic mean closeness centrality using Eq. 7.30 from Newman's book. Don't forget to eliminate the \"0\" distance between a vertex and itself, in the results you get back from calling the shortest_paths method on the Vertex object. Just for information purposes, measure how long the code takes to run, in seconds, using timeit.default_timer().", "N = len(grn_igraph.vs)\n\n# allocate a vector to contain the vertex closeness centralities; initialize to zeroes\n# (so if a vertex is a singleton we don't have to update its closeness centrality)\ncloseness_centralities = numpy.zeros(N)\n\n# initialize a counter\nctr = 0\n\n# start the timer\nstart_time = timeit.default_timer()\n\n# for each vertex in `grn_igraph.vs`\nfor my_vertex in grn_igraph.vs:\n \n # compute the geodesic distance to every other vertex, from my_vertex, using the `shortest_paths` instance method;\n # put it in a numpy.array\n \n # filter the numpy array to include only entries that are nonzero and finite, using `> 0 & numpy.isfinite(...)`\n \n # if there are any distance values that survived the filtering, take their element-wise reciprocals, \n # then compute the sum, then divide by N-1 (following Eq. 7.30 in Newman)\n\n # increment the counter\n\n \n# compute the elapsed time\nci_elapsed = timeit.default_timer() - start_time\nprint(ci_elapsed)", "Histogram the harmonic-mean closeness centralities. Do they have a large dynamic range?", "import matplotlib.pyplot\nmatplotlib.pyplot.hist(closeness_centralities)\nmatplotlib.pyplot.xlabel(\"Ci\")\nmatplotlib.pyplot.ylabel(\"Frequency\")\nmatplotlib.pyplot.show()", "Scatter plot the harmonic-mean closeness centralities vs. the log10 degree. 
Is there any kind of relationship?", "ax = matplotlib.pyplot.gca()\nax.scatter(grn_igraph.degree(), closeness_centralities)\nax.set_xscale(\"log\")\nmatplotlib.pyplot.xlabel(\"degree\")\nmatplotlib.pyplot.ylabel(\"closeness\")\nmatplotlib.pyplot.show()", "Which protein has the highest harmonic-mean closeness centrality in the network, and what is its centrality value? use numpy.argmax", "print(numpy.max(closeness_centralities))\ngrn_igraph.vs[numpy.argmax(closeness_centralities)][\"name\"]", "Print names of the top 10 proteins in the network, by harmonic-mean closeness centrality:, using numpy.argsort:", "grn_igraph.vs[numpy.argsort(closeness_centralities)[::-1][0:9].tolist()][\"name\"]", "Answer should be:\n['CYP26A1', 'TCF3', 'LEF1', 'MYC', 'MAZ', 'FOXO4', 'MAX', 'PAX4', 'SREBF1']" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
eshlykov/mipt-day-after-day
optimizaion/kaggle/eshlykov-kaggle.ipynb
unlicense
[ "# Подключаем все необходимые библиотеки.\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport scipy as sp\nimport scipy.misc as scm # Для logsumexp и imread.\n%matplotlib inline\n\nimport math\nimport tqdm # Для отображения прогресса.\n\ndef read_test(size=3000):\n # 3000 - потому что не хочу запоминать, сколько всего картинок на самом деле.\n test = np.array([]).reshape(0, 3072) # Размер картинки 32x32, всего 3 цвета, итого 3072.\n for i in tqdm.tqdm(np.arange(size)):\n # Нужен try для случая, если картинки не существует.\n try:\n img = scm.imread('test/test/img_{}.jpg'.format(i)).reshape(1, -1)\n test = np.append(test, img, axis=0)\n except:\n # Ничего не надо делать, переходим к следующей картинке.\n continue\n return test / 127.5 - 1 # Отображаем все числа в отрезок [0; 1].\n\ndef read_train(size=7000):\n # 7000 - потому что не хочу запоминать, сколько всего картинок на самом деле.\n sample = np.array([]).reshape(0, 3072) # Размер картинки 32x32, всего 3 цвета, итого 3072.\n result = np.array([]).reshape(0, 2) # Векторы вида [0, 1] (indoor) либо [1, 0] (outdoor).\n for i in tqdm.tqdm(np.arange(size)):\n # Переберим теперь тип картинки.\n for res, door in enumerate(['outdoor', 'indoor']):\n # Нужен try для случая, если картинки не существует.\n try:\n img = scm.imread('train/train/{}_{}.jpg'.format(door, i)).reshape(1, -1)\n sample = np.append(sample, img, axis=0)\n # Заполняем вектор ответа - зависит лишь от типа картинки.\n result = np.append(result, np.zeros((1, 2)), axis=0)\n result[-1, res] = 1\n except:\n # Ничего не надо делать, переходим к следующей картинке.\n continue\n return sample / 127.5 - 1, result # Отображаем всечисла из выборки в отрезок [0; 1].\n\n# Считываем данные, которые будем исследовать.\ntest = read_test()\n\n# Считываем данные, на которых будем обучаться.\nsample, result = read_train()\n\n# Просто посмотрим на их размер.\nprint(test.shape, sample.shape, result.shape)\n\n# Пример картинок из выборки. Вывод странненький, но это не наша проблема. =)\nplt.figure(figsize=(10, 10))\nfor i in np.arange(10):\n plt.subplot(1, 10, i + 1)\n plt.imshow(np.reshape(sample[i], (32, 32, 3)))", "Будем совсем неразумно обучаться на всем train'е, так как тогда мы переобучимся,\nто есть наш алгоритм \"подгониться\" под закономерности, присущие только train'у,\nа на реальных данных будет неистово лажать. Так что train разделим на две части:\nна 75% будем обучаться, а на 25% проверять, что мы лажаем не неистово.\nЕсли возьмем первые 25% от всего train'а, то может быть несбалансированное число\noutdoor'ов и indoor'ов. Поэтому для train возьмем первые 75% outdoor'ов плюс\nпервые 75% indoor'ов. Тогда мы сохраним пропорции outdoor:indoor таким, какое оно\nво всем train'е. 
Будет особенно клево, если и в исследуемых данных соблюдается так же\nпропорция.", "# Выделяем outdoor'ы и indoor'ы.\nsample_out = sample[result[:, 0] == 1]\nsample_in = sample[result[:, 1] == 1]\nresult_out = result[result[:, 0] == 1]\nresult_in = result[result[:, 1] == 1]\n\n# Считаем размер indoor- и outdoor-частей в train'е.\ntrain_size_in = int(sample_in.shape[0] * 0.75)\ntrain_size_out = int(sample_out.shape[0] * 0.75)\n\n# Разделяем outdoor'ы и indoor'ы на обучающую и тестовую часть.\nx_train_out, x_test_out = np.split(sample_out, [train_size_out])\ny_train_out, y_test_out = np.split(result_out, [train_size_out])\nx_train_in, x_test_in = np.split(sample_in, [train_size_in])\ny_train_in, y_test_in = np.split(result_in, [train_size_in])\n\n# Делаем общий train и test, смешивая indoor'ы и outdoor'ы.\nx_train = np.vstack([x_train_in, x_train_out])\ny_train = np.vstack([y_train_in, y_train_out])\nx_test = np.vstack([x_test_in, x_test_out])\ny_test = np.vstack([y_test_in, y_test_out])", "Для каждой картинки мы хотим найти вектор $(p_0, p_1)$, вероятностей такой, что $p_i$ - вероятность того, что картинка принадлежит классу $i$ ($0$ — outdoor, $1$ — indoor).\nРеализуя логистическую регрессию, мы хотим приближать вероятности к их настоящему распределению. \nВыражение выдает ответ вида $$ W x + b, $$\nгде $x$ — наш вектор картинки, а результат — числовой вектор размерности $2$ с какими-то числами. Для того, чтобы эти числа стали вероятностями от $0$ до $1$, реализуем функцию \n$$\n\\text{softmax}(W, b, x) = \\frac{e^{Wx+b}}{\\sum(e^{Wx+b})},\n$$\nи полученные значения будут как раз давать в сумме 1, и ими мы будем приближать вероятности. \nОценивать качество нашей модели будем с помощью кросс-энтропии, см. https://en.wikipedia.org/wiki/Cross_entropy.\nСначала поймем, что $x$ - вектор размерности 3072, $W$ - матрица 2 на 3072, $b$ - вектор размерности 2.\nПоложим $x'i = x_i$ для $ i \\leqslant 3072 $ и $x'{3073} = 1$. Получили вектор $x'$ размерности 3073. Положим $W'{i,j} = W{i,j}$ для $ i \\leqslant 2, j \\leqslant 3073$ и $W'_{i,3073}=b_i$ для $ i \\leqslant 2 $.\nТаким образом, к вектору $x$ просто дописали 1, а к матрице $W$ просто приписали вектор $b$ справа.\nЗаметим теперь, что в точности верно равенство: $Wx+b=W'x'$. Теперь забьем на вектор $b$ и будем считать, что у нас есть матрица 10 на 3073, элементы которой надо оценить. Далее везде считаем $W' = W$ и $x' = x$.\nГрадиентный спуск считается по формуле: $W_{k+1} = W_k - \\eta_k \\nabla L(W_k)$, где $\\eta_k$ — шаг, а $L$ — функция $\\text{loss}$. Значит, нам надо посчитать градиент функции $L$, то есь найти ее частные производные по всем 6146 переменным.\nВспомним, как определяется $L$. Обозначим через $y$ вектор вида $(1, 0)$ либо $(0, 1)$, где 1 на $k$-м месте, где $k - 1$ — тип исследуемой картинки. Размерность $y$ равна 2. Сам вектор $y$ олицетворяет ответ для данной картинки.\nТогда\n$$ L(W) = -y_1 \\ln \\frac{e^{(Wx)1}}{e^{(Wx)_1} + e^{(Wx)_2}} -y{2} \\ln \\frac{e^{(Wx){2}}}{e^{(Wx)_1} + e^{(Wx)_2}} + \\frac{\\lambda}{2} \\sum{i=1}^{2} \\sum_{j=1}^{3073} W_{i,j}^2. $$\nПоследняя сумма — так называемый регуляризатор. Если у нас много признаков (у нас их 6146), то при логистической регресии может возникнуть переобучение. Добавляя все параметры в $\\text{loss}$, мы не сможем получить неестественного результата, когда какие-то параметры очень маленькие, а какие-то очень большие, потому что большие будут сильно увеличивать регуляризатор, а функция минимизируется. 
Таким образом, более вероятно получение подходящего результата.\nЭто описано в курсе Machine Learning by Stanford University во втором уроке третьей недели. Ссылка: https://www.coursera.org/learn/machine-learning/lecture/4BHEy/regularized-logistic-regression.\nТеперь найдем производную по $W_{i,j}$: $$\n\\frac{dL(W)}{dW_{i,j}} =\n-y_1 \\frac{e^{(Wx)1} + e^{(Wx)_2}}{e^{(Wx)_1}} \\cdot\n\\frac{-e^{(Wx)_1} e^{(Wx)_i} x_j}\n{e^{(Wx)_1} + e^{(Wx)_2}}\n-y{2} \\frac{e^{(Wx)1} + e^{(Wx)_2}}{e^{(Wx){2}}} \\cdot\n\\frac{-e^{(Wx){2}} e^{(Wx)_i} x_j}\n{e^{(Wx)_1} + e^{(Wx)_2}} -\\\n- y_i \\frac{e^{(Wx)_1} + e^{(Wx)_2}}{e^{(Wx)_i}} \\cdot\n\\frac{e^{(Wx)_i} x_j (e^{(Wx)_1} + e^{(Wx)_2})}\n{(e^{(Wx)_1} + e^{(Wx)_2})^2}\n+ \\lambda W{i,j}. $$\nУпростим немного: $$\n\\frac{dL(W)}{dW_{i,j}} =\n\\frac{ x_j e^{(Wx)i} (y_1 + y_2) }\n{e^{(Wx)_1} + e^{(Wx)_2}}\n-y_i x_j\n+ \\lambda W{i,j}. $$\nУпрощая еще сильнее, приходим к окончательному ответу: $$\n\\frac{dL(W)}{dW_{i,j}} =\\left( \\frac{e^{(Wx)i}}{e^{(Wx)_1} + e^{(Wx)_2}} - y_i \\right) x_j\n+ \\lambda W{i,j}.\n$$\nСоответственно, если $j = 3073$, то есть дифференцируем по переменным $ W_{1, 3073} = b_1, \\ldots, W_{2, 3073} = b_2$, то коэффициент перед скобкой просто 1.\nПерейдем к реализации.", "def softmax(W, x):\n # Функция logsumexp более стабтильно вычисляет функцию экспонент, почти\n # избавляя нас от проблемы переполнения.\n p = np.dot(x, W.T)\n return np.exp(p - scm.logsumexp(p, axis=1).reshape(-1, 1))\n\ndef loss(y, softmax, W, l):\n # Формула из Википедии по ссылке выше c добавленным регуляризатором.\n return np.mean(-np.sum(y * np.log(softmax), axis=1)) + l * np.trace(W @ W.T) / (2 * y.shape[0])\n\n# Считаем средний по всем картинкам градиент.\n# Градиент у нас будет не вектор, как мы привыкли, а матрица 2x3073.\ndef gradients(W, x, y, l):\n p = softmax(W, x)\n grads = (p - y).T @ x + l * W\n return grads / x.shape[0] # По максимимум матричных вычислений!\n\n# Выбор шага по правилу Армихо из семинарского листочка.\ndef armijo(W, x, y, l, alpha=0.5, beta=0.5):\n s = 1\n grad = gradients(W, x, y, l)\n dW = -grad # Направление спуска.\n loss_1 = loss(y_train, softmax(W + s * dW, x), W, l)\n loss_0 = loss(y_train, softmax(W, x), W, l)\n while loss_1 > loss_0 + alpha * s * (grad * dW).sum():\n s = beta * s\n loss_1 = loss(y_train, softmax(W + s * dW, x), W, l)\n loss_0 = loss(y_train, softmax(W, x), W, l)\n return s\n\ndef classify(x_train, x_test, y_train, y_test, iters, l):\n # Как было замечено выше, W Размера 2 на 3072, а b размера 2, но мы приписываем b к W.\n W = np.zeros((2, 3072))\n b = np.zeros(2)\n\n # Для приписывания запишем b как вектор столбец и воспользуемся функцией hstack.\n b = b.reshape(b.size, 1)\n W = np.hstack([W, b])\n\n # Соответственно, нужно поменять x_train и x_test, добавив по 1 снизу.\n fictious = np.ones((x_train.shape[0], 1))\n x_train = np.hstack([x_train, fictious])\n fictious = np.ones((x_test.shape[0], 1))\n x_test = np.hstack([x_test, fictious])\n\n # Будем записывать потери на каждом шаге спуска.\n losses_train = [loss(y_train, softmax(W, x_train), W, l)]\n losses_test = [loss(y_test, softmax(W, x_test), W, l)]\n\n # Собственно, сам спуск.\n for i in tqdm.tqdm(np.arange(iters)):\n # Именно так - в Армихо подставляется alpha = l, а l = 0!\n # Потому что я накосячил и не заметил! 
=)\n eta = armijo(W, x_train, y_train, 0, l)\n W = W - eta * gradients(W, x_train, y_train, l)\n losses_train.append(loss(y_train, softmax(W, x_train), W, l))\n losses_test.append(loss(y_test, softmax(W, x_test), W, l))\n\n # На выходе имеется оптимальное значение W и массивы потерь.\n return W, losses_train, losses_test\n\nl = 0.04 # Сработает лучше, чем вообще без регуляризатора (l = 0).\n\n# Нам хватит и 100 итераций, переобучение начинается достаточно быстро.\nW, losses_train, losses_test = classify(x_train, x_test, y_train, y_test, 100, l)\n\nplt.plot(losses_train, color='green', label='train')\nplt.plot(losses_test, color='red', label='test')\nplt.xlabel('Gradient descent iteration')\nplt.ylabel('Loss')\nplt.legend()\nplt.show()\n\niters = np.argmin(losses_test) # На этой итиреации ошибка на тесте минимальна.\n\n# Делаем столько итераций.\nW, losses_train, losses_test = classify(x_train, x_test, y_train, y_test, iters, l)", "Посчитаем среднюю квадратичную ошибку на тесте, чтобы прикинуть, что будет на Kaggle.", "# Добавляем 1 к выборке.\nnx_test = np.hstack([x_test, np.ones(x_test.shape[0]).reshape(x_test.shape[0], 1)])\nprobabilities = softmax(W, nx_test) # Считаем вероятности.\nrecognized = np.argmax(probabilities, axis=1) # Что распознано.\nanswers = np.argmax(y_test, axis=1) # Правильные ответы.\n\nnp.sqrt(np.mean((recognized - answers) ** 2)) # Собственно, ошибка.", "Теперь применяем найденную матрицу к исследумемым данным.", "# Добавляем 1 к выборке.\nntest = np.hstack([test, np.ones(test.shape[0]).reshape(test.shape[0], 1)])\nprobabilities = softmax(W, ntest) # Считаем вероятности.\nress = np.argmax(probabilities, axis=1).reshape(-1, 1) # Что распознано.\n\n# Осталось загнать все в табличку, чтобы ее записать в csv.\nids = np.arange(ress.size).reshape(-1, 1)\nsubmit = np.hstack([ids, ress])\n\n# Заполняем csv-шник.\nimport csv\nwith open('submission.csv', 'w', newline='') as csvfile:\n submission = csv.writer(csvfile, delimiter=',')\n submission.writerow(['id', 'res'])\n submission.writerows(submit)", "Вот и готово." ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
Jackporter415/phys202-2015-work
assignments/assignment10/ODEsEx03.ipynb
mit
[ "Ordinary Differential Equations Exercise 3\nImports", "%matplotlib inline\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport seaborn as sns\nfrom scipy.integrate import odeint\nfrom IPython.html.widgets import interact, fixed", "Damped, driven nonlinear pendulum\nThe equations of motion for a simple pendulum of mass $m$, length $l$ are:\n$$\n\\frac{d^2\\theta}{dt^2} = \\frac{-g}{\\ell}\\sin\\theta\n$$\nWhen a damping and periodic driving force are added the resulting system has much richer and interesting dynamics:\n$$\n\\frac{d^2\\theta}{dt^2} = \\frac{-g}{\\ell}\\sin\\theta - a \\omega - b \\sin(\\omega_0 t)\n$$\nIn this equation:\n\n$a$ governs the strength of the damping.\n$b$ governs the strength of the driving force.\n$\\omega_0$ is the angular frequency of the driving force.\n\nWhen $a=0$ and $b=0$, the energy/mass is conserved:\n$$E/m =g\\ell(1-\\cos(\\theta)) + \\frac{1}{2}\\ell^2\\omega^2$$\nBasic setup\nHere are the basic parameters we are going to use for this exercise:", "g = 9.81 # m/s^2\nl = 0.5 # length of pendulum, in meters\ntmax = 50. # seconds\nt = np.linspace(0, tmax, int(100*tmax))", "Write a function derivs for usage with scipy.integrate.odeint that computes the derivatives for the damped, driven harmonic oscillator. The solution vector at each time will be $\\vec{y}(t) = (\\theta(t),\\omega(t))$.", "def derivs(y, t, a, b, omega0):\n \"\"\"Compute the derivatives of the damped, driven pendulum.\n \n Parameters\n ----------\n y : ndarray\n The solution vector at the current time t[i]: [theta[i],omega[i]].\n t : float\n The current time t[i].\n a, b, omega0: float\n The parameters in the differential equation.\n \n Returns\n -------\n dy : ndarray\n The vector of derviatives at t[i]: [dtheta[i],domega[i]].\n \"\"\"\n\n theta = y[0]\n omega = y[1]\n answer = []\n for i in range(len(y)-1):\n dy = -g/l*np.sin(theta)-a*omega-b*np.sin(omega0*t)\n return dy\n \n \nderivs(np.array([np.pi,1.0]), 0, 1.0, 1.0, 1.0)\n\nassert np.allclose(derivs(np.array([np.pi,1.0]), 0, 1.0, 1.0, 1.0), [1.,-1.])\n\ndef energy(y):\n \"\"\"Compute the energy for the state array y.\n \n The state array y can have two forms:\n \n 1. It could be an ndim=1 array of np.array([theta,omega]) at a single time.\n 2. It could be an ndim=2 array where each row is the [theta,omega] at single\n time.\n \n Parameters\n ----------\n y : ndarray, list, tuple\n A solution vector\n \n Returns\n -------\n E/m : float (ndim=1) or ndarray (ndim=2)\n The energy per mass.\n \"\"\"\n # YOUR CODE HERE\n raise NotImplementedError()\n\nassert np.allclose(energy(np.array([np.pi,0])),g)\nassert np.allclose(energy(np.ones((10,2))), np.ones(10)*energy(np.array([1,1])))", "Simple pendulum\nUse the above functions to integrate the simple pendulum for the case where it starts at rest pointing vertically upwards. In this case, it should remain at rest with constant energy.\n\nIntegrate the equations of motion.\nPlot $E/m$ versus time.\nPlot $\\theta(t)$ and $\\omega(t)$ versus time.\nTune the atol and rtol arguments of odeint until $E/m$, $\\theta(t)$ and $\\omega(t)$ are constant.\n\nAnytime you have a differential equation with a a conserved quantity, it is critical to make sure the numerical solutions conserve that quantity as well. This also gives you an opportunity to find other bugs in your code. The default error tolerances (atol and rtol) used by odeint are not sufficiently small for this problem. 
Start by trying atol=1e-3, rtol=1e-2 and then decrease each by an order of magnitude until your solutions are stable.", "# YOUR CODE HERE\nraise NotImplementedError()\n\n# YOUR CODE HERE\nraise NotImplementedError()\n\n# YOUR CODE HERE\nraise NotImplementedError()\n\nassert True # leave this to grade the two plots and their tuning of atol, rtol.", "Damped pendulum\nWrite a plot_pendulum function that integrates the damped, driven pendulum differential equation for a particular set of parameters $[a,b,\\omega_0]$.\n\nUse the initial conditions $\\theta(0)=-\\pi + 0.1$ and $\\omega=0$.\nDecrease your atol and rtol even further and make sure your solutions have converged.\nMake a parametric plot of $[\\theta(t),\\omega(t)]$ versus time.\nUse the plot limits $\\theta \\in [-2 \\pi,2 \\pi]$ and $\\omega \\in [-10,10]$\nLabel your axes and customize your plot to make it beautiful and effective.", "def plot_pendulum(a=0.0, b=0.0, omega0=0.0):\n \"\"\"Integrate the damped, driven pendulum and make a phase plot of the solution.\"\"\"\n # YOUR CODE HERE\n raise NotImplementedError()", "Here is an example of the output of your plot_pendulum function that should show a decaying spiral.", "plot_pendulum(0.5, 0.0, 0.0)", "Use interact to explore the plot_pendulum function with:\n\na: a float slider over the interval $[0.0,1.0]$ with steps of $0.1$.\nb: a float slider over the interval $[0.0,10.0]$ with steps of $0.1$.\nomega0: a float slider over the interval $[0.0,10.0]$ with steps of $0.1$.", "# YOUR CODE HERE\nraise NotImplementedError()", "Use your interactive plot to explore the behavior of the damped, driven pendulum by varying the values of $a$, $b$ and $\\omega_0$.\n\nFirst start by increasing $a$ with $b=0$ and $\\omega_0=0$.\nThen fix $a$ at a non-zero value and start to increase $b$ and $\\omega_0$.\n\nDescribe the different classes of behaviors you observe below.\nYOUR ANSWER HERE" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
BradHub/SL-SPH
BEM_problem.ipynb
mit
[ "import math\nimport numpy\nfrom matplotlib import pyplot", "BEM method", "#Q = 2000/3 #strength of the source-sheet,stb/d\nh=25.26 #thickness of local gridblock,ft\nphi=0.2 #porosity \nkx=200 #pemerability in x direction,md\nky=200 #pemerability in y direction,md\nkr=kx/ky #pemerability ratio\nmiu=1 #viscosity,cp\n\nNw=1 #Number of well\nQwell_1=2000 #Flow rate of well 1\nBoundary_V=-400 #boundary velocity ft/day", "Boundary Discretization\nwe will create a discretization of the body geometry into panels (line segments in 2D). A panel's attributes are: its starting point, end point and mid-point, its length and its orientation. See the following figure for the nomenclature used in the code and equations below.\n<img src=\"./resources/PanelLocal.png\" width=\"300\">\n<center>Figure 1. Nomenclature of the boundary element in the local coordinates</center>\nCreate panel and well class", "class Panel:\n \"\"\"Contains information related to a panel.\"\"\"\n def __init__(self, xa, ya, xb, yb):\n \"\"\"Creates a panel.\n \n Arguments\n ---------\n xa, ya -- Cartesian coordinates of the first end-point.\n xb, yb -- Cartesian coordinates of the second end-point.\n \"\"\"\n self.xa, self.ya = xa, ya\n self.xb, self.yb = xb, yb\n \n self.xc, self.yc = (xa+xb)/2, (ya+yb)/2 # control-point (center-point)\n self.length = math.sqrt((xb-xa)**2+(yb-ya)**2) # length of the panel\n \n \n # orientation of the panel (angle between x-axis and panel)\n self.sinalpha=(yb-ya)/self.length\n self.cosalpha=(xb-xa)/self.length\n \n self.Q = 0. # source strength\n self.U = 0. # velocity component\n self.V = 0. # velocity component\n self.P = 0. # pressure coefficient\n\nclass Well:\n \"\"\"Contains information related to a panel.\"\"\"\n def __init__(self, xw, yw,rw,Q):\n \"\"\"Creates a panel.\n \n Arguments\n ---------\n xw, yw -- Cartesian coordinates of well source.\n Q -- Flow rate of well source.\n rw -- radius of well source.\n \"\"\"\n self.xw, self.yw = xw, yw\n \n self.Q = Q # source strength\n self.rw = rw # velocity component\n ", "We create a node distribution on the boundary that is refined near the corner with cosspace function", "def cosspace(st,ed,N):\n N=N+1\n AngleInc=numpy.pi/(N-1)\n CurAngle = AngleInc\n space=numpy.linspace(0,1,N)\n space[0]=st\n for i in range(N-1):\n space[i+1] = 0.5*numpy.abs(ed-st)*(1 - math.cos(CurAngle));\n CurAngle += AngleInc\n if ed<st:\n space[0]=ed\n space=space[::-1]\n return space", "Discretize boundary element along the boundary\nHere we implement BEM in a squre grid", "N=80 #Number of boundary element\nNbd=20 #Number of boundary element in each boundary\nDx=1. #Grid block length in X direction\nDy=1. 
#Gird block lenght in Y direction\n\n#Create the array\nx_ends = numpy.linspace(0, Dx, N) # computes a 1D-array for x\ny_ends = numpy.linspace(0, Dy, N) # computes a 1D-array for y\ninterval=cosspace(0,Dx,Nbd)\nrinterval=cosspace(Dx,0,Nbd)\n#interval=numpy.linspace(0,1,Nbd+1)\n#rinterval=numpy.linspace(1,0,Nbd+1)\n\n#Define the rectangle boundary\n\n\nfor i in range(Nbd):\n x_ends[i]=0\n y_ends[i]=interval[i]\n\nfor i in range(Nbd):\n x_ends[i+Nbd]=interval[i]\n y_ends[i+Nbd]=Dy\n \nfor i in range(Nbd):\n x_ends[i+Nbd*2]=Dx\n y_ends[i+Nbd*2]=rinterval[i]\n \nfor i in range(Nbd):\n x_ends[i+Nbd*3]=rinterval[i]\n y_ends[i+Nbd*3]=0\n \nx_ends,y_ends=numpy.append(x_ends, x_ends[0]), numpy.append(y_ends, y_ends[0])\n\n#Define the panel\npanels = numpy.empty(N, dtype=object)\nfor i in range(N):\n panels[i] = Panel(x_ends[i], y_ends[i], x_ends[i+1], y_ends[i+1])\n \n \n#Define the well\nwells = numpy.empty(Nw, dtype=object)\n\nwells[0]=Well(Dx/2,Dy/2,0.025,Qwell_1)\n\n#for i in range(N):\n # print(\"Panel Coordinate (%s,%s) sina,cosa (%s,%s) \" % (panels[i].xc,panels[i].yc,panels[i].sinalpha,panels[i].cosalpha))\n#print(\"Well Location (%s,%s) radius: %s Flow rate:%s \" % (wells[0].xw,wells[0].yw,wells[0].rw,wells[0].Q))", "Plot boundary elements and wells", "#Plot the panel\n%matplotlib inline\n\nval_x, val_y = 0.3, 0.3\nx_min, x_max = min(panel.xa for panel in panels), max(panel.xa for panel in panels)\ny_min, y_max = min(panel.ya for panel in panels), max(panel.ya for panel in panels)\nx_start, x_end = x_min-val_x*(x_max-x_min), x_max+val_x*(x_max-x_min)\ny_start, y_end = y_min-val_y*(y_max-y_min), y_max+val_y*(y_max-y_min)\n\nsize = 5\npyplot.figure(figsize=(size, (y_end-y_start)/(x_end-x_start)*size))\npyplot.grid(True)\npyplot.xlabel('x', fontsize=16)\npyplot.ylabel('y', fontsize=16)\npyplot.xlim(x_start, x_end)\npyplot.ylim(y_start, y_end)\n\npyplot.plot(numpy.append([panel.xa for panel in panels], panels[0].xa), \n numpy.append([panel.ya for panel in panels], panels[0].ya), \n linestyle='-', linewidth=1, marker='o', markersize=6, color='#CD2305');\npyplot.scatter(wells[0].xw,wells[0].yw,s=100,alpha=0.5)\n\npyplot.legend(['panels', 'Wells'], \n loc=1, prop={'size':12})", "Boundary element implementation\n<img src=\"./resources/BEMscheme2.png\" width=\"400\">\n<center>Figure 2. Representation of a local gridblock with boundary elements</center>\nGenerally, the influence of all the j panels on the i BE node can be expressed as follows:\n\\begin{matrix}\n{{c}{ij}}{{p}{i}}+{{p}{i}}\\int{{{s}{j}}}{{{H}{ij}}d{{s}{j}}}=({{v}{i}}\\cdot \\mathbf{n})\\int_{{{s}{j}}}{{{G}{ij}}}d{{s}_{j}}\n\\end{matrix}\nWhere,\n${{c}_{ij}}$ is the free term, cased by source position.\n<center>${{c}_{ij}}=\\left{ \\begin{matrix}\n \\begin{matrix}\n 1 & \\text{source j on the internal domain} \\\n\\end{matrix} \\\n \\begin{matrix}\n 0.5 & \\text{source j on the boundary} \\\n\\end{matrix} \\\n \\begin{matrix}\n 0 & \\text{source j on the external domain} \\\n\\end{matrix} \\\n\\end{matrix} \\right.$</center>\n$\\int_{{{s}{j}}}{{{H}{ij}}d{{s}_{j}}\\text{ }}$ is the integrated effect of the boundary element source i on the resulting normal flux at BE node j. 
\n$\\int_{{{s}{j}}}{{{G}{ij}}}d{{s}_{j}}$ is the is the integrated effect of the boundary element source i on the resulting pressure at BE node j\nLine segment source solution for pressure and velocity (Derived recently)\nThe integrated effect can be formulated using line segment source solution, which givs:\n\\begin{equation}\n\\int_{{{s}{j}}}{{{G}{ij}}}d{{s}{j}}=B{{Q}{w}}=P({{{x}'}{i}},{{{y}'}{i}})=-\\frac{70.60\\mu }{h\\sqrt{{{k}{x}}{{k}{y}}}}\\int_{t=0}^{t={{l}{j}}}{\\ln \\left{ {{({x}'-t\\cos {{\\alpha }{j}})}^{2}}+\\frac{{{k}{x}}}{{{k}{y}}}{{({y}'-t\\sin {{\\alpha }{j}})}^{2}} \\right}dt}\\cdot {{Q}{w}}\n\\end{equation}\n\\begin{equation}\n\\int_{{{s}{j}}}{{{H}{ij}}d{{s}{j}}\\text{ }}={{v}{i}}(s)\\cdot {{\\mathbf{n}}{i}}=-{{u}{i}}\\sin {{\\alpha }{i}}+{{v}{i}}\\cos {{\\alpha }_{i}}\n\\end{equation}\nWhere,\n\\begin{equation}\nu\\left( {{{{x}'}}{i}},{{{{y}'}}{i}} \\right)={{A}{u}}{{Q}{j}}=\\frac{0.8936}{h\\phi }\\sqrt{\\frac{{{k}{x}}}{{{k}{y}}}}\\int_{t=0}^{t={{l}{j}}}{\\frac{{{{{x}'}}{i}}-t\\cos {{\\alpha }{j}}}{{{\\left( {{{{x}'}}{i}}-t\\cos {{\\alpha }{j}} \\right)}^{2}}+\\frac{{{k}{x}}}{{{k}{y}}}{{({{{{y}'}}{i}}-t\\sin {{\\alpha }{j}})}^{2}}}dt}\\cdot {{Q}{j}}\n\\end{equation}\n\\begin{equation}\nv\\left( {{{{x}'}}{i}},{{{{y}'}}{i}} \\right)={{A}{v}}{{Q}{j}}=\\frac{0.8936}{h\\phi }\\sqrt{\\frac{{{k}{x}}}{{{k}{y}}}}\\int_{t=0}^{t={{l}{j}}}{\\frac{{{{{y}'}}{i}}-t\\sin {{\\alpha }{j}}}{{{\\left( {{{{x}'}}{i}}-t\\cos {{\\alpha }{j}} \\right)}^{2}}+\\frac{{{k}{x}}}{{{k}{y}}}{{({{{{y}'}}{i}}-t\\sin {{\\alpha }{j}})}^{2}}}dt}\\cdot {{Q}{j}}\n\\end{equation}\nLine segment source Integration function (Bij and Aij)", "#Panel infuence factor Bij\ndef InflueceP(x, y, panel):\n \"\"\"Evaluates the contribution of a panel at one point.\n \n Arguments\n ---------\n x, y -- Cartesian coordinates of the point.\n panel -- panel which contribution is evaluated.\n \n Returns\n -------\n Integral over the panel of the influence at one point.\n \"\"\"\n#Transfer global coordinate point(x,y) to local coordinate\n x=x-panel.xa\n y=y-panel.ya\n L1=panel.length\n \n#Calculate the pressure and velocity influence factor\n a=panel.cosalpha**2+kr*panel.sinalpha**2\n b=x*panel.cosalpha+kr*panel.sinalpha*y\n c=y*panel.cosalpha-x*panel.sinalpha\n dp=70.6*miu/h/math.sqrt(kx*ky)\n Cp = dp/a*(\n (\n b*math.log(x**2-2*b*L1+a*L1**2+kr*y**2)\n -L1*a*math.log((x-L1*panel.cosalpha)**2+kr*(y-L1*panel.sinalpha)**2)\n +2*math.sqrt(kr)*c*math.atan((b-a*L1)/math.sqrt(kr)/c)\n )\n -\n (\n b*math.log(x**2+kr*y**2)\n +2*math.sqrt(kr)*c*math.atan((b)/math.sqrt(kr)/c)\n ) \n )\n #debug\n #print(\"a: %s b:%s c:%s \" % (a,b,c))\n #angle=math.atan((b-a*L1)/math.sqrt(kr)/c)*180/numpy.pi\n #print(\"Magic angle:%s\"% angle)\n return Cp\n\ndef InflueceU(x, y, panel):\n \"\"\"Evaluates the contribution of a panel at one point.\n \n Arguments\n ---------\n x, y -- Cartesian coordinates of the point.\n panel -- panel which contribution is evaluated.\n \n Returns\n -------\n Integral over the panel of the influence at one point.\n \"\"\"\n#Transfer global coordinate point(x,y) to local coordinate\n x=x-panel.xa\n y=y-panel.ya\n L1=panel.length\n\n#Calculate the pressure and velocity influence factor\n a=panel.cosalpha**2+kr*panel.sinalpha**2\n b=x*panel.cosalpha+kr*panel.sinalpha*y\n c=y*panel.cosalpha-x*panel.sinalpha\n dv=-0.4468/h/phi*math.sqrt(kx/ky)\n Cu = dv/a*(\n ( \n panel.cosalpha*math.log(x**2-2*b*L1+a*L1**2+kr*y**2)+ 2*math.sqrt(kr)*panel.sinalpha*math.atan((a*L1-b)/math.sqrt(kr)/c) \n )\n -\n (\n 
panel.cosalpha*math.log(x**2+kr*y**2)+2*math.sqrt(kr)*panel.sinalpha*math.atan((-b)/math.sqrt(kr)/c)\n ) \n ) \n #print(\"a: %s b:%s c:%s \" % (a,b,c))\n #angle=math.atan((b-a*L1)/math.sqrt(kr)/c)*180/numpy.pi\n #print(\"Magic angle:%s\"% angle)\n return Cu\n\ndef InflueceV(x, y, panel):\n \"\"\"Evaluates the contribution of a panel at one point.\n \n Arguments\n ---------\n x, y -- Cartesian coordinates of the point.\n panel -- panel which contribution is evaluated.\n \n Returns\n -------\n Integral over the panel of the influence at one point.\n \"\"\"\n#Transfer global coordinate point(x,y) to local coordinate\n x=x-panel.xa\n y=y-panel.ya\n L1=panel.length\n\n#Calculate the pressure and velocity influence factor\n a=panel.cosalpha**2+kr*panel.sinalpha**2\n b=x*panel.cosalpha+kr*panel.sinalpha*y\n c=y*panel.cosalpha-x*panel.sinalpha\n dv=-0.4468/h/phi*math.sqrt(kx/ky)\n Cv = dv/a*(\n ( \n panel.sinalpha*math.log(x**2-2*b*L1+a*L1**2+kr*y**2)+ 2*math.sqrt(1/kr)*panel.cosalpha*math.atan((b-a*L1)/math.sqrt(kr)/c) \n )\n -\n (\n panel.sinalpha*math.log(x**2+kr*y**2)+2*math.sqrt(1/kr)*panel.cosalpha*math.atan((b)/math.sqrt(kr)/c)\n ) \n ) \n #print(\"a: %s b:%s c:%s \" % (a,b,c))\n #angle=math.atan((b-a*L1)/math.sqrt(kr)/c)*180/numpy.pi\n #print(\"Magic angle:%s\"% angle)\n\n return Cv", "Well source function\nLine source solution for pressure and velocity (Datta-Gupta, 2007)\n\\begin{equation}\nP(x,y)=B{{Q}{w}}=-\\frac{70.60\\mu }{h\\sqrt{{{k}{x}}{{k}{y}}}}\\ln \\left{ {{(x-{{x}{w}})}^{2}}+\\frac{{{k}{x}}}{{{k}{y}}}{{(y-{{y}{w}})}^{2}} \\right}{{Q}{w}}+{{P}_{avg}}\n\\end{equation}\n\\begin{equation}\n\\frac{\\partial P}{\\partial x}=u=\\frac{0.8936}{h\\phi }\\sqrt{\\frac{{{k}{x}}}{{{k}{y}}}}\\sum\\limits_{k=1}^{{{N}{w}}}{{{Q}{k}}}\\frac{x-{{x}{k}}}{{{\\left( x-{{x}{k}} \\right)}^{2}}+\\frac{{{k}{x}}}{{{k}{y}}}{{(y-{{y}_{k}})}^{2}}}\n\\end{equation}\n\\begin{equation}\n\\frac{\\partial P}{\\partial y}=v=\\frac{0.8936}{h\\phi }\\sqrt{\\frac{{{k}{x}}}{{{k}{y}}}}\\sum\\limits_{k=1}^{{{N}{w}}}{{{Q}{k}}}\\frac{y-{{y}{k}}}{{{\\left( x-{{x}{k}} \\right)}^{2}}+\\frac{{{k}{x}}}{{{k}{y}}}{{(y-{{y}_{k}})}^{2}}}\n\\end{equation}", "#Well influence factor\ndef InflueceP_W(x, y, well):\n \"\"\"Evaluates the contribution of a panel at one point.\n \n Arguments\n ---------\n x, y -- Cartesian coordinates of the point.\n panel -- panel which contribution is evaluated.\n \n Returns\n -------\n Integral over the panel of the influence at one point.\n \"\"\"\n dp=-70.6*miu/h/math.sqrt(kx*ky)\n Cp=dp*math.log((x-well.xw)**2+kr*(y-well.yw)**2)\n return Cp\n\ndef InflueceU_W(x, y, well):\n \"\"\"Evaluates the contribution of a panel at one point.\n \n Arguments\n ---------\n x, y -- Cartesian coordinates of the point.\n panel -- panel which contribution is evaluated.\n \n Returns\n -------\n Integral over the panel of the influence at one point.\n \"\"\"\n dv=0.8936/h/phi*math.sqrt(kx/ky)\n Cu=dv*(x-well.xw)/((x-well.xw)**2+kr*(y-well.yw)**2)\n return Cu\n\ndef InflueceV_W(x, y, well):\n \"\"\"Evaluates the contribution of a panel at one point.\n \n Arguments\n ---------\n x, y -- Cartesian coordinates of the point.\n panel -- panel which contribution is evaluated.\n \n Returns\n -------\n Integral over the panel of the influence at one point.\n \"\"\"\n dv=0.8936/h/phi*math.sqrt(kx/ky)\n Cv=dv*(y-well.yw)/((x-well.xw)**2+kr*(y-well.yw)**2)\n return Cv\n\n#InflueceV(0.5,1,panels[3])\n#InflueceP(0,0.5,panels[0])\n#InflueceU(0,0.5,panels[0])", "BEM function solution\nGenerally, the influence of all the j panels on 
the i BE node can be expressed as follows:\n\\begin{matrix}\n{{c}{ij}}{{p}{i}}+{{p}{i}}\\int{{{s}{j}}}{{{H}{ij}}d{{s}{j}}}=({{v}{i}}\\cdot \\mathbf{n})\\int_{{{s}{j}}}{{{G}{ij}}}d{{s}_{j}}\n\\end{matrix}\nApplying boundary condition along the boundary on above equation, a linear systsem can be constructed as follows:\n\\begin{matrix}\n\\left[ {{{{H}'}}{ij}} \\right]\\left[ {{P}{i}} \\right]=\\left[ {{G}{ij}} \\right]\\left[ {{v}{i}}\\cdot \\mathbf{n} \\right]\n\\end{matrix}\n!!!!!MY IMPLEMENTATION MAY HAS SOME PROBLEM HERE!!!!!!\nAll the integration solution can be evaluated except on itself. Where,\n<center>$\n\\left[ {{{{H}'}}{ij}} \\right]=\\left{ \\begin{matrix}\n \\begin{matrix}\n {{H}{ij}} & i\\ne j \\\n\\end{matrix} \\\n \\begin{matrix}\n {{H}_{ij}}+\\frac{1}{2} & i=j \\\n\\end{matrix} \\\n\\end{matrix} \\right.\n$</center>\n<img src=\"./resources/BEMscheme.png\" width=\"400\">\n<center>Figure 3. Representation of coordinate systems and the principle of superstition with well source and boundary element source </center>\nAs shown in Fig.3, the pressure and velocity at any point i in the local gridblock can be determined using Eqs. below. Applying principle of superposition for each BE node along the boundary (Fig. 3), boundary condition can be written as follows:\n\\begin{matrix}\n {{P}{i}}(s)=\\sum\\limits{j=1}^{M}{{{B}{ij}}{{Q}{j}}} & \\text{constant pressure boundary} \\\n\\end{matrix}\n\\begin{matrix}\n {{v}{i}}(s)\\cdot {{\\mathbf{n}}{i}}=\\sum\\limits_{j=1}^{M}{{{A}{ij}}{{Q}{j}}} & \\text{constant flux boundary} \\\n\\end{matrix}\nThe Pi and v ·n are the konwn boundary codition. The flow rate(strength) of boundary elements in Hij and Gij are the only unknown terms. \nSo we could rearrange the matrix above as linear system:\n<center>$\n{{\\left[ \\begin{matrix}\n {{A}{ij}} \\\n {{B}{ij}} \\\n\\end{matrix} \\right]}{N\\times N}}{{\\left[ \\begin{matrix}\n {{Q}{j}} \\\n {{Q}{j}} \\\n\\end{matrix} \\right]}{N\\times 1}}={{\\left[ \\begin{matrix}\n -{{u}{i}}\\sin {{\\alpha }{i}}+{{v}{i}}\\cos {{\\alpha }{i}} \\\n {{P}{i}} \\\n\\end{matrix} \\right]}{N\\times 1}}\n$</center>", "def build_matrix(panels):\n \"\"\"Builds the source matrix.\n \n Arguments\n ---------\n panels -- array of panels.\n \n Returns\n -------\n A -- NxN matrix (N is the number of panels).\n \"\"\"\n N = len(panels)\n A = numpy.empty((N, N), dtype=float)\n #numpy.fill_diagonal(A, 0.5)\n \n for i, p_i in enumerate(panels): #target nodes\n for j, p_j in enumerate(panels): #BE source\n #if i != j: ###Matrix construction\n if i>=0 and i<Nbd or i>=3*Nbd and i<4*Nbd: \n A[i,j] = -p_j.sinalpha*InflueceU(p_i.xc, p_i.yc, p_j)+p_j.cosalpha*InflueceV(p_i.xc, p_i.yc, p_j)\n #A[i,j] = InflueceP(p_i.xc, p_i.yc, p_j)\n if i>=Nbd and i<2*Nbd or i>=2*Nbd and i<3*Nbd: \n A[i,j] = -p_j.sinalpha*InflueceU(p_i.xc, p_i.yc, p_j)+p_j.cosalpha*InflueceV(p_i.xc, p_i.yc, p_j)\n #A[i,j] = InflueceP(p_i.xc, p_i.yc, p_j)\n\n return A\n\ndef build_rhs(panels):\n \"\"\"Builds the RHS of the linear system.\n \n Arguments\n ---------\n panels -- array of panels.\n \n Returns\n -------\n b -- 1D array ((N+1)x1, N is the number of panels).\n \"\"\"\n b = numpy.empty(len(panels), dtype=float)\n \n \n for i, panel in enumerate(panels):\n V_well=( -panel.sinalpha*Qwell_1*InflueceU_W(panel.xc, panel.yc, wells[0])+panel.cosalpha*Qwell_1*InflueceV_W(panel.xc, panel.yc, wells[0]) )\n if i>=0 and i<Nbd: \n b[i]=0+V_well\n #b[i]=4000\n #b[i]=84\n if i>=Nbd and i<2*Nbd:\n b[i]=-V_well\n #b[i]=-42\n if i>=2*Nbd and i<3*Nbd: \n b[i]=-V_well\n #b[i]=-42\n if 
i>=3*Nbd and i<4*Nbd:\n b[i]=0+V_well\n #b[i]=84\n return b\n\n#Qwell_1=300 #Flow rate of well 1\n#Boundary_V=-227 #boundary velocity ft/day\n\nA = build_matrix(panels) # computes the singularity matrix\nb = build_rhs(panels) # computes the freestream RHS\n\n# solves the linear system\nQ = numpy.linalg.solve(A, b)\n\nfor i, panel in enumerate(panels):\n panel.Q = Q[i]", "Plot results", "#Visulize the pressure and velocity field\n\n#Define meshgrid\nNx, Ny = 50, 50 # number of points in the x and y directions\nx_start, x_end = -0.01, 1.01 # x-direction boundaries\ny_start, y_end = -0.01, 1.01 # y-direction boundaries\nx = numpy.linspace(x_start, x_end, Nx) # computes a 1D-array for x\ny = numpy.linspace(y_start, y_end, Ny) # computes a 1D-array for y\nX, Y = numpy.meshgrid(x, y) # generates a mesh grid\n\n#Calculate the velocity and pressure field\np = numpy.empty((Nx, Ny), dtype=float)\nu = numpy.empty((Nx, Ny), dtype=float)\nv = numpy.empty((Nx, Ny), dtype=float)\n\n#for i, panel in enumerate(panels):\n #panel.Q = 0.\n\n#panels[0].Q=100\n#panels[5].Q=100\n#Qwell_1=400\n\n\nfor i in range(Nx):\n for j in range(Ny):\n p[i,j] =sum([p.Q*InflueceP(X[i,j], Y[i,j], p) for p in panels])+Qwell_1*InflueceP_W(X[i,j], Y[i,j], wells[0])\n u[i,j] =sum([p.Q*InflueceU(X[i,j], Y[i,j], p) for p in panels])+Qwell_1*InflueceU_W(X[i,j], Y[i,j], wells[0])\n v[i,j] =sum([p.Q*InflueceV(X[i,j], Y[i,j], p) for p in panels])+Qwell_1*InflueceV_W(X[i,j], Y[i,j], wells[0])\n #p[i,j] =sum([p.Q*InflueceP(X[i,j], Y[i,j], p) for p in panels])\n #u[i,j] =sum([p.Q*InflueceU(X[i,j], Y[i,j], p) for p in panels])\n #v[i,j] =sum([p.Q*InflueceV(X[i,j], Y[i,j], p) for p in panels])\n #p[i,j] =Qwell_1*InflueceP_W(X[i,j], Y[i,j], wells[0])\n #u[i,j] =Qwell_1*InflueceU_W(X[i,j], Y[i,j], wells[0])\n #v[i,j] =Qwell_1*InflueceV_W(X[i,j], Y[i,j], wells[0])\n\n# plots the streamlines\n%matplotlib inline\n\nsize = 6\npyplot.figure(figsize=(size, size))\npyplot.grid(True)\npyplot.title('Streamline field')\npyplot.xlabel('x', fontsize=16)\npyplot.ylabel('y', fontsize=16)\npyplot.xlim(-0.2, 1.2)\npyplot.ylim(-0.2, 1.2)\n\n\npyplot.plot(numpy.append([panel.xa for panel in panels], panels[0].xa), \n numpy.append([panel.ya for panel in panels], panels[0].ya), \n linestyle='-', linewidth=1, marker='o', markersize=6, color='#CD2305');\nstream =pyplot.streamplot(X, Y, u, v,density=2, linewidth=1, arrowsize=1, arrowstyle='->') #streamline\n#cbar=pyplot.colorbar(orientation='vertical')\n\n#equipotential=pyplot.contourf(X, Y, p1, extend='both')\n\nsize = 7\npyplot.figure(figsize=(size, size-1))\npyplot.title('Pressure field')\npyplot.xlabel('x', fontsize=16)\npyplot.ylabel('y', fontsize=16)\npyplot.xlim(0, 1)\npyplot.ylim(0, 1)\n\npyplot.contour(X, Y, p, 15, linewidths=0.5, colors='k')\npyplot.contourf(X, Y, p, 15, cmap='rainbow',\n vmax=abs(p).max(), vmin=-abs(p).max())\npyplot.colorbar() # draw colorbar\n\nsize = 7\npyplot.figure(figsize=(size, size-1))\npyplot.title('Total Velocity field')\npyplot.xlabel('x', fontsize=16)\npyplot.ylabel('y', fontsize=16)\npyplot.xlim(0, 1)\npyplot.ylim(0, 1)\n\nVtotal= numpy.sqrt(u**2+v**2)\n#Vtotal= numpy.abs(v)\npyplot.contour(X, Y, Vtotal, 15, linewidths=0.5, colors='k')\npyplot.contourf(X, Y, Vtotal, 15, cmap='rainbow')\n #vmax=50, vmin=0)\npyplot.colorbar() # draw colorbar\n\npyplot.title('Darcy velocity on the outflow boundary, x component (ft/day)')\npyplot.xlabel('x', fontsize=16)\npyplot.ylabel('y', fontsize=16)\n\npyplot.plot(y, u[49,:], '--', linewidth=2)\npyplot.plot(9.8425+y, u[:,49], '--', 
linewidth=2)\nu[:,49]\n\npyplot.title('Darcy velocity on the outflow boundary, y component (ft/day)')\n\npyplot.plot(y, v[:,49], '--', linewidth=2)\npyplot.plot(9.8425+y, v[49,:], '--', linewidth=2)\nv[49,:]" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
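The BEM notebook above builds its pressure field by superposing panel and well (line-source) influences. As a minimal, self-contained sketch of just the well term quoted in its equations (P = dp*ln[(x-xw)^2 + (kx/ky)*(y-yw)^2]*Qw + Pavg, with dp = -70.6*mu/(h*sqrt(kx*ky)) and kr taken as kx/ky to match that form), the snippet below assumes illustrative values for mu, h, kx, ky and two hypothetical wells; it is not the notebook's panel solution.

```python
import numpy as np

# Illustrative reservoir properties (assumed values, not taken from the notebook)
miu, h = 1.0, 50.0      # viscosity (cp), thickness (ft)
kx, ky = 100.0, 25.0    # permeabilities (md)
kr = kx / ky            # anisotropy ratio, as used in the quoted ln[...] term
p_avg = 4000.0          # background/average pressure (psi)

# Hypothetical wells: (xw, yw, rate); positive rate = producer-style source term
wells = [(0.3, 0.4, 300.0), (0.7, 0.6, -150.0)]

def well_pressure(x, y):
    """Superpose the line-source pressure contribution of every well at (x, y)."""
    dp = -70.6 * miu / (h * np.sqrt(kx * ky))
    p = p_avg
    for xw, yw, q in wells:
        p += q * dp * np.log((x - xw) ** 2 + kr * (y - yw) ** 2)
    return p

# Evaluate on a small grid, mirroring the meshgrid loop in the notebook
xs = np.linspace(0.0, 1.0, 5)
ys = np.linspace(0.0, 1.0, 5)
P = np.array([[well_pressure(x, y) for x in xs] for y in ys])
print(np.round(P, 1))
```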
Kappa-Dev/ReGraph
examples/Tutorial_graph_audit.ipynb
mit
[ "Audit trails for graph objects in ReGraph (aka versioning)\nReGraph implements a framework for the version control (VC) of graph transformations\nThe data structure VersionedGraph allows storing the history of transformations of a graph object and perform the following VC operations:\n\nRewrite: perform a rewriting of the object with a commit to the revision history\nBranch: create a new branch (with a diverged version of the graph object)\nMerge branches: merge branches\nRollback: rollback to a point in the history of transformations", "from regraph import NXGraph\nfrom regraph.audit import VersionedGraph\nfrom regraph.rules import Rule\nfrom regraph import print_graph, plot_rule, plot_graph", "Create a graph and pass it to the VersionedGraph wrapper that will take care of the version control.", "graph_obj = NXGraph()\ng = VersionedGraph(graph_obj)", "Now let's create a rule that adds to the graph two nodes connected with an edge and apply it. If we want the changes to be committed to the version control, we rewrite through the rewrite method of a VersionedGraph object.", "rule = Rule.from_transform(NXGraph())\nrule.inject_add_node(\"a\")\nrule.inject_add_node(\"b\")\nrule.inject_add_edge(\"a\", \"b\")\n\nrhs_instance, _ = g.rewrite(rule, {}, message=\"Add a -> b\")\nplot_graph(g.graph)", "We create a new branch called \"branch\"", "branch_commit = g.branch(\"branch\")\n\nprint(\"Branches: \", g.branches())\nprint(\"Current branch '{}'\".format(g.current_branch()))", "Apply a rule that clones the node 'b' to the current version of the graph (branch 'branch')", "pattern = NXGraph()\npattern.add_node(\"b\")\nrule = Rule.from_transform(pattern)\nrule.inject_clone_node(\"b\")\nplot_rule(rule)\n\nrhs_instance, commit_id = g.rewrite(rule, {\"b\": rhs_instance[\"b\"]}, message=\"Clone b\")\nplot_graph(g.graph)", "The rewrite method of VersionedGraph returns the RHS instance of the applied rule and the id of the newly created commit corresponding to this rewrite.", "print(\"RHS instance\", rhs_instance)\nprint(\"Commit ID: \", commit_id)", "Switch back to the 'master' branch", "g.switch_branch(\"master\")\nprint(g.current_branch())", "Apply a rule that adds a loop from 'a' to itself, a new node 'c' and connects it with 'a'", "pattern = NXGraph()\npattern.add_node(\"a\")\nrule = Rule.from_transform(pattern)\nrule.inject_add_node(\"c\")\nrule.inject_add_edge(\"c\", \"a\")\nrule.inject_add_edge(\"a\", \"a\")\n\nrhs_instance, _ = g.rewrite(rule, {\"a\": \"a\"}, message=\"Add c and c->a\")\nplot_graph(g.graph)", "Create a new branch 'dev'", "g.branch(\"dev\")", "In this branch remove an edge from 'c' to 'a' and merge two nodes together", "pattern = NXGraph()\npattern.add_node(\"c\")\npattern.add_node(\"a\")\npattern.add_edge(\"c\", \"a\")\nrule = Rule.from_transform(pattern)\nrule.inject_remove_edge(\"c\", \"a\")\nrule.inject_merge_nodes([\"c\", \"a\"])\nplot_rule(rule)\n\ng.rewrite(rule, {\"a\": rhs_instance[\"a\"], \"c\": rhs_instance[\"c\"]}, message=\"Merge c and a\")\nplot_graph(g.graph)", "Switch back to the 'master' branch.", "g.switch_branch(\"master\")", "Apply a rule that clones a node 'a'", "pattern = NXGraph()\npattern.add_node(\"a\")\nrule = Rule.from_transform(pattern)\n_, rhs_clone = rule.inject_clone_node(\"a\")\nrhs_instance, rollback_commit = g.rewrite(rule, {\"a\": rhs_instance[\"a\"]}, message=\"Clone a\")\nplot_graph(g.graph)", "Create a new branch 'test'", "g.branch(\"test\")", "In this branch apply the rule that adds a new node 'd' and connects it with an edge to one of the cloned 
'a' nodes", "pattern = NXGraph()\npattern.add_node(\"a\")\nrule = Rule.from_transform(pattern)\nrule.inject_add_node(\"d\")\nrule.inject_add_edge(\"a\", \"d\")\ng.rewrite(rule, {\"a\": rhs_instance[rhs_clone]}, message=\"Add d -> clone of a\")\nplot_graph(g.graph)", "Switch back to 'master'", "g.switch_branch(\"master\")", "Remove a node 'a'", "pattern = NXGraph()\npattern.add_node(\"a\")\nrule = Rule.from_transform(pattern)\nrule.inject_remove_node(\"a\")\nrhs_instance, _ = g.rewrite(rule, {\"a\": rhs_instance[\"a\"]}, message=\"Remove a\")\nplot_graph(g.graph)", "Merge the branch 'dev' into 'master'", "g.merge_with(\"dev\")\n\nplot_graph(g.graph)", "Merge 'test' into 'master'", "g.merge_with(\"test\")\n\nplot_graph(g.graph)", "We can inspect the version control object in more details and look at its attribute _revision_graph, whose nodes represent the commits and whose edges represent graph deltas between different commits (basically, rewriting rules that constitute commits). Here we can see that on the nodes of the revision graph are stored branch names to which commits belong and user specified commit messages.", "for n, attrs in g._revision_graph.nodes(data=True):\n print(\"Node ID: \", n)\n print(\"Attributes: \")\n print(\"\\t\", attrs)\n\n# Pretty-print the history\ng.print_history()", "Now we can rollback to some previous commit (commit where we first cloned the node 'a')", "g.rollback(rollback_commit)\n\nprint(\"Branches: \", g.branches())\nprint(\"Current branch '{}'\".format(g.current_branch()))\nprint(\"Updated revision graph:\")\ng.print_history()\nprint(\"Current graph object\")\nplot_graph(g.graph)\nprint_graph(g.graph)\n\ng.switch_branch(\"branch\")\n\ng.rollback(branch_commit)\n\ng.print_history()\n\nprint(g._heads)\nplot_graph(g.graph)\n\ng.switch_branch(\"master\")\n\nplot_graph(g.graph)\n\ng.merge_with(\"branch\")\n\nplot_graph(g.graph)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
aerijman/Transcriptional-Activation-Domains
Disorder.ipynb
mit
[ "import numpy as np\nimport os,sys, re\nimport pandas as pd\nfrom IPython.display import Markdown\nimport matplotlib\nmatplotlib.rcParams.update({'font.size': 20})\n%pylab inline\nsys.path.append(os.path.abspath('../libraries/'))\nfrom summary_utils import *", "I have generated iupred results for the whole proteome since we didn't have the scores.", "def read_iupred_results(fileName):\n '''\n function read files containing the scores from iupred into \n the hash results\n INPUT: fileName\n results (hash)\n '''\n results = {}\n f = open(fileName, \"r\")\n while True:\n try:\n k,v = next(f), next(f)\n k = k.strip()[1:]\n results[k] = v.strip().split(',')[:-1]\n except StopIteration:\n break\n f.close()\n return results\n\n\nimport os,re\nfiles = [i for i in os.listdir('../scripts/iupred/') if re.search(\".results$\",i)]\n\ndisorders = {}\nfor i in files:\n d = read_iupred_results('../scripts/iupred/'+i)\n disorders.update(d)", "These are old predictions that include sequences and scores from our deep learning model", "# forgot to add the confidence of secondary_structure_oredictions...\npath = '../scripts/fastas/'\ntmp, tnp = [], []\nfor f in [i for i in os.listdir(path) if i[-11:]==\".output.csv\"]:\n predsName = f + \".predictions.npz\"\n\n df = pd.read_csv(path + f, index_col=0)\n tmp.append(df[['sequence','secondStruct','disorder']])\n nf = np.load(path + predsName)\n tnp.append(nf[nf.files[0]])\n\npredictions = np.hstack(tnp)\ndf = pd.concat(tmp)\n\n# finally join all fields into a single data structure to facilitate further analysis\ndf['predictions'] = predictions", "Here I joined both datasets", "df2 = pd.DataFrame([disorders]).T\nidx = df2.index.intersection(df.index)\ndf2 = df2.loc[idx]\n\ndf = pd.concat([df.loc[idx],df2], axis=1)\ndf.columns = ['sequence', 'secondStruct', 'disorder', 'predictions', 'iupred']\n\ndel(df2)", "Have to define the set of Transcription factors or Nuclear proteins", " ## SGD ##\n# collect data from SGD \nSGD = pd.read_csv('https://downloads.yeastgenome.org/curation/chromosomal_feature/SGD_features.tab', index_col=3, sep='\\t', header=None)\nSGD = SGD[SGD[1]=='ORF'][4]\n\n ## TF ##\n# Steve's list of TFs\n# long list including potential NON-TF\ntf_full = pd.read_csv('../data/TFs.csv')\ntf_full = tf_full['Systematic name'].values\n\n# short list excluding potential False TF\ntf_short = pd.read_csv('../data/TFs_small.csv')\ntf_short = tf_short['Systematic name'].values\n\n ## Nuclear ##\n# Are tf enriched in the Nucleus?\nlocalization = pd.read_csv('../data/localization/proteomesummarylatestversion_localisation.csv', index_col=0)\nX = localization.iloc[:,1] \nnuclear = [i for i in set(X) if re.search(\"nucl\",i)]\nX = pd.DataFrame([1 if i in nuclear else 0 for i in X], index=localization.index, columns=['loc'])\nnuclear = X[X['loc']==1].index\n\n\ntotal_idx = df.index.intersection(X.index)\nnuclear_idx = nuclear.intersection(total_idx)\ntf_full_idx = set(tf_full).intersection(total_idx)\ntf_short_idx = set(tf_short).intersection(total_idx)\n\nprint('{} in tf_full\\n{} in tf_short\\n{} in total\\n{} in nuclear\\n'.format(\n len(tf_full_idx), len(tf_short_idx), len(total_idx), len(nuclear_idx)))", "Predict TADs from the proteome", "# load NN model and weights\nfrom keras.models import model_from_json\n\n# open json model and weights\nwith open(\"../models/deep_model.json\", \"r\") as json_file:\n json_model = json_file.read()\n\ndeep_model = model_from_json(json_model)\ndeep_model.load_weights(\"../models/deep_model.h5\")\n\n# set cutoff to predict TADs in 
the proteome\ncutoff=0.8 \n\nresults = np.zeros(shape=(df.shape[0],4))\nfor n,prot in enumerate(df.predictions):\n results[n] = predict_motif_statistics(prot, cutoff)\n \nresults = pd.DataFrame(results, index=df.index, columns = ['length', 'start_position', 'gral_mean', 'mean_longest_region'])", "In parsed disorder scores there are null values that have to be excluded", "fixed_disorder = []\nfor n,i in enumerate(df.iupred.values):\n i = [t for t in i if t!=\"\"]\n fixed_disorder.append(np.array(i).astype(float))\n \ndf.iupred = fixed_disorder\n\nlenCutoff = 5 # Threshold for defining a potential TAD (more than 5 contiguous residues with score.0.8)\nflanking = 100 # How many points to consider\nbins_tad = 20 # Pure legacy now. It's to show more clearly the TAD in the figure\n\n\n# Build distribution of lengths to use building the null hypothesis\nTADs_idx = results[results.length>lenCutoff].index.dropna().intersection(df.index)\nlengths = np.array([len(i) for i in df.loc[TADs_idx].sequence.values])\nlengths = np.hstack([lengths]*10) # allow for a bigger sampling to build null hypothesis\nnp.random.shuffle(lengths)\n\n# build disorder and helicity vectors\ndis_vector = np.hstack(df.loc[TADs_idx].iupred.values)\ndis_vector = np.hstack([np.ones(flanking), dis_vector, np.ones(flanking)]) # fix \"N\" and \"C\" terminal errors\n\nresult_dis_pre = np.zeros(shape=(len(lengths), flanking))\nresult_dis_tad = np.zeros(shape=(len(lengths), bins_tad))\nresult_dis_post = np.zeros(shape=(len(lengths), flanking))\n\n##########################################\n### Build null hypothesis distributions ##\n##########################################\n\n\n# random start sites\nnp.random.seed(42) # set random seed for reproducibility\nrand_starts = np.random.uniform(low=101,high=len(dis_vector)-1000, size=len(lengths)).astype(int)\n\n# Null Hypothesis disOrder and helIcity\nfor n,(i,j) in enumerate(zip(rand_starts, lengths)):\n result_dis_pre[n] = dis_vector[i-flanking:i]\n result_dis_tad[n] = np.median(dis_vector[i-flanking:i+j-flanking]) \n result_dis_post[n] = dis_vector[i+j-flanking:i+j]\n\ndis = np.hstack([result_dis_pre, result_dis_tad, result_dis_post]).T\nmedians_dis_random = np.array([np.percentile(i,50) for i in dis])\n_25_dis_random = np.array([np.percentile(i,25) for i in dis])\n_75_dis_random = np.array([np.percentile(i,75) for i in dis])\n\nlenCutoff = 5 # Threshold for defining a potential TAD (more than 5 contiguous residues with score.0.8)\nflanking = 100 # How many points to consider\nbins_tad = 20 # Pure legacy now. 
It's to show more clearly the TAD in the figure\n\n\nTADs = results[results.length>lenCutoff]\n\n# use only the nuclear TADs\nTADs = TADs.loc[TADs.index.intersection(tf_short_idx)]\n\nresult_dis_pre = np.zeros(shape=(len(TADs), flanking))\nresult_dis_tad = np.zeros(shape=(len(TADs), bins_tad))\nresult_dis_post = np.zeros(shape=(len(TADs), flanking))\n\n# Null Hypothesis disOrder and helIcity\nfor n,(i,j,k) in enumerate(zip(TADs.start_position.values.astype(int), \n TADs.length.values.astype(int), \n TADs.index.dropna())):\n \n dis = np.array(df.iupred.loc[k]).astype(float)\n dis = np.hstack([np.ones(flanking), dis, np.ones(flanking)]) # fix \"N\" and \"C\" terminal errors\n hel = df.secondStruct.loc[k]\n \n i +=100 # part of fixing the \"N\" terminal\n \n result_dis_pre[n] = dis[i-flanking:i]\n result_dis_tad[n] = np.median(dis[i:i+j])\n result_dis_post[n] = dis[i+j:i+j+flanking]\n\ndis = np.hstack([result_dis_pre, result_dis_tad, result_dis_post]).T\nmedians_dis_tad = np.array([np.percentile(i,50) for i in dis])\n_25_dis_tad = np.array([np.percentile(i,25) for i in dis])\n_75_dis_tad = np.array([np.percentile(i,75) for i in dis])\n\ndef plotit(ax, medians, _25, _75, title):\n ax.fill_between( np.arange(len(_25)),_75,_25, alpha=0.3, color='gray')\n ax.plot(medians, label=\"50%\", lw=3, c='k')\n ax.set_xticks([100,120])\n ax.set_xticklabels([\"\", \"\"])\n ax.text(40, -0.1, \"pre-tad\")\n ax.text(100, -0.1, \"TAD\")\n ax.text(150, -0.1, \"post-TAD\")\n ax.set_title(title)\n #ax.set_ylim(-0.01,1)\n\nplt.figure(figsize=(11,5))\nax = plt.subplot(1,2,1)\nplotit(ax,medians_dis_tad, _25_dis_tad, _75_dis_tad, 'tads')\nplt.ylim(0,1)\nax = plt.subplot(1,2,2)\nplotit(ax,medians_dis_random, _25_dis_random, _75_dis_random, 'random')\nplt.ylim(0,1)" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
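The Disorder notebook above keeps proteins whose per-residue predictions contain a contiguous run of more than 5 residues scoring above 0.8, via predict_motif_statistics imported from its own summary_utils library, which is not shown in this dump. The following is only a plausible stand-in illustrating that kind of statistic (length, start and mean scores of the longest high-scoring run), not the authors' implementation.

```python
import numpy as np

def longest_high_score_region(scores, cutoff=0.8):
    """Return (length, start, overall mean, mean of longest region) for the
    longest contiguous stretch of scores above `cutoff`.

    Illustrative stand-in for the notebook's predict_motif_statistics helper."""
    scores = np.asarray(scores, dtype=float)
    best_len, best_start = 0, 0
    run_len, run_start = 0, 0
    for i, s in enumerate(scores):
        if s > cutoff:
            if run_len == 0:
                run_start = i
            run_len += 1
            if run_len > best_len:
                best_len, best_start = run_len, run_start
        else:
            run_len = 0
    region_mean = scores[best_start:best_start + best_len].mean() if best_len else 0.0
    return best_len, best_start, scores.mean(), region_mean

# Toy per-residue score track with one 7-residue stretch above the cutoff
track = [0.1, 0.2, 0.85, 0.9, 0.95, 0.92, 0.88, 0.91, 0.87, 0.3, 0.2]
print(longest_high_score_region(track))   # length 7, starting at index 2
```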
scienceguyrob/Docker
Images/music/samples/libROSA/LibROSA_Demo.ipynb
gpl-3.0
[ "Librosa demo\nThis notebook demonstrates some of the basic functionality of librosa version 0.4.\nFollowing through this example, you'll learn how to:\n\nLoad audio input\nCompute mel spectrogram, MFCC, delta features, chroma\nLocate beat events\nCompute beat-synchronous features\nDisplay features\nSave beat tracker output to a CSV file", "from __future__ import print_function\n\n# We'll need numpy for some mathematical operations\nimport numpy as np\n\n\n# matplotlib for displaying the output\nimport matplotlib.pyplot as plt\nimport matplotlib.style as ms\nms.use('seaborn-muted')\n%matplotlib inline\n\n\n# and IPython.display for audio output\nimport IPython.display\n\n\n# Librosa for audio\nimport librosa\n# And the display module for visualization\nimport librosa.display\n\naudio_path = librosa.util.example_audio_file()\n\n# or uncomment the line below and point it at your favorite song:\n#\n# audio_path = '/path/to/your/favorite/song.mp3'\n\ny, sr = librosa.load(audio_path)", "By default, librosa will resample the signal to 22050Hz.\nYou can change this behavior by saying:\nlibrosa.load(audio_path, sr=44100)\nto resample at 44.1KHz, or\nlibrosa.load(audio_path, sr=None)\nto disable resampling.\nMel spectrogram\nThis first step will show how to compute a Mel spectrogram from an audio waveform.", "# Let's make and display a mel-scaled power (energy-squared) spectrogram\nS = librosa.feature.melspectrogram(y, sr=sr, n_mels=128)\n\n# Convert to log scale (dB). We'll use the peak power as reference.\nlog_S = librosa.logamplitude(S, ref_power=np.max)\n\n# Make a new figure\nplt.figure(figsize=(12,4))\n\n# Display the spectrogram on a mel scale\n# sample rate and hop length parameters are used to render the time axis\nlibrosa.display.specshow(log_S, sr=sr, x_axis='time', y_axis='mel')\n\n# Put a descriptive title on the plot\nplt.title('mel power spectrogram')\n\n# draw a color bar\nplt.colorbar(format='%+02.0f dB')\n\n# Make the figure layout compact\nplt.tight_layout()", "Harmonic-percussive source separation\nBefore doing any signal analysis, let's pull apart the harmonic and percussive components of the audio. This is pretty easy to do with the effects module.", "y_harmonic, y_percussive = librosa.effects.hpss(y)\n\n# What do the spectrograms look like?\n# Let's make and display a mel-scaled power (energy-squared) spectrogram\nS_harmonic = librosa.feature.melspectrogram(y_harmonic, sr=sr)\nS_percussive = librosa.feature.melspectrogram(y_percussive, sr=sr)\n\n# Convert to log scale (dB). We'll use the peak power as reference.\nlog_Sh = librosa.logamplitude(S_harmonic, ref_power=np.max)\nlog_Sp = librosa.logamplitude(S_percussive, ref_power=np.max)\n\n# Make a new figure\nplt.figure(figsize=(12,6))\n\nplt.subplot(2,1,1)\n# Display the spectrogram on a mel scale\nlibrosa.display.specshow(log_Sh, sr=sr, y_axis='mel')\n\n# Put a descriptive title on the plot\nplt.title('mel power spectrogram (Harmonic)')\n\n# draw a color bar\nplt.colorbar(format='%+02.0f dB')\n\nplt.subplot(2,1,2)\nlibrosa.display.specshow(log_Sp, sr=sr, x_axis='time', y_axis='mel')\n\n# Put a descriptive title on the plot\nplt.title('mel power spectrogram (Percussive)')\n\n# draw a color bar\nplt.colorbar(format='%+02.0f dB')\n\n# Make the figure layout compact\nplt.tight_layout()", "Chromagram\nNext, we'll extract Chroma features to represent pitch class information.", "# We'll use a CQT-based chromagram here. 
An STFT-based implementation also exists in chroma_cqt()\n# We'll use the harmonic component to avoid pollution from transients\nC = librosa.feature.chroma_cqt(y=y_harmonic, sr=sr)\n\n# Make a new figure\nplt.figure(figsize=(12,4))\n\n# Display the chromagram: the energy in each chromatic pitch class as a function of time\n# To make sure that the colors span the full range of chroma values, set vmin and vmax\nlibrosa.display.specshow(C, sr=sr, x_axis='time', y_axis='chroma', vmin=0, vmax=1)\n\nplt.title('Chromagram')\nplt.colorbar()\n\nplt.tight_layout()", "MFCC\nMel-frequency cepstral coefficients are commonly used to represent texture or timbre of sound.", "# Next, we'll extract the top 13 Mel-frequency cepstral coefficients (MFCCs)\nmfcc = librosa.feature.mfcc(S=log_S, n_mfcc=13)\n\n# Let's pad on the first and second deltas while we're at it\ndelta_mfcc = librosa.feature.delta(mfcc)\ndelta2_mfcc = librosa.feature.delta(mfcc, order=2)\n\n# How do they look? We'll show each in its own subplot\nplt.figure(figsize=(12, 6))\n\nplt.subplot(3,1,1)\nlibrosa.display.specshow(mfcc)\nplt.ylabel('MFCC')\nplt.colorbar()\n\nplt.subplot(3,1,2)\nlibrosa.display.specshow(delta_mfcc)\nplt.ylabel('MFCC-$\\Delta$')\nplt.colorbar()\n\nplt.subplot(3,1,3)\nlibrosa.display.specshow(delta2_mfcc, sr=sr, x_axis='time')\nplt.ylabel('MFCC-$\\Delta^2$')\nplt.colorbar()\n\nplt.tight_layout()\n\n# For future use, we'll stack these together into one matrix\nM = np.vstack([mfcc, delta_mfcc, delta2_mfcc])", "Beat tracking\nThe beat tracker returns an estimate of the tempo (in beats per minute) and frame indices of beat events.\nThe input can be either an audio time series (as we do below), or an onset strength envelope as calculated by librosa.onset.onset_strength().", "# Now, let's run the beat tracker.\n# We'll use the percussive component for this part\nplt.figure(figsize=(12, 6))\ntempo, beats = librosa.beat.beat_track(y=y_percussive, sr=sr)\n\n# Let's re-draw the spectrogram, but this time, overlay the detected beats\nplt.figure(figsize=(12,4))\nlibrosa.display.specshow(log_S, sr=sr, x_axis='time', y_axis='mel')\n\n# Let's draw transparent lines over the beat frames\nplt.vlines(librosa.frames_to_time(beats),\n 1, 0.5 * sr,\n colors='w', linestyles='-', linewidth=2, alpha=0.5)\n\nplt.axis('tight')\n\nplt.colorbar(format='%+02.0f dB')\n\nplt.tight_layout()", "By default, the beat tracker will trim away any leading or trailing beats that don't appear strong enough. 
\nTo disable this behavior, call beat_track() with trim=False.", "print('Estimated tempo: %.2f BPM' % tempo)\n\nprint('First 5 beat frames: ', beats[:5])\n\n# Frame numbers are great and all, but when do those beats occur?\nprint('First 5 beat times: ', librosa.frames_to_time(beats[:5], sr=sr))\n\n# We could also get frame numbers from times by librosa.time_to_frames()", "Beat-synchronous feature aggregation\nOnce we've located the beat events, we can use them to summarize the feature content of each beat.\nThis can be useful for reducing data dimensionality, and removing transient noise from the features.", "# feature.sync will summarize each beat event by the mean feature vector within that beat\n\nM_sync = librosa.util.sync(M, beats)\n\nplt.figure(figsize=(12,6))\n\n# Let's plot the original and beat-synchronous features against each other\nplt.subplot(2,1,1)\nlibrosa.display.specshow(M)\nplt.title('MFCC-$\\Delta$-$\\Delta^2$')\n\n# We can also use pyplot *ticks directly\n# Let's mark off the raw MFCC and the delta features\nplt.yticks(np.arange(0, M.shape[0], 13), ['MFCC', '$\\Delta$', '$\\Delta^2$'])\n\nplt.colorbar()\n\nplt.subplot(2,1,2)\n# librosa can generate axis ticks from arbitrary timestamps and beat events also\nlibrosa.display.specshow(M_sync, x_axis='time',\n x_coords=librosa.frames_to_time(librosa.util.fix_frames(beats)))\n\nplt.yticks(np.arange(0, M_sync.shape[0], 13), ['MFCC', '$\\Delta$', '$\\Delta^2$']) \nplt.title('Beat-synchronous MFCC-$\\Delta$-$\\Delta^2$')\nplt.colorbar()\n\nplt.tight_layout()\n\n# Beat synchronization is flexible.\n# Instead of computing the mean delta-MFCC within each beat, let's do beat-synchronous chroma\n# We can replace the mean with any statistical aggregation function, such as min, max, or median.\n\nC_sync = librosa.util.sync(C, beats, aggregate=np.median)\n\nplt.figure(figsize=(12,6))\n\nplt.subplot(2, 1, 1)\nlibrosa.display.specshow(C, sr=sr, y_axis='chroma', vmin=0.0, vmax=1.0, x_axis='time')\n\nplt.title('Chroma')\nplt.colorbar()\n\nplt.subplot(2, 1, 2)\nlibrosa.display.specshow(C_sync, y_axis='chroma', vmin=0.0, vmax=1.0, x_axis='time', \n x_coords=librosa.frames_to_time(librosa.util.fix_frames(beats)))\n\n\nplt.title('Beat-synchronous Chroma (median aggregation)')\n\nplt.colorbar()\nplt.tight_layout()" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
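In the librosa demo above, librosa.util.sync reduces frame-level features to one summary vector per beat interval. Conceptually this is just an aggregation between consecutive beat frames; the numpy-only sketch below illustrates that idea and is deliberately simpler than librosa's implementation (which also handles frame validation, padding and arbitrary index types).

```python
import numpy as np

def sync_features(features, beat_frames, aggregate=np.mean):
    """Aggregate a (n_features, n_frames) matrix between consecutive beat frames.

    Simplified illustration of what beat-synchronous aggregation does."""
    n_frames = features.shape[1]
    # Include the start and end so every frame falls inside some interval
    boundaries = np.concatenate(([0], np.asarray(beat_frames), [n_frames]))
    boundaries = np.unique(np.clip(boundaries, 0, n_frames))
    segments = []
    for start, stop in zip(boundaries[:-1], boundaries[1:]):
        segments.append(aggregate(features[:, start:stop], axis=1))
    return np.stack(segments, axis=1)

# Toy example: 3 "MFCC" rows over 12 frames, with beats at frames 4 and 8
rng = np.random.default_rng(0)
M = rng.normal(size=(3, 12))
M_sync = sync_features(M, beat_frames=[4, 8], aggregate=np.median)
print(M.shape, '->', M_sync.shape)   # (3, 12) -> (3, 3)
```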
ivastar/clear
notebooks/grizli/grizli_retrieve_and_prep.ipynb
mit
[ "This notebook shows how to use grizli to\nretrieve and pre-process raw CLEAR G102/F105W and 3D-HST G141/F140W observations for a single CLEAR pointing (GS1).\nThese series of notebooks draw heavily from Gabe Brammer's existing grizli notebooks, which are available at https://github.com/gbrammer/grizli/tree/master/examples, but with examples specific for the CLEAR survey.", "import grizli\n\ntry: \n from mastquery import query, overlaps\n use_mquery = True\nexcept: \n from hsaquery import query, overlaps\n use_mquery = False\n\nimport os\nimport numpy as np\nfrom IPython.display import Image\nfrom grizli.pipeline import auto_script\nimport glob\nfrom glob import glob\nimport astropy\nfrom grizli.prep import process_direct_grism_visit\nfrom astropy.io import fits", "<h1><center>Initialize Directories</center></h1>\n\nThe following paths need to be changed for your filesystem. [HOME_PATH] is where the raw data, reduced data, and grizli outputs will be stored. [PATH_TO_CATS] is where the catalogs are stored and must include the following:\n ### reference mosaic image (e.g., goodss-F105W-astrodrizzle-v4.3_drz_sci.fits)\n ### segmentation map (e.g., Goods_S_plus_seg.fits)\n ### source catalog (e.g., goodss-F105W-astrodrizzle-v4.3_drz_sub_plus.cat)\n ### radec_catalog (e.g., goodsS_radec.cat)\n ### 3DHST Eazy Catalogs (e.g., goodss_3dhst.v4.1.cats/*)\n\nthe [PATH_TO_CATS] files are available on the team archive: https://archive.stsci.edu/pub/clear_team/INCOMING/for_hackday/", "field = 'GS1'\nref_filter = 'F105W'\n\nHOME_PATH = '/Users/rsimons/Desktop/clear/for_hackday/%s'%field\nPATH_TO_CATS = '/Users/rsimons/Desktop/clear/Catalogs'\n\n# Create [HOME_PATH] and [HOME_PATH]/query_results directories if they do not already exist\nif not os.path.isdir(HOME_PATH): os.system('mkdir %s'%HOME_PATH)\nif not os.path.isdir(HOME_PATH + '/query_results'): os.system('mkdir %s/query_results'%HOME_PATH)\n\n# Move to the [HOME_PATH] directory\nos.chdir(HOME_PATH)\n", "<h1><center>Query MAST</center></h1>\n\nRun an initial query for all raw G102 data in the MAST archive from the proposal ID 14227 with a target name that includes the phrase 'GS1' (i.e., GS1 pointing of CLEAR).", "# proposal_id = [14227] is CLEAR\nparent = query.run_query(box = None, proposal_id = [14227], instruments=['WFC3/IR', 'ACS/WFC'], \n filters = ['G102'], target_name = 'GS1')", "Next, find all G102 and G141 observations that overlap with the pointings found in the initial query.", "# Find all G102 and G141 observations overlapping the parent query in the archive\ntabs = overlaps.find_overlaps(parent, buffer_arcmin=0.01, \n filters=['G102', 'G141'], \n instruments=['WFC3/IR','WFC3/UVIS','ACS/WFC'], close=False)\n\nfootprint_fits_file = glob('*footprint.fits')[0]\njtargname = footprint_fits_file.strip('_footprint.fits')\n\n\n# A list of the target names\nfp_fits = fits.open(footprint_fits_file)\noverlapping_target_names = set(fp_fits[1].data['target'])\n\n\n# Move the footprint figure files to $HOME_PATH/query_results/ so that they are not overwritten\nos.system('cp %s/%s_footprint.fits %s/query_results/%s_footprint_%s.fits'%(HOME_PATH, jtargname, HOME_PATH, jtargname, 'all_G102_G141'))\nos.system('cp %s/%s_footprint.npy %s/query_results/%s_footprint_%s.npy'%(HOME_PATH, jtargname, HOME_PATH, jtargname, 'all_G102_G141'))\nos.system('cp %s/%s_footprint.pdf %s/query_results/%s_footprint_%s.pdf'%(HOME_PATH, jtargname, HOME_PATH, jtargname, 'all_G102_G141'))\nos.system('cp %s/%s_info.dat %s/query_results/%s_info_%s.dat'%(HOME_PATH, jtargname, 
HOME_PATH, jtargname, 'all_G102_G141'))\n\n\n# Table summary of query\ntabs[0]", "<h1><center>Retrieve raw data from MAST</center></h1>\n\nWe now have a list of G102 and G141 observations in the MAST archive that overlap with the GS1 pointing of CLEAR.\nFor each, retrieve all associated RAW grism G102/G141 and direct imaging F098M/F105W/F125W/F140W data from MAST.\n**For GS1, the retrieval step takes about 30 minutes to run and requires 1.9 GB of space.", "# Loop targ_name by targ_name\nfor t, targ_name in enumerate(overlapping_target_names):\n if use_mquery:\n extra = {'target_name':targ_name}\n else:\n extra = query.DEFAULT_EXTRA.copy()\n extra += [\"TARGET.TARGET_NAME LIKE '%s'\"%targ_name]\n \n # search the MAST archive again, this time looking for \n # all grism and imaging observations with the given target name\n tabs = overlaps.find_overlaps(parent, buffer_arcmin=0.01, \n filters=['G102', 'G141', 'F098M', 'F105W', 'F125W', 'F140W'], \n instruments=['WFC3/IR','WFC3/UVIS','ACS/WFC'], \n extra=extra, close=False)\n if False:\n # retrieve raw data from MAST\n s3_status = os.system('aws s3 ls s3://stpubdata --request-payer requester')\n auto_script.fetch_files(field_root=jtargname, HOME_PATH=HOME_PATH, remove_bad=True, \n reprocess_parallel=True, s3_sync=(s3_status == 0))\n\n # Move the figure files to $HOME_PATH/query_results/ so that they are not overwritten\n os.system('mv %s/%s_footprint.fits %s/query_results/%s_footprint_%s.fits'%(HOME_PATH, jtargname, HOME_PATH, jtargname, targ_name))\n os.system('mv %s/%s_footprint.npy %s/query_results/%s_footprint_%s.npy'%(HOME_PATH, jtargname, HOME_PATH, jtargname, targ_name))\n os.system('mv %s/%s_footprint.pdf %s/query_results/%s_footprint_%s.pdf'%(HOME_PATH, jtargname, HOME_PATH, jtargname, targ_name))\n os.system('mv %s/%s_info.dat %s/query_results/%s_info_%s.dat'%(HOME_PATH, jtargname, HOME_PATH, jtargname, targ_name))\n\n os.chdir(HOME_PATH)", "The following directories are created from auto_script.fetch_files:\n [HOME_PATH]/j0333m2742\n [HOME_PATH]/j0333m2742/RAW\n [HOME_PATH]/j0333m2742/Prep\n [HOME_PATH]/j0333m2742/Extractions\n [HOME_PATH]/j0333m2742/Persistance\n\nRAW/ is where the downloaded raw and pre-processed data are stored.\nPrep/ is the general working directory for processing and analyses.", "PATH_TO_RAW = glob(HOME_PATH + '/*/RAW')[0]\nPATH_TO_PREP = glob(HOME_PATH + '/*/PREP')[0]\n\n# Move to the Prep directory\nos.chdir(PATH_TO_PREP)", "Extract exposure information from downloaded flt files", "# Find all pre-processed flt files in the RAW directory\nfiles = glob('%s/*flt.fits'%PATH_TO_RAW)\n# Generate a table from the headers of the flt fits files\ninfo = grizli.utils.get_flt_info(files)", "The info table includes relevant exposure details: e.g., filter, instrument, targetname, PA, RA, DEC.\nPrint the first three rows of the table.", "info[0:3]", "Next, we use grizli to parse the headers of the downloaded flt files in RAW/ and sort them into \"visits\". 
Each visit represents a specific pointing + orient + filter and contains the list of its associated exposure files.", "# Parse the table and group exposures into associated \"visits\"\nvisits, filters = grizli.utils.parse_flt_files(info=info, uniquename=True)\n\n# an F140W imaging visit\nprint ('\\n\\n visits[0]\\n\\t product: ', visits[0]['product'], '\\n\\t files: ', visits[0]['files'])\n\n# a g141 grism visit\nprint ('\\n\\n visits[1]\\n\\t product: ', visits[1]['product'], '\\n\\t files: ', visits[1]['files'])\n\n", "<h1><center>Pre-process raw data</center></h1>\n\nWe are now ready to pre-process the raw data we downloaded from MAST.\nprocess_direct_grism_visit performs all of the necessary pre-processing:\n\nCopying the flt files from Raw/ to Prep/\nAstrometric registration/correction\nGrism sky background subtraction and flat-fielding\nExtract visit-level catalogs and segmentation images from the direct imaging\n\nThe final products are:\n\n\nAligned, background-subtracted FLTS\n\n\nDrizzled mosaics of direct and grism images", "if 'N' in field.upper(): radec_catalog = PATH_TO_CATS + '/goodsN_radec.cat'\nif 'S' in field.upper(): radec_catalog = PATH_TO_CATS + '/goodsS_radec.cat' \n\nproduct_names = np.array([visit['product'] for visit in visits])\nfilter_names = np.array([visit['product'].split('-')[-1] for visit in visits])\nbasenames = np.array([visit['product'].split('.')[0]+'.0' for visit in visits])\n\n# First process the G102/F105W visits, then G141/F140W\nfor ref_grism, ref_filter in [('G102', 'F105W'), ('G141', 'F140W')]:\n print ('Processing %s + %s visits'%(ref_grism, ref_filter))\n for v, visit in enumerate(visits):\n product = product_names[v]\n basename = basenames[v]\n filt1 = filter_names[v]\n field_in_contest = basename.split('-')[0]\n if (ref_filter.lower() == filt1.lower()):\n #Found a direct image, now search for grism counterpart\n grism_index= np.where((basenames == basename) & (filter_names == ref_grism.lower()))[0][0]\n if True:\n # run the pre-process script\n status = process_direct_grism_visit(direct = visit,\n grism = visits[grism_index],\n radec = radec_catalog, \n align_mag_limits = [14, 23])\n\n", "<h1><center>Examining outputs from the pre-processing steps</center></h1>\n\nAstrometric Registration", "os.chdir(PATH_TO_PREP)\n!cat gs1-cxt-09-227.0-f105w_wcs.log\nImage(filename = PATH_TO_PREP + '/gs1-cxt-09-227.0-f105w_wcs.png', width = 600, height = 600)", "Grism sky subtraction", "os.chdir(PATH_TO_PREP)\nImage(filename = PATH_TO_PREP + '/gs1-cxt-09-227.0-g102_column.png', width = 600, height = 600)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
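In the grizli notebook above, grizli.utils.parse_flt_files groups exposures into "visits" keyed on pointing, orient and filter. The real logic lives inside grizli; as a rough conceptual illustration only, with hypothetical metadata and placeholder file names, grouping exposures by such keys could look like this:

```python
from collections import defaultdict

# Hypothetical exposure metadata -- illustrative only, not grizli's data model
exposures = [
    {'target': 'gs1-cxt-09', 'filter': 'f105w', 'pa_v3': 227.0, 'file': 'exp_a_flt.fits'},
    {'target': 'gs1-cxt-09', 'filter': 'f105w', 'pa_v3': 227.0, 'file': 'exp_b_flt.fits'},
    {'target': 'gs1-cxt-09', 'filter': 'g102',  'pa_v3': 227.0, 'file': 'exp_c_flt.fits'},
    {'target': 'gs1-cxt-09', 'filter': 'g102',  'pa_v3': 227.0, 'file': 'exp_d_flt.fits'},
]

# One "visit" per unique (target, PA, filter) combination
visits = defaultdict(list)
for exp in exposures:
    key = (exp['target'], exp['pa_v3'], exp['filter'])
    visits[key].append(exp['file'])

for (target, pa, filt), files in sorted(visits.items()):
    product = '{0}-{1}-{2}'.format(target, pa, filt)   # e.g. gs1-cxt-09-227.0-f105w
    print(product, files)
```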
ES-DOC/esdoc-jupyterhub
notebooks/nerc/cmip6/models/hadgem3-gc31-hh/toplevel.ipynb
gpl-3.0
[ "ES-DOC CMIP6 Model Properties - Toplevel\nMIP Era: CMIP6\nInstitute: NERC\nSource ID: HADGEM3-GC31-HH\nSub-Topics: Radiative Forcings. \nProperties: 85 (42 required)\nModel descriptions: Model description details\nInitialized From: -- \nNotebook Help: Goto notebook help page\nNotebook Initialised: 2018-02-15 16:54:26\nDocument Setup\nIMPORTANT: to be executed each time you run the notebook", "# DO NOT EDIT ! \nfrom pyesdoc.ipython.model_topic import NotebookOutput \n\n# DO NOT EDIT ! \nDOC = NotebookOutput('cmip6', 'nerc', 'hadgem3-gc31-hh', 'toplevel')", "Document Authors\nSet document authors", "# Set as follows: DOC.set_author(\"name\", \"email\") \n# TODO - please enter value(s)", "Document Contributors\nSpecify document contributors", "# Set as follows: DOC.set_contributor(\"name\", \"email\") \n# TODO - please enter value(s)", "Document Publication\nSpecify document publication status", "# Set publication status: \n# 0=do not publish, 1=publish. \nDOC.set_publication_status(0)", "Document Table of Contents\n1. Key Properties\n2. Key Properties --&gt; Flux Correction\n3. Key Properties --&gt; Genealogy\n4. Key Properties --&gt; Software Properties\n5. Key Properties --&gt; Coupling\n6. Key Properties --&gt; Tuning Applied\n7. Key Properties --&gt; Conservation --&gt; Heat\n8. Key Properties --&gt; Conservation --&gt; Fresh Water\n9. Key Properties --&gt; Conservation --&gt; Salt\n10. Key Properties --&gt; Conservation --&gt; Momentum\n11. Radiative Forcings\n12. Radiative Forcings --&gt; Greenhouse Gases --&gt; CO2\n13. Radiative Forcings --&gt; Greenhouse Gases --&gt; CH4\n14. Radiative Forcings --&gt; Greenhouse Gases --&gt; N2O\n15. Radiative Forcings --&gt; Greenhouse Gases --&gt; Tropospheric O3\n16. Radiative Forcings --&gt; Greenhouse Gases --&gt; Stratospheric O3\n17. Radiative Forcings --&gt; Greenhouse Gases --&gt; CFC\n18. Radiative Forcings --&gt; Aerosols --&gt; SO4\n19. Radiative Forcings --&gt; Aerosols --&gt; Black Carbon\n20. Radiative Forcings --&gt; Aerosols --&gt; Organic Carbon\n21. Radiative Forcings --&gt; Aerosols --&gt; Nitrate\n22. Radiative Forcings --&gt; Aerosols --&gt; Cloud Albedo Effect\n23. Radiative Forcings --&gt; Aerosols --&gt; Cloud Lifetime Effect\n24. Radiative Forcings --&gt; Aerosols --&gt; Dust\n25. Radiative Forcings --&gt; Aerosols --&gt; Tropospheric Volcanic\n26. Radiative Forcings --&gt; Aerosols --&gt; Stratospheric Volcanic\n27. Radiative Forcings --&gt; Aerosols --&gt; Sea Salt\n28. Radiative Forcings --&gt; Other --&gt; Land Use\n29. Radiative Forcings --&gt; Other --&gt; Solar \n1. Key Properties\nKey properties of the model\n1.1. Model Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTop level overview of coupled model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.model_overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.2. Model Name\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nName of coupled model.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.model_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "2. Key Properties --&gt; Flux Correction\nFlux correction properties of the model\n2.1. 
Details\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe if/how flux corrections are applied in the model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.flux_correction.details') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "3. Key Properties --&gt; Genealogy\nGenealogy and history of the model\n3.1. Year Released\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nYear the model was released", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.genealogy.year_released') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "3.2. CMIP3 Parent\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCMIP3 parent if any", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.genealogy.CMIP3_parent') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "3.3. CMIP5 Parent\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCMIP5 parent if any", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.genealogy.CMIP5_parent') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "3.4. Previous Name\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nPreviously known as", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.genealogy.previous_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "4. Key Properties --&gt; Software Properties\nSoftware properties of model\n4.1. Repository\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nLocation of code for this component.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.software_properties.repository') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "4.2. Code Version\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCode version identifier.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.software_properties.code_version') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "4.3. Code Languages\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nCode language(s).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.software_properties.code_languages') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "4.4. Components Structure\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe how model realms are structured into independent software components (coupled via a coupler) and internal software components.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.software_properties.components_structure') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "4.5. 
Coupler\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nOverarching coupling framework for model.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.software_properties.coupler') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"OASIS\" \n# \"OASIS3-MCT\" \n# \"ESMF\" \n# \"NUOPC\" \n# \"Bespoke\" \n# \"Unknown\" \n# \"None\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "5. Key Properties --&gt; Coupling\n**\n5.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of coupling in the model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.coupling.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "5.2. Atmosphere Double Flux\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs the atmosphere passing a double flux to the ocean and sea ice (as opposed to a single one)?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_double_flux') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "5.3. Atmosphere Fluxes Calculation Grid\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nWhere are the air-sea fluxes calculated", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_fluxes_calculation_grid') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Atmosphere grid\" \n# \"Ocean grid\" \n# \"Specific coupler grid\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "5.4. Atmosphere Relative Winds\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nAre relative or absolute winds used to compute the flux? I.e. do ocean surface currents enter the wind stress calculation?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_relative_winds') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "6. Key Properties --&gt; Tuning Applied\nTuning methodology for model\n6.1. Description\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nGeneral overview description of tuning: explain and motivate the main targets and metrics/diagnostics retained. Document the relative weight given to climate performance metrics/diagnostics versus process oriented metrics/diagnostics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.tuning_applied.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6.2. Global Mean Metrics Used\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nList set of metrics/diagnostics of the global mean state used in tuning model", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.toplevel.key_properties.tuning_applied.global_mean_metrics_used') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6.3. Regional Metrics Used\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nList of regional metrics/diagnostics of mean state (e.g THC, AABW, regional means etc) used in tuning model/component", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.tuning_applied.regional_metrics_used') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6.4. Trend Metrics Used\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nList observed trend metrics/diagnostics used in tuning model/component (such as 20th century)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.tuning_applied.trend_metrics_used') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6.5. Energy Balance\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe how energy balance was obtained in the full system: in the various components independently or at the components coupling stage?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.tuning_applied.energy_balance') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6.6. Fresh Water Balance\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe how fresh_water balance was obtained in the full system: in the various components independently or at the components coupling stage?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.tuning_applied.fresh_water_balance') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "7. Key Properties --&gt; Conservation --&gt; Heat\nGlobal heat convervation properties of the model\n7.1. Global\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe if/how heat is conserved globally", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.heat.global') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "7.2. Atmos Ocean Interface\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe if/how heat is conserved at the atmosphere/ocean coupling interface", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_ocean_interface') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "7.3. Atmos Land Interface\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe if/how heat is conserved at the atmosphere/land coupling interface", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_land_interface') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "7.4. 
Atmos Sea-ice Interface\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe if/how heat is conserved at the atmosphere/sea-ice coupling interface", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_sea-ice_interface') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "7.5. Ocean Seaice Interface\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe if/how heat is conserved at the ocean/sea-ice coupling interface", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.heat.ocean_seaice_interface') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "7.6. Land Ocean Interface\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe if/how heat is conserved at the land/ocean coupling interface", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.heat.land_ocean_interface') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8. Key Properties --&gt; Conservation --&gt; Fresh Water\nGlobal fresh water convervation properties of the model\n8.1. Global\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe if/how fresh_water is conserved globally", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.global') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.2. Atmos Ocean Interface\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe if/how fresh_water is conserved at the atmosphere/ocean coupling interface", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_ocean_interface') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.3. Atmos Land Interface\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe if/how fresh water is conserved at the atmosphere/land coupling interface", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_land_interface') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.4. Atmos Sea-ice Interface\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe if/how fresh water is conserved at the atmosphere/sea-ice coupling interface", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_sea-ice_interface') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.5. Ocean Seaice Interface\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe if/how fresh water is conserved at the ocean/sea-ice coupling interface", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.ocean_seaice_interface') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.6. 
Runoff\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe how runoff is distributed and conserved", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.runoff') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.7. Iceberg Calving\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe if/how iceberg calving is modeled and conserved", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.iceberg_calving') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.8. Endoreic Basins\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe if/how endoreic basins (no ocean access) are treated", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.endoreic_basins') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.9. Snow Accumulation\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe how snow accumulation over land and over sea-ice is treated", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.snow_accumulation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "9. Key Properties --&gt; Conservation --&gt; Salt\nGlobal salt convervation properties of the model\n9.1. Ocean Seaice Interface\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe if/how salt is conserved at the ocean/sea-ice coupling interface", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.salt.ocean_seaice_interface') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "10. Key Properties --&gt; Conservation --&gt; Momentum\nGlobal momentum convervation properties of the model\n10.1. Details\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe if/how momentum is conserved in the model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.momentum.details') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "11. Radiative Forcings\nRadiative forcings of the model for historical and scenario (aka Table 12.1 IPCC AR5)\n11.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of radiative forcings (GHG and aerosols) implementation in model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "12. Radiative Forcings --&gt; Greenhouse Gases --&gt; CO2\nCarbon dioxide forcing\n12.1. Provision\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "12.2. Additional Information\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "13. Radiative Forcings --&gt; Greenhouse Gases --&gt; CH4\nMethane forcing\n13.1. Provision\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CH4.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "13.2. Additional Information\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CH4.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "14. Radiative Forcings --&gt; Greenhouse Gases --&gt; N2O\nNitrous oxide forcing\n14.1. Provision\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.N2O.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "14.2. Additional Information\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.N2O.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "15. Radiative Forcings --&gt; Greenhouse Gases --&gt; Tropospheric O3\nTroposheric ozone forcing\n15.1. Provision\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nHow this forcing agent is provided (e.g. 
via concentrations, emission precursors, prognostically derived, etc.)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.tropospheric_O3.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "15.2. Additional Information\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.tropospheric_O3.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "16. Radiative Forcings --&gt; Greenhouse Gases --&gt; Stratospheric O3\nStratospheric ozone forcing\n16.1. Provision\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.stratospheric_O3.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "16.2. Additional Information\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.stratospheric_O3.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "17. Radiative Forcings --&gt; Greenhouse Gases --&gt; CFC\nOzone-depleting and non-ozone-depleting fluorinated gases forcing\n17.1. Provision\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "17.2. Equivalence Concentration\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDetails of any equivalence concentrations used", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.equivalence_concentration') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"Option 1\" \n# \"Option 2\" \n# \"Option 3\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "17.3. 
Additional Information\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "18. Radiative Forcings --&gt; Aerosols --&gt; SO4\nSO4 aerosol forcing\n18.1. Provision\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.SO4.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "18.2. Additional Information\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.SO4.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "19. Radiative Forcings --&gt; Aerosols --&gt; Black Carbon\nBlack carbon aerosol forcing\n19.1. Provision\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.black_carbon.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "19.2. Additional Information\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.black_carbon.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "20. Radiative Forcings --&gt; Aerosols --&gt; Organic Carbon\nOrganic carbon aerosol forcing\n20.1. Provision\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.organic_carbon.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "20.2. Additional Information\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.organic_carbon.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "21. Radiative Forcings --&gt; Aerosols --&gt; Nitrate\nNitrate forcing\n21.1. Provision\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.nitrate.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "21.2. Additional Information\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.nitrate.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "22. Radiative Forcings --&gt; Aerosols --&gt; Cloud Albedo Effect\nCloud albedo effect forcing (RFaci)\n22.1. Provision\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "22.2. Aerosol Effect On Ice Clouds\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nRadiative effects of aerosols on ice clouds are represented?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.aerosol_effect_on_ice_clouds') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "22.3. Additional Information\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "23. Radiative Forcings --&gt; Aerosols --&gt; Cloud Lifetime Effect\nCloud lifetime effect forcing (ERFaci)\n23.1. Provision\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "23.2. Aerosol Effect On Ice Clouds\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nRadiative effects of aerosols on ice clouds are represented?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.aerosol_effect_on_ice_clouds') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "23.3. RFaci From Sulfate Only\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nRadiative forcing from aerosol cloud interactions from sulfate aerosol only?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.RFaci_from_sulfate_only') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "23.4. Additional Information\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "24. Radiative Forcings --&gt; Aerosols --&gt; Dust\nDust forcing\n24.1. Provision\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.dust.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "24.2. Additional Information\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.dust.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "25. Radiative Forcings --&gt; Aerosols --&gt; Tropospheric Volcanic\nTropospheric volcanic forcing\n25.1. Provision\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "25.2. Historical Explosive Volcanic Aerosol Implementation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nHow explosive volcanic aerosol is implemented in historical simulations", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.historical_explosive_volcanic_aerosol_implementation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Type A\" \n# \"Type B\" \n# \"Type C\" \n# \"Type D\" \n# \"Type E\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "25.3. Future Explosive Volcanic Aerosol Implementation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nHow explosive volcanic aerosol is implemented in future simulations", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.future_explosive_volcanic_aerosol_implementation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Type A\" \n# \"Type B\" \n# \"Type C\" \n# \"Type D\" \n# \"Type E\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "25.4. Additional Information\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "26. Radiative Forcings --&gt; Aerosols --&gt; Stratospheric Volcanic\nStratospheric volcanic forcing\n26.1. Provision\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "26.2. 
Historical Explosive Volcanic Aerosol Implementation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nHow explosive volcanic aerosol is implemented in historical simulations", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.historical_explosive_volcanic_aerosol_implementation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Type A\" \n# \"Type B\" \n# \"Type C\" \n# \"Type D\" \n# \"Type E\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "26.3. Future Explosive Volcanic Aerosol Implementation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nHow explosive volcanic aerosol is implemented in future simulations", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.future_explosive_volcanic_aerosol_implementation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Type A\" \n# \"Type B\" \n# \"Type C\" \n# \"Type D\" \n# \"Type E\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "26.4. Additional Information\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "27. Radiative Forcings --&gt; Aerosols --&gt; Sea Salt\nSea salt forcing\n27.1. Provision\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.sea_salt.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "27.2. Additional Information\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.sea_salt.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "28. Radiative Forcings --&gt; Other --&gt; Land Use\nLand use forcing\n28.1. Provision\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "28.2. Crop Change Only\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nLand use change represented via crop change only?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.crop_change_only') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "28.3. Additional Information\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "29. Radiative Forcings --&gt; Other --&gt; Solar\nSolar forcing\n29.1. Provision\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nHow solar forcing is provided", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.other.solar.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"irradiance\" \n# \"proton\" \n# \"electron\" \n# \"cosmic ray\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "29.2. Additional Information\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.other.solar.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "©2017 ES-DOC" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
myinxd/agn-ae
code-sdss/SDSS_Analysis-KS-Chi2-tests.ipynb
mit
[ "import os\nimport numpy as np\nimport pandas as pd\nimport utils_sdss as utils\n\nimport matplotlib\nmatplotlib.use('Agg')\nimport matplotlib.pyplot as plt\n# plt.style.use(\"ggplot\")\n%matplotlib inline\n\n# load data\n# load the unLRG sample list\nlistpath = \"./BH_SDSS_cross_checked.xlsx\"\ndata = pd.read_excel(listpath, \"Sheet2\")\n\nextinction_u = data[\"extinction_u\"]\nextinction_g = data[\"extinction_g\"]\nextinction_r = data[\"extinction_r\"]\nextinction_i = data[\"extinction_i\"]\nextinction_z = data[\"extinction_z\"]\n\ncmodelmag_u = np.nan_to_num(data[\"cmodelmag_u\"])\ncmodelmag_g = np.nan_to_num(data[\"cmodelmag_g\"])\ncmodelmag_r = np.nan_to_num(data[\"cmodelmag_r\"])\ncmodelmag_i = np.nan_to_num(data[\"cmodelmag_i\"])\ncmodelmag_z = np.nan_to_num(data[\"cmodelmag_z\"])\n\ncmodelmagerr_u = np.nan_to_num(data[\"cmodelmagerr_u\"])\ncmodelmagerr_g = np.nan_to_num(data[\"cmodelmagerr_g\"])\ncmodelmagerr_r = np.nan_to_num(data[\"cmodelmagerr_r\"])\ncmodelmagerr_i = np.nan_to_num(data[\"cmodelmagerr_i\"])\ncmodelmagerr_z = np.nan_to_num(data[\"cmodelmagerr_z\"])\n\n# exclude bad sample\nidx_u1 = np.where(cmodelmag_u != -9999)[0]\nidx_u2 = np.where(cmodelmag_u != 0.0)[0]\nidx_u3 = np.where(cmodelmag_u != 10000)[0]\nidx = np.intersect1d(idx_u1,idx_u2)\nidx = np.intersect1d(idx, idx_u3)\n\n# load reconstruct k_correction\nwith open(\"../../result-171102/sdss/reconf_unLRG.dat\", 'r') as fp:\n reconmags = fp.readlines()\n\n# calc absolute magnitudes and save into excels\nredshift = data[\"z\"]\nmag_abs = np.ones((len(redshift),))*10000\nfor j,i in enumerate(idx):\n z = redshift[i]\n dl = utils.calc_luminosity_distance(z) # luminosity distance [Mpc]\n mags = [cmodelmag_u[i],cmodelmag_g[i],cmodelmag_r[i],cmodelmag_r[i],cmodelmag_z[i]]\n exts = [extinction_u[i],extinction_g[i],extinction_r[i],extinction_r[i],extinction_z[i]]\n reconmag = reconmags[j].split(\" \")\n mag_abs[i] = utils.calc_absmag(mags[2]-exts[2],dl.value,float(reconmag[3]))\n\nmode = data[\"Type\"]\nidx1 = np.where(mode == 1)[0]\nidx2 = np.where(mode == 2)[0]\nidx3 = np.where(mode == 3)[0]\nidx4 = np.where(mode == 4)[0]\nidx5 = np.where(mode == 5)[0]\nidx6 = np.where(mode == 6)[0]\nidx2_same = np.intersect1d(idx,idx2)\nidx3_same = np.intersect1d(idx,idx3)\n\nBT = data[\"BT\"]\nidx_fr1 = np.where(BT == 1)[0]\nidx_fr2 = np.where(BT == 2)[0]\nidx_fr1 = np.intersect1d(idx, idx_fr1)\nidx_fr2 = np.intersect1d(idx, idx_fr2)\n\nflux = data[\"S_nvss\"]\nlumo = utils.flux_to_luminosity(redshift = redshift, flux = flux)", "Data", "lumo_fr1_typical = lumo[idx2_same] * 10**-22\nlumo_fr2_typical = lumo[idx3_same] * 10**-22\n\nmag_fr1_typical = mag_abs[idx2_same]\nmag_fr2_typical = mag_abs[idx3_same]\n\nlumo_fr1_like = lumo[idx_fr1] * 10**-22\nlumo_fr2_like = lumo[idx_fr2] * 10**-22\n\nmag_fr1_like = mag_abs[idx_fr1]\nmag_fr2_like = mag_abs[idx_fr2]\n\nmag_fr1 = np.hstack([mag_abs[idx_fr1], mag_abs[idx2_same]])\nmag_fr2 = np.hstack([mag_abs[idx_fr2], mag_abs[idx3_same]])\nlumo_fr1 = np.hstack([lumo[idx_fr1], lumo[idx2_same]]) * 10 ** -22\nlumo_fr2 = np.hstack([lumo[idx_fr2], lumo[idx3_same]]) * 10 ** -22", "Correlation analysis\n\nPearson: http://blog.csdn.net/hjh00/article/details/48230399\np-value: https://stackoverflow.com/questions/22306341/python-sklearn-how-to-calculate-p-values\nKolmogorov-Smirnov test: https://stackoverflow.com/questions/10884668/two-sample-kolmogorov-smirnov-test-in-python-scipy\nScipy.stats.kstest: https://docs.scipy.org/doc/scipy-0.7.x/reference/generated/scipy.stats.kstest.html", "import 
scipy.stats.stats as stats \nfrom sklearn.feature_selection import chi2\n\n# ks test\n# https://docs.scipy.org/doc/scipy-0.7.x/reference/generated/scipy.stats.ks_2samp.html#scipy.stats.ks_2sam\nlumo_ks_D_t,lumo_ks_p_t = stats.ks_2samp(lumo_fr1_typical,lumo_fr2_typical)\nprint(\"KS statistic of lumo: typical %.5f\" % lumo_ks_D_t)\nprint(\"P-value of lumo: typical %.5e\" % lumo_ks_p_t)\n\nmag_ks_D_t,mag_ks_p_t = stats.ks_2samp(mag_fr1_typical,mag_fr2_typical)\nprint(\"KS statistic of Mr: typical %.5f\" % mag_ks_D_t)\nprint(\"P-value of Mr: typical %.5e\" % mag_ks_p_t)\n\n# FR like\nlumo_ks_D_l,lumo_ks_p_l = stats.ks_2samp(lumo_fr1_like,lumo_fr2_like)\nprint(\"KS statistic of lumo: like %.5f\" % lumo_ks_D_l)\nprint(\"P-value of lumo: like %.5e\" % lumo_ks_p_l)\n\nmag_ks_D_l,mag_ks_p_l = stats.ks_2samp(mag_fr1_like,mag_fr2_like)\nprint(\"KS statistic of Mr: like %.5f\" % mag_ks_D_l)\nprint(\"P-value of Mr: like %.5e\" % mag_ks_p_l)\n\n# FR\nlumo_ks_D,lumo_ks_p = stats.ks_2samp(lumo_fr1,lumo_fr2)\nprint(\"KS statistic of lumo: %.5f\" % lumo_ks_D)\nprint(\"P-value of lumo: %.5e\" % lumo_ks_p)\n\nmag_ks_D,mag_ks_p = stats.ks_2samp(mag_fr1,mag_fr2)\nprint(\"KS statistic of Mr: %.5f\" % mag_ks_D)\nprint(\"P-value of Mr: %.5e\" % mag_ks_p)", "The p-values are very small and the KS statistics are relatively large, so the FRI/FRII samples show a certain degree of separability; that is, the null hypothesis that the FRI/FRII radio luminosities and optical magnitudes are drawn from the same distribution is rejected.\nHowever, the KS statistic D for the magnitudes is comparatively small, which suggests the optical data separate the two classes less well than the radio luminosity does.\nChi-square test", "x_lumo = np.hstack((lumo_fr1,lumo_fr2))\nx_lumo.shape\n\nx_lumo = np.log10(np.hstack((lumo_fr1,lumo_fr2)))\nx_mag = np.hstack((mag_fr1,mag_fr2))\nx_lumo_norm = (x_lumo - x_lumo.min()) / (x_lumo.max() - x_lumo.min())\nx_mag_norm = (x_mag - x_mag.min()) / (x_mag.max() - x_mag.min())\n\nx = np.vstack([x_lumo_norm,x_mag_norm])\nx = x.transpose()\ny = np.zeros(len(mag_abs))\ny[idx2_same] = 1\ny[idx_fr1] = 1\ny[idx3_same] = 2\ny[idx_fr2] = 2\ny = y[np.where(y > 0)]\n\nscores, pvalues = chi2(x, y)\n\npvalues\n\nfrom scipy.stats import chisquare\n\nchisquare(x_lumo_norm, y)\n\nnp.random.seed(12222222)\nx = np.random.normal(0,1,size=(20000,))\ny = np.random.normal(0,1,size=(20000,))\nstats.ks_2samp(x,y)" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
AndreiBarsan/dm-notes
bonus/tutorial-bandits.ipynb
unlicense
[ "%matplotlib inline\nfrom __future__ import division, print_function\nfrom matplotlib import pyplot as plt\nimport numpy as np\nfrom ipywidgets import interact\n\ndef play(bandits, n_repetitions, mu, metric='average', noise=.3):\n mu_star = np.max(mu)\n rewards = {name: np.zeros(n_repetitions) for name in bandits}\n for i in range(n_repetitions):\n for name, bandit in bandits.items():\n arm = bandit.play()\n rewards[name][i] = mu[arm]\n bandit.feedback(arm, np.clip(mu[arm] + noise * np.random.randn(), 0, 1))\n \n plt.hold(True)\n for name, reward in rewards.items():\n if metric == 'regret':\n data = mu_star - reward\n elif metric == 'cumulative': # Also known as total regret.\n data = np.cumsum(mu_star - reward)\n else:\n assert metric == 'average'\n data = np.cumsum(mu_star - reward) / np.linspace(1, n_repetitions, n_repetitions)\n plt.plot(data, '--', label=name)\n \n plt.xlabel(\"Rounds\")\n plt.ylabel(\"Metric (%s)\" % metric)\n plt.legend()\n plt.show()", "$\\epsilon$-greedy\nThis strategy involves picking a (small) $\\epsilon$ and then at any stage after every arm has been played at least once, explore with a probability of $\\epsilon$, and exploit otherwise.\nFor a suitable choice of $\\epsilon_t$, $R_T = O(k \\log T)$, which means that $\\frac{R_T}{T} = O\\left( \\frac{\\log T}{T} \\right)$, which goes to 0 as T goes to $\\infty$.", "class EpsGreedy:\n def __init__(self, n_arms, eps=0):\n self.eps = eps\n self.n_arms = n_arms\n self.payoffs = np.zeros(n_arms)\n self.n_plays = np.zeros(n_arms)\n \n def play(self):\n # Note that the theory tells us to pick epsilon as O(1/t), not constant (which we use here).\n idx = np.argmin(self.n_plays)\n if self.n_plays[idx] == 0:\n return idx\n if np.random.rand() <= self.eps:\n return np.random.randint(self.n_arms)\n else:\n return np.argmax(self.payoffs / self.n_plays)\n\n def feedback(self, arm, reward):\n self.payoffs[arm] += reward\n self.n_plays[arm] += 1", "UCB1\nThis algorithms keeps track of the upper confidence bound for every arm, and always picks the arm with the best upper confidence bound.\nAt any point in time $t$, we know each arm's draw count $n_i^{(t)}$, as well as its average payoff $\\hat{\\mu}_i^{(t)}$. Based on this, we can compute every arm's upper confidence bound (or UCB):\n\\begin{equation}\n \\operatorname{UCB}(i) = \\hat{\\mu}_i + \\sqrt{\\frac{2\\ln t}{n_i}}\n\\end{equation}\nAlso no-regret, just like $\\epsilon$-greedy. 
The math is just a bit fluffier (see slide dm-11:20).", "class UCB:\n def __init__(self, n_arms, tau):\n self.n_arms = n_arms\n self.means = np.zeros(n_arms)\n # Note that the UCB1 algorithm has tau=1.\n self.n_plays = np.zeros(n_arms)\n self.tau = tau\n self.t = 0\n \n def play(self, plot=True):\n # If plot is true, it will plot the means + bounds every 100 iterations.\n self.t += 1\n idx = np.argmin(self.n_plays)\n if self.n_plays[idx] == 0:\n return idx\n \n ub = self.tau * np.sqrt(2 * np.log(self.t) / self.n_plays)\n ucb = self.means + ub\n \n if plot and self.t % 100 == 0:\n plt.errorbar(list(range(self.n_arms)), self.means, yerr=ub)\n plt.show()\n print('chose arm', np.argmax(ucb))\n\n return np.argmax(ucb)\n\n def feedback(self, arm, reward):\n self.n_plays[arm] += 1\n self.means[arm] += 1 / (self.n_plays[arm]) * (reward - self.means[arm])\n\n@interact(n_arms=(10, 100, 1), n_rounds=(100, 1000, 10), eps=(0, 1, .01) , tau=(0, 1, .01))\ndef run(n_arms, n_rounds, eps, tau):\n np.random.seed(123)\n # Initialize the arm payoffs.\n mu = np.random.randn(n_arms)\n # Some other strategies for sampling.\n # mu = np.random.standard_cauchy(n_arms)\n # mu = np.random.gamma(shape=.1, size=(n_arms, 1))\n mu = np.abs(mu)\n mu /= np.max(mu)\n plt.bar(list(range(n_arms)), mu)\n plt.xlabel('arms')\n plt.ylabel('rewards')\n plt.show()\n bandits = {\n 'eps-{0}'.format(eps) : EpsGreedy(n_arms, eps=eps),\n 'ucb-{0}'.format(tau) : UCB(n_arms, tau=tau)\n }\n play(bandits, n_rounds, mu)\n # Hint: You can also plot the upper bound from UCB1 and see how tight it is." ]
[ "code", "markdown", "code", "markdown", "code" ]
leosartaj/scipy-2016-tutorial
tutorial_exercises/04-Matrices.ipynb
bsd-3-clause
[ "from sympy import *\ninit_printing(use_latex='mathjax')\nx, y, z = symbols('x,y,z')\nr, theta = symbols('r,theta', positive=True)", "Matrices\nThe SymPy Matrix object helps us with small problems in linear algebra.", "rot = Matrix([[r*cos(theta), -r*sin(theta)],\n [r*sin(theta), r*cos(theta)]])\nrot", "Standard methods", "rot.det()\n\nrot.inv()\n\nrot.singular_values()", "Exercise\nFind the inverse of the following Matrix:\n$$ \\left[\\begin{matrix}1 & x\\y & 1\\end{matrix}\\right] $$", "# Create a matrix and use the `inv` method to find the inverse\n\n", "Operators\nThe standard SymPy operators work on matrices.", "rot * 2\n\nrot * rot\n\nv = Matrix([[x], [y]])\nv\n\nrot * v", "Exercise\nIn the last exercise you found the inverse of the following matrix", "M = Matrix([[1, x], [y, 1]])\nM\n\nM.inv()", "Now verify that this is the true inverse by multiplying the matrix times its inverse. Do you get the identity matrix back?", "# Multiply `M` by its inverse. Do you get back the identity matrix?\n\n", "Exercise\nWhat are the eigenvectors and eigenvalues of M?", "# Find the methods to compute eigenvectors and eigenvalues. Use these methods on `M`\n\n", "NumPy-like Item access", "rot[0, 0]\n\nrot[:, 0]\n\nrot[1, :]", "Mutation\nWe can change elements in the matrix.", "rot[0, 0] += 1\nrot\n\nsimplify(rot.det())\n\nrot.singular_values()", "Exercise\nPlay around with your matrix M, manipulating elements in a NumPy like way. Then try the various methods that we've talked about (or others). See what sort of answers you get.", "# Play with matrices\n\n" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
GoogleCloudPlatform/ml-pipeline-generator-python
examples/getting_started_notebook.ipynb
apache-2.0
[ "# Copyright 2020 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.", "End to End Workflow with ML Pipeline Generator\n<table align=\"left\">\n <td>\n <a href=\"https://colab.sandbox.google.com/github/GoogleCloudPlatform/ml-pipeline-generator-python/blob/master/examples/getting_started_notebook.ipynb\">\n <img src=\"https://cloud.google.com/ml-engine/images/colab-logo-32px.png\" alt=\"Colab logo\"> Run in Colab\n </a>\n </td>\n <td>\n <a href=\"https://github.com/GoogleCloudPlatform/ml-pipeline-generator-python/blob/master/examples/getting_started_notebook.ipynb\">\n <img src=\"https://cloud.google.com/ml-engine/images/github-logo-32px.png\" alt=\"GitHub logo\">\n View on GitHub\n </a>\n </td>\n</table>\n\nOverview\nML Pipeline Generator simplifies model building, training and deployment by generating the required training and deployment modules for your model. Using this tool, users with locally running scripts and notebooks can get started with AI Platform and Kubeflow Pipelines in a few steps, and will have the boilerplate code needed to customize their deployments and pipelines further.\n[Insert Pic]\nThis demo shows you how to train and deploy Machine Learning models on a sample dataset. The demo is divided into two parts:\n\nPreparing an SVM classifier for training on Cloud AI platform\nOrchestrating the training of a Tensorflow model on Kubeflow Pipelines\n\nDataset\nThis tutorial uses the United States Census Income Dataset provided by the UC Irvine Machine Learning Repository containing information about people from a 1994 Census database, including age, education, marital status, occupation, and whether they make more than $50,000 a year. The dataset consists of over 30k rows, where each row corresponds to a different person. For a given row, there are 14 features that the model conditions on to predict the income of the person. A few of the features are named above, and the exhaustive list can be found both in the dataset link above.\nSet up your local development environment\nIf you are using Colab or AI Platform Notebooks, your environment already meets\nall the requirements to run this notebook. If you are using AI Platform Notebook, make sure the machine configuration type is 1 vCPU, 3.75 GB RAM or above. You can skip this step.\nOtherwise, make sure your environment meets this notebook's requirements.\nYou need the following:\n\nThe Google Cloud SDK\nGit\nPython 3\nvirtualenv\nJupyter notebook running in a virtual environment with Python 3\n\nThe Google Cloud guide to Setting up a Python development\nenvironment and the Jupyter\ninstallation guide provide detailed instructions\nfor meeting these requirements. 
The following steps provide a condensed set of\ninstructions:\n\n\nInstall and initialize the Cloud SDK.\n\n\nInstall Python 3.\n\n\nInstall\n virtualenv\n and create a virtual environment that uses Python 3.\n\n\nActivate that environment and run pip install jupyter in a shell to install\n Jupyter.\n\n\nRun jupyter notebook in a shell to launch Jupyter.\n\n\nOpen this notebook in the Jupyter Notebook Dashboard.\n\n\nSet up your GCP project\nIf you do not have a GCP project then the following steps are required, regardless of your notebook environment.\n\n\nSelect or create a GCP project.. When you first create an account, you get a $300 free credit towards your compute/storage costs.\n\n\nMake sure that billing is enabled for your project.\n\n\nCreate a GCP bucket so that we can store files.\n\n\nPIP install packages and dependencies\nInstall addional dependencies not installed in Notebook environment\nNote: Jupyter runs lines prefixed with ! as shell commands, and it interpolates Python variables prefixed with $ into these commands.", "# Use the latest major GA version of the framework.\n! pip install --upgrade ml-pipeline-gen PyYAML", "Note: Try installing using sudo, if the above command throw any permission errors.\nRestart the kernel to allow the package to be imported for Jupyter Notebooks.\nAuthenticate your GCP account\nIf you are using AI Platform Notebooks, your environment is already\nauthenticated. Skip this step.\nOnly if you are on a local Juypter Notebook or Colab Environment, follow these steps:\n\n\nCreate a New Service Account.\n\n\nAdd the following roles: \n Compute Engine > Compute Admin, ML Engine > ML Engine Admin and Storage > Storage Object Admin.\n\n\nDownload a JSON file that contains your key and it will be stored in your\nlocal environment.", "# If you are on Colab, run this cell and upload your service account's\n# json key.\nimport os\nimport sys\n\nif 'google.colab' in sys.modules: \n from google.colab import files\n keyfile_upload = files.upload()\n keyfile = list(keyfile_upload.keys())[0]\n keyfile_path = os.path.abspath(keyfile)\n %env GOOGLE_APPLICATION_CREDENTIALS $keyfile_path\n ! gcloud auth activate-service-account --key-file $keyfile_path\n\n# If you are running this notebook locally, replace the string below \n# with the path to your service account key and run this cell \n# to authenticate your GCP account.\n\n%env GOOGLE_APPLICATION_CREDENTIALS /path/to/service/account\n! gcloud auth activate-service-account --key-file '/path/to/service/account'", "Before You Begin\nThe tool requires following Google Cloud APIs to be enabled:\n* Google Cloud Storage\n* Cloud AI Platform\n* Google Kubernetes Engine\nAdd your Project ID below, you can change the region below if you would like, but it is not a requirement.", "PROJECT_ID = \"[PROJECT-ID]\" #@param {type:\"string\"}\nCOMPUTE_REGION = \"us-central1\" # Currently only supported region.", "Also add your bucket name:", "BUCKET_NAME = \"[BUCKET-ID]\" #@param {type:\"string\"}\n\n!gcloud config set project {PROJECT_ID}", "The tool requires following Google Cloud APIs to be enabled:", "!gcloud services enable ml.googleapis.com \\\ncompute.googleapis.com \\\nstorage-component.googleapis.com", "Create a model locally\nIn this section we will create a model locally, which many users have. 
This section is done to illustrate the on-prem method of creating models and in the next section we will show how to train them on GCP so that you can leverage the benefits of the cloud like easy distributed training, paralllel hyperparameter tuning and fast, up-to-date accelerators.\nThe next block of code highlights how we will preprocess the census data. It is out of scope for this colab to dive into how the code works. All that is important is that the function load_data returns 4 values: the training features, the training predictor, the evaluation features and the evaluation predictor in that order (this function also uploads data into GCS). Run the hidden cell below.", "#@title\n# python3\n# Copyright 2019 Google Inc. All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"Train a simple TF classifier for MNIST dataset.\n\nThis example comes from the cloudml-samples keras demo.\ngithub.com/GoogleCloudPlatform/cloudml-samples/blob/master/census/tf-keras\n\"\"\"\nfrom __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\n\nimport os\nfrom six.moves import urllib\nimport tempfile\n\nimport numpy as np\nimport pandas as pd\nimport tensorflow.compat.v1 as tf\n\n\nDATA_DIR = os.path.join(tempfile.gettempdir(), \"census_data\")\nDATA_URL = (\"https://storage.googleapis.com/cloud-samples-data/ai-platform\"\n + \"/census/data/\")\nTRAINING_FILE = \"adult.data.csv\"\nEVAL_FILE = \"adult.test.csv\"\nTRAINING_URL = os.path.join(DATA_URL, TRAINING_FILE)\nEVAL_URL = os.path.join(DATA_URL, EVAL_FILE)\n\n_CSV_COLUMNS = [\n \"age\", \"workclass\", \"fnlwgt\", \"education\", \"education_num\",\n \"marital_status\", \"occupation\", \"relationship\", \"race\", \"gender\",\n \"capital_gain\", \"capital_loss\", \"hours_per_week\", \"native_country\",\n \"income_bracket\",\n]\n_LABEL_COLUMN = \"income_bracket\"\nUNUSED_COLUMNS = [\"fnlwgt\", \"education\", \"gender\"]\n\n_CATEGORICAL_TYPES = {\n \"workclass\": pd.api.types.CategoricalDtype(categories=[\n \"Federal-gov\", \"Local-gov\", \"Never-worked\", \"Private\", \"Self-emp-inc\",\n \"Self-emp-not-inc\", \"State-gov\", \"Without-pay\"\n ]),\n \"marital_status\": pd.api.types.CategoricalDtype(categories=[\n \"Divorced\", \"Married-AF-spouse\", \"Married-civ-spouse\",\n \"Married-spouse-absent\", \"Never-married\", \"Separated\", \"Widowed\"\n ]),\n \"occupation\": pd.api.types.CategoricalDtype([\n \"Adm-clerical\", \"Armed-Forces\", \"Craft-repair\", \"Exec-managerial\",\n \"Farming-fishing\", \"Handlers-cleaners\", \"Machine-op-inspct\",\n \"Other-service\", \"Priv-house-serv\", \"Prof-specialty\", \"Protective-serv\",\n \"Sales\", \"Tech-support\", \"Transport-moving\"\n ]),\n \"relationship\": pd.api.types.CategoricalDtype(categories=[\n \"Husband\", \"Not-in-family\", \"Other-relative\", \"Own-child\", \"Unmarried\",\n \"Wife\"\n ]),\n \"race\": pd.api.types.CategoricalDtype(categories=[\n \"Amer-Indian-Eskimo\", \"Asian-Pac-Islander\", \"Black\", \"Other\", \"White\"\n ]),\n 
\"native_country\": pd.api.types.CategoricalDtype(categories=[\n \"Cambodia\", \"Canada\", \"China\", \"Columbia\", \"Cuba\", \"Dominican-Republic\",\n \"Ecuador\", \"El-Salvador\", \"England\", \"France\", \"Germany\", \"Greece\",\n \"Guatemala\", \"Haiti\", \"Holand-Netherlands\", \"Honduras\", \"Hong\",\n \"Hungary\", \"India\", \"Iran\", \"Ireland\", \"Italy\", \"Jamaica\", \"Japan\",\n \"Laos\", \"Mexico\", \"Nicaragua\", \"Outlying-US(Guam-USVI-etc)\", \"Peru\",\n \"Philippines\", \"Poland\", \"Portugal\", \"Puerto-Rico\", \"Scotland\", \"South\",\n \"Taiwan\", \"Thailand\", \"Trinadad&Tobago\", \"United-States\", \"Vietnam\",\n \"Yugoslavia\"\n ]),\n \"income_bracket\": pd.api.types.CategoricalDtype(categories=[\n \"<=50K\", \">50K\"\n ])\n}\n\n\ndef _download_and_clean_file(filename, url):\n \"\"\"Downloads data from url, and makes changes to match the CSV format.\n\n The CSVs may use spaces after the comma delimters (non-standard) or include\n rows which do not represent well-formed examples. This function strips out\n some of these problems.\n\n Args:\n filename: filename to save url to\n url: URL of resource to download\n \"\"\"\n temp_file, _ = urllib.request.urlretrieve(url)\n with tf.io.gfile.GFile(temp_file, \"r\") as temp_file_object:\n with tf.io.gfile.GFile(filename, \"w\") as file_object:\n for line in temp_file_object:\n line = line.strip()\n line = line.replace(\", \", \",\")\n if not line or \",\" not in line:\n continue\n if line[-1] == \".\":\n line = line[:-1]\n line += \"\\n\"\n file_object.write(line)\n tf.io.gfile.remove(temp_file)\n\n\ndef download(data_dir):\n \"\"\"Downloads census data if it is not already present.\n\n Args:\n data_dir: directory where we will access/save the census data\n\n Returns:\n foo\n \"\"\"\n tf.io.gfile.makedirs(data_dir)\n\n training_file_path = os.path.join(data_dir, TRAINING_FILE)\n if not tf.io.gfile.exists(training_file_path):\n _download_and_clean_file(training_file_path, TRAINING_URL)\n\n eval_file_path = os.path.join(data_dir, EVAL_FILE)\n if not tf.io.gfile.exists(eval_file_path):\n _download_and_clean_file(eval_file_path, EVAL_URL)\n\n return training_file_path, eval_file_path\n\n\ndef upload(train_df, eval_df, train_path, eval_path):\n train_df.to_csv(os.path.join(os.path.dirname(train_path), TRAINING_FILE),\n index=False, header=False)\n eval_df.to_csv(os.path.join(os.path.dirname(eval_path), EVAL_FILE),\n index=False, header=False)\n\n\ndef preprocess(dataframe):\n \"\"\"Converts categorical features to numeric. 
Removes unused columns.\n\n Args:\n dataframe: Pandas dataframe with raw data\n\n Returns:\n Dataframe with preprocessed data\n \"\"\"\n dataframe = dataframe.drop(columns=UNUSED_COLUMNS)\n\n # Convert integer valued (numeric) columns to floating point\n numeric_columns = dataframe.select_dtypes([\"int64\"]).columns\n dataframe[numeric_columns] = dataframe[numeric_columns].astype(\"float32\")\n\n # Convert categorical columns to numeric\n cat_columns = dataframe.select_dtypes([\"object\"]).columns\n dataframe[cat_columns] = dataframe[cat_columns].apply(\n lambda x: x.astype(_CATEGORICAL_TYPES[x.name]))\n dataframe[cat_columns] = dataframe[cat_columns].apply(\n lambda x: x.cat.codes)\n return dataframe\n\n\ndef standardize(dataframe):\n \"\"\"Scales numerical columns using their means and standard deviation.\n\n Args:\n dataframe: Pandas dataframe\n\n Returns:\n Input dataframe with the numerical columns scaled to z-scores\n \"\"\"\n dtypes = list(zip(dataframe.dtypes.index, map(str, dataframe.dtypes)))\n for column, dtype in dtypes:\n if dtype == \"float32\":\n dataframe[column] -= dataframe[column].mean()\n dataframe[column] /= dataframe[column].std()\n return dataframe\n\n\ndef load_data(train_path=\"\", eval_path=\"\"):\n \"\"\"Loads data into preprocessed (train_x, train_y, eval_y, eval_y) dataframes.\n\n Args:\n train_path: Local or GCS path to uploaded train data to.\n eval_path: Local or GCS path to uploaded eval data to.\n\n Returns:\n A tuple (train_x, train_y, eval_x, eval_y), where train_x and eval_x are\n Pandas dataframes with features for training and train_y and eval_y are\n numpy arrays with the corresponding labels.\n \"\"\"\n # Download Census dataset: Training and eval csv files.\n training_file_path, eval_file_path = download(DATA_DIR)\n\n train_df = pd.read_csv(\n training_file_path, names=_CSV_COLUMNS, na_values=\"?\")\n eval_df = pd.read_csv(eval_file_path, names=_CSV_COLUMNS, na_values=\"?\")\n\n train_df = preprocess(train_df)\n eval_df = preprocess(eval_df)\n\n # Split train and eval data with labels. The pop method copies and removes\n # the label column from the dataframe.\n train_x, train_y = train_df, train_df.pop(_LABEL_COLUMN)\n eval_x, eval_y = eval_df, eval_df.pop(_LABEL_COLUMN)\n\n # Join train_x and eval_x to normalize on overall means and standard\n # deviations. Then separate them again.\n all_x = pd.concat([train_x, eval_x], keys=[\"train\", \"eval\"])\n all_x = standardize(all_x)\n train_x, eval_x = all_x.xs(\"train\"), all_x.xs(\"eval\")\n\n # Rejoin features and labels and upload to GCS.\n if train_path and eval_path:\n train_df = train_x.copy()\n train_df[_LABEL_COLUMN] = train_y\n eval_df = eval_x.copy()\n eval_df[_LABEL_COLUMN] = eval_y\n upload(train_df, eval_df, train_path, eval_path)\n\n # Reshape label columns for use with tf.data.Dataset\n train_y = np.asarray(train_y).astype(\"float32\").reshape((-1, 1))\n eval_y = np.asarray(eval_y).astype(\"float32\").reshape((-1, 1))\n\n return train_x, train_y, eval_x, eval_y\n\n", "Now we train the a sklearn SVM model on this data.", "from sklearn import svm\n\ntrain_x, train_y, eval_x, eval_y = load_data()\ntrain_y, eval_y = [np.ravel(x) for x in [train_y, eval_y]]\nclassifier = svm.SVC(C=1)\nclassifier.fit(train_x, train_y)\nscore = classifier.score(eval_x, eval_y)\nprint('Accuracy is {}'.format(score))", "Usually, the pipelines have more complexities to it, such as hyperparameter tuning. 
However, at the end we have a single model which is the best and which we want to serve in production.\nPreparing an SVM classifier for training on Cloud AI platform\nWe now have a model which we think is good, but we want to add this model onto GCP while at the same time adding additional features such as training and prediction so future runs will be simple.\nWe can leverage the examples that are in thie ML Pipeline Generator as they give good examples and templates to follow. So first we clone the github repo.", "!git clone https://github.com/GoogleCloudPlatform/ml-pipeline-generator-python.git", "Then we copy the sklearn example to the current directory and go into this folder.", "!cp -r ml-pipeline-generator-python/examples/sklearn sklearn-demo\n\n%cd sklearn-demo", "We now modify the config.yaml.example file with out project id, bucket id and model name. Note the training and evaluation data files should be stored in your bucket already, unless you decided to handle that upload in your preprocessing function (like in this lab).", "%%writefile config.yaml\n# Copyright 2020 Google Inc. All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n#\n# Config file for ML Pipeline Generator.\n\nproject_id: [PROJECT ID]\nbucket_id: [BUCKET ID]\nregion: \"us-central1\"\nscale_tier: \"STANDARD_1\"\nruntime_version: \"1.15\"\npython_version: \"3.7\"\npackage_name: \"ml_pipeline_gen\"\nmachine_type_pred: \"mls1-c4-m2\"\n\ndata:\n schema:\n - \"age\"\n - \"workclass\"\n - \"education_num\"\n - \"marital_status\"\n - \"occupation\"\n - \"relationship\"\n - \"race\"\n - \"capital_gain\"\n - \"capital_loss\"\n - \"hours_per_week\"\n - \"native_country\"\n - \"income_bracket\"\n train: \"gs://[BUCKET ID]/[MODEL NAME]/data/adult.data.csv\"\n evaluation: \"gs://[BUCKET ID]/[MODEL NAME]/data/adult.test.csv\"\n prediction:\n input_data_paths:\n - \"gs://[BUCKET ID]/[MODEL NAME]/inputs/*\"\n input_format: \"JSON\"\n output_format: \"JSON\"\n\nmodel:\n # Name must start with a letter and only contain letters, numbers, and\n # underscores.\n name: [MODEL NAME]\n path: \"model.sklearn_model\"\n target: \"income_bracket\"\n\nmodel_params:\n input_args:\n C:\n type: \"float\"\n help: \"Regularization parameter, must be positive.\"\n default: 1.0\n # Relative path.\n hyperparam_config: \"hptuning_config.yaml\"\n", "We now copy our previous preoprocessing code into the file concensus_preprocess.py. Run the hidden cell below.", "#@title\n%%writefile model/census_preprocess.py\n# python3\n# Copyright 2019 Google Inc. 
All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"Train a simple TF classifier for MNIST dataset.\n\nThis example comes from the cloudml-samples keras demo.\ngithub.com/GoogleCloudPlatform/cloudml-samples/blob/master/census/tf-keras\n\"\"\"\nfrom __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\n\nimport os\nfrom six.moves import urllib\nimport tempfile\n\nimport numpy as np\nimport pandas as pd\nimport tensorflow.compat.v1 as tf\n\n\nDATA_DIR = os.path.join(tempfile.gettempdir(), \"census_data\")\nDATA_URL = (\"https://storage.googleapis.com/cloud-samples-data/ai-platform\"\n + \"/census/data/\")\nTRAINING_FILE = \"adult.data.csv\"\nEVAL_FILE = \"adult.test.csv\"\nTRAINING_URL = os.path.join(DATA_URL, TRAINING_FILE)\nEVAL_URL = os.path.join(DATA_URL, EVAL_FILE)\n\n_CSV_COLUMNS = [\n \"age\", \"workclass\", \"fnlwgt\", \"education\", \"education_num\",\n \"marital_status\", \"occupation\", \"relationship\", \"race\", \"gender\",\n \"capital_gain\", \"capital_loss\", \"hours_per_week\", \"native_country\",\n \"income_bracket\",\n]\n_LABEL_COLUMN = \"income_bracket\"\nUNUSED_COLUMNS = [\"fnlwgt\", \"education\", \"gender\"]\n\n_CATEGORICAL_TYPES = {\n \"workclass\": pd.api.types.CategoricalDtype(categories=[\n \"Federal-gov\", \"Local-gov\", \"Never-worked\", \"Private\", \"Self-emp-inc\",\n \"Self-emp-not-inc\", \"State-gov\", \"Without-pay\"\n ]),\n \"marital_status\": pd.api.types.CategoricalDtype(categories=[\n \"Divorced\", \"Married-AF-spouse\", \"Married-civ-spouse\",\n \"Married-spouse-absent\", \"Never-married\", \"Separated\", \"Widowed\"\n ]),\n \"occupation\": pd.api.types.CategoricalDtype([\n \"Adm-clerical\", \"Armed-Forces\", \"Craft-repair\", \"Exec-managerial\",\n \"Farming-fishing\", \"Handlers-cleaners\", \"Machine-op-inspct\",\n \"Other-service\", \"Priv-house-serv\", \"Prof-specialty\", \"Protective-serv\",\n \"Sales\", \"Tech-support\", \"Transport-moving\"\n ]),\n \"relationship\": pd.api.types.CategoricalDtype(categories=[\n \"Husband\", \"Not-in-family\", \"Other-relative\", \"Own-child\", \"Unmarried\",\n \"Wife\"\n ]),\n \"race\": pd.api.types.CategoricalDtype(categories=[\n \"Amer-Indian-Eskimo\", \"Asian-Pac-Islander\", \"Black\", \"Other\", \"White\"\n ]),\n \"native_country\": pd.api.types.CategoricalDtype(categories=[\n \"Cambodia\", \"Canada\", \"China\", \"Columbia\", \"Cuba\", \"Dominican-Republic\",\n \"Ecuador\", \"El-Salvador\", \"England\", \"France\", \"Germany\", \"Greece\",\n \"Guatemala\", \"Haiti\", \"Holand-Netherlands\", \"Honduras\", \"Hong\",\n \"Hungary\", \"India\", \"Iran\", \"Ireland\", \"Italy\", \"Jamaica\", \"Japan\",\n \"Laos\", \"Mexico\", \"Nicaragua\", \"Outlying-US(Guam-USVI-etc)\", \"Peru\",\n \"Philippines\", \"Poland\", \"Portugal\", \"Puerto-Rico\", \"Scotland\", \"South\",\n \"Taiwan\", \"Thailand\", \"Trinadad&Tobago\", \"United-States\", \"Vietnam\",\n \"Yugoslavia\"\n ]),\n \"income_bracket\": pd.api.types.CategoricalDtype(categories=[\n 
\"<=50K\", \">50K\"\n ])\n}\n\n\ndef _download_and_clean_file(filename, url):\n \"\"\"Downloads data from url, and makes changes to match the CSV format.\n\n The CSVs may use spaces after the comma delimters (non-standard) or include\n rows which do not represent well-formed examples. This function strips out\n some of these problems.\n\n Args:\n filename: filename to save url to\n url: URL of resource to download\n \"\"\"\n temp_file, _ = urllib.request.urlretrieve(url)\n with tf.io.gfile.GFile(temp_file, \"r\") as temp_file_object:\n with tf.io.gfile.GFile(filename, \"w\") as file_object:\n for line in temp_file_object:\n line = line.strip()\n line = line.replace(\", \", \",\")\n if not line or \",\" not in line:\n continue\n if line[-1] == \".\":\n line = line[:-1]\n line += \"\\n\"\n file_object.write(line)\n tf.io.gfile.remove(temp_file)\n\n\ndef download(data_dir):\n \"\"\"Downloads census data if it is not already present.\n\n Args:\n data_dir: directory where we will access/save the census data\n\n Returns:\n foo\n \"\"\"\n tf.io.gfile.makedirs(data_dir)\n\n training_file_path = os.path.join(data_dir, TRAINING_FILE)\n if not tf.io.gfile.exists(training_file_path):\n _download_and_clean_file(training_file_path, TRAINING_URL)\n\n eval_file_path = os.path.join(data_dir, EVAL_FILE)\n if not tf.io.gfile.exists(eval_file_path):\n _download_and_clean_file(eval_file_path, EVAL_URL)\n\n return training_file_path, eval_file_path\n\n\ndef upload(train_df, eval_df, train_path, eval_path):\n train_df.to_csv(os.path.join(os.path.dirname(train_path), TRAINING_FILE),\n index=False, header=False)\n eval_df.to_csv(os.path.join(os.path.dirname(eval_path), EVAL_FILE),\n index=False, header=False)\n\n\ndef preprocess(dataframe):\n \"\"\"Converts categorical features to numeric. 
Removes unused columns.\n\n Args:\n dataframe: Pandas dataframe with raw data\n\n Returns:\n Dataframe with preprocessed data\n \"\"\"\n dataframe = dataframe.drop(columns=UNUSED_COLUMNS)\n\n # Convert integer valued (numeric) columns to floating point\n numeric_columns = dataframe.select_dtypes([\"int64\"]).columns\n dataframe[numeric_columns] = dataframe[numeric_columns].astype(\"float32\")\n\n # Convert categorical columns to numeric\n cat_columns = dataframe.select_dtypes([\"object\"]).columns\n dataframe[cat_columns] = dataframe[cat_columns].apply(\n lambda x: x.astype(_CATEGORICAL_TYPES[x.name]))\n dataframe[cat_columns] = dataframe[cat_columns].apply(\n lambda x: x.cat.codes)\n return dataframe\n\n\ndef standardize(dataframe):\n \"\"\"Scales numerical columns using their means and standard deviation.\n\n Args:\n dataframe: Pandas dataframe\n\n Returns:\n Input dataframe with the numerical columns scaled to z-scores\n \"\"\"\n dtypes = list(zip(dataframe.dtypes.index, map(str, dataframe.dtypes)))\n for column, dtype in dtypes:\n if dtype == \"float32\":\n dataframe[column] -= dataframe[column].mean()\n dataframe[column] /= dataframe[column].std()\n return dataframe\n\n\ndef load_data(train_path=\"\", eval_path=\"\"):\n \"\"\"Loads data into preprocessed (train_x, train_y, eval_y, eval_y) dataframes.\n\n Args:\n train_path: Local or GCS path to uploaded train data to.\n eval_path: Local or GCS path to uploaded eval data to.\n\n Returns:\n A tuple (train_x, train_y, eval_x, eval_y), where train_x and eval_x are\n Pandas dataframes with features for training and train_y and eval_y are\n numpy arrays with the corresponding labels.\n \"\"\"\n # Download Census dataset: Training and eval csv files.\n training_file_path, eval_file_path = download(DATA_DIR)\n\n train_df = pd.read_csv(\n training_file_path, names=_CSV_COLUMNS, na_values=\"?\")\n eval_df = pd.read_csv(eval_file_path, names=_CSV_COLUMNS, na_values=\"?\")\n\n train_df = preprocess(train_df)\n eval_df = preprocess(eval_df)\n\n # Split train and eval data with labels. The pop method copies and removes\n # the label column from the dataframe.\n train_x, train_y = train_df, train_df.pop(_LABEL_COLUMN)\n eval_x, eval_y = eval_df, eval_df.pop(_LABEL_COLUMN)\n\n # Join train_x and eval_x to normalize on overall means and standard\n # deviations. Then separate them again.\n all_x = pd.concat([train_x, eval_x], keys=[\"train\", \"eval\"])\n all_x = standardize(all_x)\n train_x, eval_x = all_x.xs(\"train\"), all_x.xs(\"eval\")\n\n # Rejoin features and labels and upload to GCS.\n if train_path and eval_path:\n train_df = train_x.copy()\n train_df[_LABEL_COLUMN] = train_y\n eval_df = eval_x.copy()\n eval_df[_LABEL_COLUMN] = eval_y\n upload(train_df, eval_df, train_path, eval_path)\n\n # Reshape label columns for use with tf.data.Dataset\n train_y = np.asarray(train_y).astype(\"float32\").reshape((-1, 1))\n eval_y = np.asarray(eval_y).astype(\"float32\").reshape((-1, 1))\n\n return train_x, train_y, eval_x, eval_y\n\n", "We perform a similar copy and paste into the sklearn_model.py file, with the addition of a parameter C which we will use for hyperparameter tuning. You can add as much hyperparameters as you requre to tune.", "%%writefile model/sklearn_model.py\n# python3\n# Copyright 2019 Google Inc. 
All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"Train a simple SVM classifier.\"\"\"\n\nimport argparse\nimport numpy as np\nfrom sklearn import svm\n\nfrom model.census_preprocess import load_data\n\n\ndef get_model(params):\n \"\"\"Trains a classifier.\"\"\"\n classifier = svm.SVC(C=params.C)\n return classifier", "We now speify the hyperparameters for our training runs based on the hyperparameter tuning yaml format for CAIP.", "%%writefile hptuning_config.yaml\n# Copyright 2020 Google Inc. All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\ntrainingInput:\n scaleTier: STANDARD_1\n hyperparameters:\n goal: MAXIMIZE\n maxTrials: 2\n maxParallelTrials: 2\n hyperparameterMetricTag: score\n enableTrialEarlyStopping: TRUE\n params:\n - parameterName: C\n type: DOUBLE\n minValue: .001\n maxValue: 10\n scaleType: UNIT_LOG_SCALE\n", "Run the Sklearn Model on CAIP\nWe only modified two yaml files and the demo.py file to specify training, hyperparameter tuning and model prediction. Then, we simply copied and pasted our existing code for preprocessing and building the model. We did not have to write any GCP specific code as yet, this will all be handled by this solution. Now we can submit our jobs to the cloud with a few commands", "from ml_pipeline_gen.models import SklearnModel\nfrom model.census_preprocess import load_data", "Specify the path of your config.yaml file", "config = \"config.yaml\"", "Now, we can easily create our model, generate all the necessary Cloud AI Platform files needed to train the model, upload the data files and train the model in 4 simple commands. Note, our load_data function uploads the files for us automatically, you can also manually upload the files to the buckets you specified in the config.yaml file.", "model = SklearnModel(config)\nmodel.generate_files()\n\n# this fn is from out preprocessing file and\n# automatically uploads our data to GCS\nload_data(model.data[\"train\"], model.data[\"evaluation\"])\n\njob_id = model.train(tune=True)", "After training, we would like to test our model's prediction. First, deploy the model (our code automatically returns a generated version). Then request online predictions.", "pred_input = [\n [0.02599666, 6, 1.1365801, 4, 0, 1, 4, 0.14693314, -0.21713187,\n -0.034039237, 38],\n]\nversion = model.deploy(job_id=job_id)\npreds = model.online_predict(pred_input, version=version)\n\nprint(\"Features: {}\".format(pred_input))\nprint(\"Predictions: {}\".format(preds))" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
danellecline/stoqs
stoqs/contrib/notebooks/classify_data.ipynb
gpl-3.0
[ "Classify Data\nCreate a classifier for different kinds of plankton using supervised machine learning \nExecuting this Notebook requires a personal STOQS database. Follow the steps to build your own development system &mdash; this will take about an hour and depends on a good connection to the Internet. Once your server is up log into it (after a cd ~/Vagrants/stoqsvm) and activate your virtual environment with the usual commands:\nvagrant ssh -- -X\ncd /vagrant/dev/stoqsgit\nsource venv-stoqs/bin/activate\n\nThen load the stoqs_september2013 database with the commands:\ncd stoqs\nln -s mbari_campaigns.py campaigns.py\nexport DATABASE_URL=postgis://stoqsadm:CHANGEME@127.0.0.1:5432/stoqs\nloaders/load.py --db stoqs_september2013\nloaders/load.py --db stoqs_september2013 --updateprovenance\n\nLoading this database can take over a day as there are over 40 million measurments from 22 different platforms. You may want to edit the stoqs/loaders/CANON/loadCANON_september2013.py file and comment all but the loadDorado() method calls at the end of the file. You can also set a stride value or use the --test option to create a stoqs_september2013_t database, in which case you'll need to set the STOQS_CAMPAIGNS envrironment variable: \nexport STOQS_CAMPAIGNS=stoqs_september2013_t\n\nUse the stoqs/contrib/analysis/classify.py script to create some labeled data that we will learn from:\ncontrib/analysis/classify.py --createLabels --groupName Plankton \\\n --database stoqs_september2013 --platform dorado \\\n --start 20130916T124035 --end 20130919T233905 \\\n --inputs bbp700 fl700_uncorr --discriminator salinity \\\n --labels diatom dino1 dino2 sediment \\\n --mins 33.33 33.65 33.70 33.75 --maxes 33.65 33.70 33.75 33.93 --clobber -v\n\nA little explanation is probably warranted here. The Dorado missions on 16-19 September 2013 sampled distinct water types in Monterey Bay that are easily identified by ranges of salinity. These water types contain different kinds of particles as identified by bbp700 (backscatter) and fl700_uncorr (chlorophyll). The previous command \"labeled\" MeasuredParameters in the database according to our understanding of the optical properties of diatoms, dinoflagellates, and sediment. This works for this data set because of the particular oceanographic conditions at the time.\nThis Notebook demonstrates creating a classification algortithm from these labeled data and addresses Issue 227 on GitHub. To be able to execute the cells and experiment with different algortithms and parameters launch Jupyter Notebook with:\ncd contrib/notebooks\n../../manage.py shell_plus --notebook\n\nnavigate to this file and open it. 
You will then be able to execute the cells and experiment with different settings and code.\n\nUse code from the classify module to read data from the database:", "from contrib.analysis.classify import Classifier\nc = Classifier()", "Build up command-line parameters so that we can call methods on our Classifier() object c", "from argparse import Namespace\nns = Namespace()\nns.database = 'stoqs_september2013_t'\nns.classifier='Decision_Tree'\nns.inputs=['bbp700', 'fl700_uncorr']\nns.labels=['diatom', 'dino1', 'dino2', 'sediment']\nns.test_size=0.4\nns.train_size=0.4\nns.verbose=True\nc.args = ns", "Load the labeled data, normalize, and and split into train and test sets (borrowing from classify.py's createClassifier() method)", "from sklearn.preprocessing import StandardScaler\nfrom sklearn.model_selection import train_test_split\nX, y = c.loadLabeledData('Labeled Plankton', classes=('diatom', 'sediment'))\nX = StandardScaler().fit_transform(X)\nX_train, X_test, y_train, y_test = train_test_split(\n X, y, test_size=c.args.test_size, train_size=c.args.train_size)", "Setup plotting", "%pylab inline\nimport pylab as plt\nfrom matplotlib.colors import ListedColormap\nplt.rcParams['figure.figsize'] = (27, 3)", "Plot classifier comparisons as in http://scikit-learn.org/stable/auto_examples/classification/plot_classifier_comparison.html", "for i, (name, clf) in enumerate(c.classifiers.items()):\n x_min, x_max = X[:, 0].min() - .5, X[:, 0].max() + .5\n y_min, y_max = X[:, 1].min() - .5, X[:, 1].max() + .5\n xx, yy = np.meshgrid(np.arange(x_min, x_max, .02),\n np.arange(y_min, y_max, .02))\n \n cm = plt.cm.RdBu\n cm_bright = ListedColormap(['#FF0000', '#0000FF'])\n ax = plt.subplot(1, len(c.classifiers) + 1, i + 1)\n\n clf.fit(X_train, y_train)\n score = clf.score(X_test, y_test)\n\n # Plot the decision boundary. For that, we will assign a color to each\n # point in the mesh [x_min, m_max]x[y_min, y_max].\n if hasattr(clf, \"decision_function\"):\n Z = clf.decision_function(np.c_[xx.ravel(), yy.ravel()])\n else:\n Z = clf.predict_proba(np.c_[xx.ravel(), yy.ravel()])[:, 1]\n\n # Put the result into a color plot\n Z = Z.reshape(xx.shape)\n ax.contourf(xx, yy, Z, cmap=cm, alpha=.8)\n\n # Plot also the training points\n ax.scatter(X_train[:, 0], X_train[:, 1], c=y_train, cmap=cm_bright)\n # and testing points\n ax.scatter(X_test[:, 0], X_test[:, 1], c=y_test, cmap=cm_bright,\n alpha=0.6)\n\n ax.set_xlim(xx.min(), xx.max())\n ax.set_ylim(yy.min(), yy.max())\n ax.set_xticks(())\n ax.set_yticks(())\n ax.set_title(name)\n ax.text(xx.max() - .3, yy.min() + .3, ('%.2f' % score).lstrip('0'),\n size=15, horizontalalignment='right')" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
diging/tethne-notebooks
Feature Co-Occurrence.ipynb
gpl-3.0
[ "from tethne.readers import wos\ndatapath = '/Users/erickpeirson/Downloads/datasets/wos/genecol* OR common garden 1-500.txt'\ncorpus = wos.read(datapath)", "Networks of features based on co-occurrence\nThe features module in the tethne.networks subpackage contains a few functions for generating networks of features based on co-occurrence.", "from tethne.networks import features", "We can use index_feature() to tokenize the abstract into individual words.", "corpus.index_feature('abstract', tokenize=lambda x: x.split(' '))", "Here are all of the papers whose abstracts contain the word 'arthropod':", "abstractTerms = corpus.features['abstract']\n\nabstractTerms.papers_containing('arthropod')", "The transform method allows us to transform the values from one featureset using a custom function. One popular transformation for wordcount data is the term frequency * inverse document frequency (tf*idf) transformation. tf*idf weights wordcounts for each document based on how frequent each word is in the rest of the corpus, and is supposed to bring to the foreground the words that are the most \"important\" for each document.", "from math import log\ndef tfidf(f, c, C, DC):\n \"\"\"\n Apply the term frequency * inverse document frequency transformation.\n \"\"\"\n tf = float(c)\n idf = log(float(len(abstractTerms.features))/float(DC))\n return tf*idf\n\ncorpus.features['abstracts_tfidf'] = abstractTerms.transform(tfidf)", "I can specify some other transformation by first defining a transformer function, and then passing it as an argument to transform. A transformer function should accept the following parameters, and return a single numerical value (int or float).\n| Parameter | Description |\n| --------- | ----------------------------------------------------------------- |\n| f | Representation of the feature (e.g. string). |\n| v | Value of the feature in the document (e.g. frequency). |\n| C | Value of the feature in the Corpus (e.g. global frequency). |\n| DC | Number of documents in which the feature occcurs. |\nFor example:", "def mytransformer(s, c, C, DC):\n \"\"\"\n Doubles the feature value and divides by the overall value in the Corpus.\n \"\"\"\n return c*2./(C)", "We can then pass transformer function to transform as the first positional argument.", "corpus.features['abstracts_transformed'] = abstractTerms.transform(mytransformer)", "Here is the impact on the value for 'arthropod' in one document, using the two transformations above.", "print 'Before: '.ljust(15), corpus.features['abstract'].features['WOS:000324532900018'].value('arthropod')\nprint 'TF*IDF: '.ljust(15), corpus.features['abstracts_tfidf'].features['WOS:000324532900018'].value('arthropod')\nprint 'mytransformer: '.ljust(15), corpus.features['abstracts_transformed'].features['WOS:000324532900018'].value('arthropod')", "We can also use transform() to remove words from our FeatureSet. 
For example, we can apply the NLTK stoplist and remove too-common or too-rare words:", "from nltk.corpus import stopwords\nstoplist = stopwords.words()\n\ndef apply_stoplist(f, v, c, dc):\n if f in stoplist or dc > 50 or dc < 3:\n return 0\n return v\n\ncorpus.features['abstracts_filtered'] = corpus.features['abstracts_tfidf'].transform(apply_stoplist)\n\nprint 'Before: '.ljust(10), len(corpus.features['abstracts_tfidf'].index)\nprint 'After: '.ljust(10), len(corpus.features['abstracts_filtered'].index)", "The mutual_information function in the features module generates a network based on the pointwise mutual information of each pair of features in a featureset.\nThe first argument is a list of Papers, just like most other network-building functions. The second argument is the featureset that we wish to use.", "MI_graph = features.mutual_information(corpus, 'abstracts_filtered', min_weight=0.7)", "Take a look at the ratio of nodes to edges to get a sense of how to tune the min_weight parameter. If you have an extremely high number of edges for the number of nodes, then you should probably increase min_weight to obtain a more legible network. Depending on your field, you may have some guidance from theory as well.", "print 'This graph has {0} nodes and {1} edges'.format(MI_graph.order(), MI_graph.size())", "Once again, we'll use the GraphML writer to generate a visualizable network file.", "from tethne.writers import graph\n\nmi_outpath = '/Users/erickpeirson/Projects/tethne-notebooks/output/mi_graph.graphml'\n\ngraph.to_graphml(MI_graph, mi_outpath)", "" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
amniskin/amniskin.github.io
assets/notebooks/2017/10/07/.ipynb_checkpoints/active_portfolio_management_slides-checkpoint.ipynb
mit
[ "Data Science in Finance\nLousy models are great!\nWe often hear that in the world of hedge funds and seeking alpha (a term we'll go over in a bit), extremely poor models are used and hailed as great achievements. A model with an $R^2$ of 0.1 is great! A model with an $R^2$ of 0.3 is unheard of.\nIs it because the data scientists working in the field are not as good as the Physicists working at the LHC, or the engineers working on Google's search prediction algorithms?\nThe answer might surprise you!\nHungry for coin flips?\nLet's imagine that you happen to have some inside source at the mint who told you that a common quarter is actually not a fair coin. This information is only known to you and your friend. Let's pretend like the probability of getting \"heads\" is actually 0.55. So not anything you'd expect to notice on a short scale, but enough to where if you bet on coin flips enough, you might actually be able to make lots of money.\nWould you bet on those coin flips? Would you consider yourself very lucky for having such privileged information?\nWhat's your $R^2$?\nOur model:\n$$\n\\begin{align}\nY =& \\begin{cases}\n1 & \\text{ if \"heads\"} \\\n0 & \\text{ if \"tails\"}\n\\end{cases} \\\nP(Y=1) =& w \\\n\\hat Y \\equiv& 1\n\\end{align}\n$$\nFirst we need to figure out what our model is! It's not entirely clear that we're using a predictive model, but we are. Our model happens to be very simple: always pick \"heads\".\nFormally, we define a random variable $Y$ such that $Y=0$ if the coin flip results in \"tails\" and $Y=1$ if the coin flip results in \"heads\". Our model is very simple: it takes no input data (so no features), and always returns 1.\nSo let's calculate our $R^2$ value. To do this, we should first calculate SSE and SST. Let $w$ be the probability of getting \"heads\" (just for generality). In our particular case, $w=0.55$.\nNote that the mean we use for this the commonly excepted mean (not the mean your model predicts)!\n$$\n\\begin{align}\n\\text{SSE} =& \\sum\\limits_{i=0}^{n-1}\\left(y_i - \\hat y_i\\right)^2 & \\text{SST} =& \\sum\\limits_{i=0}^{n-1}\\left(y_i - \\bar y\\right)^2 \\\n=& \\sum\\limits_{i=0}^{n-1}\\left(y_i - 1\\right)^2 & =& \\sum\\limits_{i=0}^{n-1}\\left(y_i - 0.5\\right)^2 \\\n=& \\sum\\limits_{i=0}^{n-1}\\left(y_i^2 -2y_i + 1^2\\right) & =& \\sum\\limits_{i=0}^{n-1}\\left(y_i^2 - 2(0.5)y_i + 0.5^2\\right) \\\n=& \\sum\\limits_{i=0}^{n-1}\\left(-y_i + 1\\right) & =& \\sum\\limits_{i=0}^{n-1}\\left(0.25\\right) \\\n=& -nw + n & =& 0.5n \\\n=& n(1-w) & =& 0.5n \\\n\\end{align}\n$$\nSo, our $R^2$ is:\n$$\n\\begin{align}\nR^2 =& 1 - \\frac{\\text{SSE}}{\\text{SST}} \\\n=& 1 - \\frac{n(1-w)}{0.5n} \\\n=& 2w - 1 = 1.1 - 1 = 0.1\n\\end{align}\n$$\nSo we can see that one reason models with such low predictive power succeed so well in finance: it's trade-off between quality and quantity. It's also true that financial data is extremely noisy and there is very little stationarity due to an ever changing landscape of laws and company leaderships, etc.\nNot necessarily bad Data Scientists\nSo what are the bets we're making?\nNot just making money\nFinding which stocks will go up is pretty much a solved problem. Most \"secure\" stocks will rise in price on a long enough time-line. But just because the total price of stocks in your account has risen doesn't mean the value has risen. 
You have to account for the value of money (which is constantly dropping -- inflation).\nTo illustrate this: we could make money by investing in General Electric in 2010 and holding our stock.", "import numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt\n%matplotlib inline\n\nfrom pandas_datareader import DataReader\n\nreader = DataReader([\"AAPL\", \"SPY\", \"GOOG\", \"GE\"], data_source=\"yahoo\")\n\nfig = plt.figure()\nax = reader[\"Adj Close\", :, \"GE\"].plot(label=\"GE\")\nax.legend()\nax.set_title(\"Stock Adjusted Closing Price\")\nplt.savefig(\"img/close_price_GE.png\")\n\ntmp = reader[\"Adj Close\", :, \"GE\"]\na,b = tmp.iloc[0], tmp.iloc[-1]\nprint(a,b)\n\nc = (a - b) / b\nprint(c)\n\nprint((1+c)**(1.0/11) - 1)", "If we'd done that, we would have seen on average about 6% return per year! That's over the average inflation of somewhere between 3-5 percent, so we're looking pretty good, right?\nLooking good?\nWell, yes and no. On the one hand, we did make money (at the expense of some risk, of course). But what if we'd chosen a better company to invest in like Apple? Or what if we'd invested instead in an index fund like Spyder?", "fig = plt.figure()\nax = reader[\"Adj Close\", :, \"SPY\"].plot(label=\"SPY\")\nax = reader[\"Adj Close\", :, \"AAPL\"].plot(label=\"AAPL\", ax=ax)\nax = reader[\"Adj Close\", :, \"GE\"].plot(label=\"GE\", ax=ax)\nax.legend()\nax.set_title(\"Stock Adjusted Closing Price\")\nplt.savefig(\"img/close_price_3.png\")", "Better yet!\nWe can continue this thought process ad infinitum. For instance, we could've invested in Google. Or done something even crazier (a plot I won't show for simplicity reasons) -- volatility trading.", "fig = plt.figure()\nax = reader[\"Adj Close\", :, \"SPY\"].plot(label=\"SPY\")\nax = reader[\"Adj Close\", :, \"AAPL\"].plot(label=\"AAPL\", ax=ax)\nax = reader[\"Adj Close\", :, \"GOOG\"].plot(label=\"GOOG\", ax=ax)\nax = reader[\"Adj Close\", :, \"GE\"].plot(label=\"GE\", ax=ax)\nax.legend()\nax.set_title(\"Stock Adjusted Closing Price\")\nplt.savefig(\"img/close_price_all_4.png\")", "So what's the game?\nActive Portfolio Management\nRichard C. Grinold, Ronald N. Kahn\nThe Hedge Fund Mission\nMake money\nLike Roulette\n\nA paraphrasing\nThe CAPM\nCapital Asset Pricing Model\nRisk\nWhat is it?\nVariance?\nExceptional Returns" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
mikekestemont/ghent1516
Chapter 6 - Regular Expressions.ipynb
mit
[ "Chapter 6 - Regular Expressions\nRegular expressions are another, powerful way of handling strings in Python. You can use them for all sorts of very common string operations, like searching words and replacing them in texts. In fact, regular expressions are often used across many programming languages and text editors. Because of that, you will often be able to reuse many of the things that you will learn about below. The functionality for using regular expressions in Python is included in the 're' package, which you should be able to import as usual:", "import re", "On many occasions, you will want to search a string in your scripts: e.g. does the following word appear in a text? Is the format of the following email address valid and does it contain an @-symbol and a least one dot? To carry out such operations, the first thing you need is a string to search:", "s = \"In principio erat verbum, et verbum erat apud Deum.\"", "The next thing we define is the actual regular expression which we will use, or the string that we will use to search the sentence we defined above. We pass this string to the compile() function in the re package, which will allow fast searching later on. Note that we put an r in front of this string when we initialize it, which turns our string into a so-called 'raw string'. While this is not always necessary, it is a good idea to do this consistently when dealing with regular expressions.", "pattern = re.compile(r\"verbum\")", "Next, we can call the sub() function from the re package on this pattern, in order to replace (or 'substitute') our pattern with another word, like this:", "text = pattern.sub(\"XXX\", s)\nprint(text)", "Note the order of the arguments passed to sub(): first, the word we would like to replace our pattern with, and secondly our original string. We can just as easily get back our original string:", "pattern2 = re.compile(r\"XXX\")\ntext = pattern2.sub(\"verbum\", s)\nprint(text)", "So far nothing special: we are simply replacing one word for another word. The smart ones among you will have noticed that we could have achieved the exact same result using the replace() function, which we came across in an earlier chapter. But now: say you would like to replace all vowels in a string. With regular expressions, this is a piece of cake:", "vowel_pattern = re.compile(r\"a|e|o|u|i\")\nwithout_vowels = vowel_pattern.sub(\"X\", s)\nprint(without_vowels)", "Note how our pattern allows for a special syntax: the pipe symbol which we used allows to express that one character OR another one is fine for the regular expression to match. Oops: the capital letter at the beginning of the sentence hasn't been replaced because we only included lowercase vowels in our pattern definition. Let's add the uppercase vowels to the regex:", "vowel_pattern = re.compile(r\"a|A|e|E|o|O|u|U|i|I\")\nwithout_vowels = vowel_pattern.sub(\"X\", s)\nprint(without_vowels)", "There is in fact an easy way to match all lowercase and uppercase characters in a string, like this:", "ups = re.compile(r\"[A-Z]\")\nlows = re.compile(r\"[a-z]\")\nwithout_ups = ups.sub(\"X\", s)\nprint(without_ups)\nwithout_ups = lows.sub(\"X\", s)\nprint(without_ups)", "These specific patterns are called 'ranges': they will match any lowercase or uppercase letter. 
In fact, you can use such a range syntax using squared brackets, to replace the pipe syntax we used earlier.", "vowel_pattern = re.compile(r\"[aeoui]\")\nwithout_vowels = vowel_pattern.sub(\"X\", s)\nprint(without_vowels)", "You can also look for more specific, as well as longer letter groups by arranging them with round brackets:", "p = re.compile(r\"(ri)|(um)|(Th)\")\nprint(vowel_pattern.sub(\"X\", s))", "There is also a syntax to match any character (except the newline):", "any_char = re.compile(r\".\")\nprint(any_char.sub(\"X\", s))", "If you would like your expression to match an actual dot, you have to escape it using a backslash:", "dot = re.compile(r\"\\.\")\nprint(dot.sub(\"X\", s))", "By the way, there exist more characters that you might have to escape using a backslash. This is because they are part of the syntax that use to define regular expressions: if you don't escape them, Python will not know that you are looking for an literal match. Characters that you typically might want to escape include: ( + ? . * ^ $ ( ) [ ] { } | \\ ) ,. For example:", "s = \"In principio [erat] verbum, et verbum erat apud Deum.\"\nbrackets_wrong = re.compile(r\"[|]\")\nprint(brackets_wrong.sub(\"X\", s))\nbrackets_right = re.compile(r\"(\\[)|(\\])\")\nprint(brackets_right.sub(\"X\", s))", "The syntax for regular expression includes a whole range of possibilities which we simply cannot all deal with it here. Because of that we will stick to a number of helpful examples. An interesting feature is that you can specify whether or not a character really has to occur. You can check whether the pattern occurs in a string using the match() function which will return None if it doesn't find the pattern in the string searched:", "pattern = re.compile(r\"m{2,4}\")\nprint(pattern.match(\"\"))\nprint(pattern.match(\"m\"))\nprint(pattern.match(\"mm\"))\nprint(pattern.match(\"mmm\"))\nprint(pattern.match(\"mmmm\"))\nprint(pattern.match(\"mmmmm\"))\nprint(pattern.match(\"mmmmmm\"))\nprint(pattern.match(\"mmmmammm\"))", "With the curly brackets, you indicate that you are only interested in the letter 'm' if it occurs 2,3 or 4 times in a row in the string you search. Because None is returned if not a single match was found, you can use the outcome of match()in an if-statement. The following example shows how you can also use the curly brackets to match an exact number of occurences (in this case three a's).", "pattern = re.compile(r\"a{5}\")\nif pattern.match(\"aaaaa\"):\n print(\"Found it!\")\nelse:\n print(\"Nope...\")\n# or:\nif pattern.match(\"aa\"):\n print(\"Found it!\")\nelse:\n print(\"Nope...\")", "Using a plus sign you can indicate whether you want to match multiple occurrences of a character. A good example from the world of paper writing are double spaces, which can be hard to spot. In the code block below, we replace all multiple occurences of a whitespace character by a single whitespace character. Note that you can use the whitespace character just like any other character (you don't have to escape it). Multiple occurences of the whitespace character will be matched: it doesn't matter how many, as long as there is at least one:", "paper = \"My thesis on biology contains a lot of double spaces. I will remove them.\"\nmult = re.compile(r\" +\")\nprint(mult.sub(\" \", paper))", "A similar piece of functionality is offered by the asterisk operator: here you can match multiple occurences of the same character in a row OR not a single one. 
Note the subtle difference with respect to the plus operator, which needs at least a single occurence of the character to match. Here we use the search() function which will search the entire string: the match() function which we used earlier will only look for matches at the very beginning of a string. Keep this in mind! The final pattern below yields a match, although there is not a single 'x' in the sentence. That is because the pattern with the asterisk says: \"a single x, or no x at all\".", "s = \"In English some letters occur multiple times in a row.\"\np1 = re.compile(r\"t\")\np2 = re.compile(r\"t*\")\np3 = re.compile(r\"x\")\np4 = re.compile(r\"x*\")\nprint(p1.search(s))\nprint(p2.search(s))\nprint(p3.search(s))\nprint(p4.search(s))", "Interestingly, you also use regular expression to search inside words. Can you explain why the following patterns (don't) match?", "candidates = [\"good\", \"god\", \"gud\", \"gd\"]\np = re.compile(r\"go+d\")\nfor c in candidates:\n print(p.match(c))", "Speaking of words: it might be interesting to know that you can use regular expressions for advanced string splitting. If you want to split a sentence across all whitespace characters for instance, you can use an espaced \\s. This operator will match all whitespace characters, such as tabs, linebreaks, normal spaces etc. If you add a + sign, your pattern will match series of whitespace characters:", "s = \"\"\"This is a text on three lines\nwith multiple instances of \ndouble spaces.\"\"\"\nwhitespace = re.compile(r\"\\s+\")\nprint(whitespace.split(s))", "If you would have wanted to split on the linebreaks only (possible followed by e.g. spaces), you could have used the following pattern:", "s = \"\"\"This is a text on three lines\nwith multiple instances of \ndouble spaces.\"\"\"\nwhitespace = re.compile(r\"\\s*\\n\\s*\")\nprint(whitespace.split(s))", "If we want to correct the double spaces, we could now do:", "ds = re.compile(r\" +\")\nfor line in whitespace.split(s):\n print ds.sub(\" \", line)", "One final feature we should mention is the [^...] syntax: this will match any character that is NOT between the brackets. Remember the vowel_pattern above? Using the caret symbol we can quickly 'invert' this pattern, so that it will match all consonants:", "s = \"these are vowels and consonants\"\nconsonants = re.compile(r\"[^aeuoi]\")\nprint(consonants.sub(\"X\", s))", "Regular expressions are really useful, but they can get tricky as well as difficult to read, because of the many different options that exist. There is a whole range of special symbols which you can use to match nearly everything in a text, from word boundaries (\\b) to digits (\\d) etc. Don't learn these by heart but look up a good reference list online (like http://www.tutorialspoint.com/python/python_reg_expressions.htm). As usual Stackoverflow will prove really useful when you search for information online.\nFinal Exercises Chapter 6\n\nEx. 1 - Write Python code that loads data items from a file that has the format below. Use regular expresions to parse the lines and the data fields: take care of the multiple whitespace characters that might occur. Fill a dictionary using the two data fields. Use regular expressions as much as possible!\n\nExample data:\ncolor = red\nnumber =7\nname= joe\nage = 9\n...\n\nEx. 2 - In the scientific community you will often find data online that has been stored in '.csv' format. Each data item in these files is represented on separate line. 
Write a function that takes a csv-filename as only input parameter and return a lists of lists, containing the data fields for each item.\n\nExample data:\nMike, 28, scientist, Belgium\nLars, 49, research director, Luxemburg\nMatt, 52, rockstar, US\nExample output:\n[[\"Mike\",\"28\",\"scientist\",\"Belgium\"],[\"Lars\",\"49\",\"research director\",\"Luxemburg\"], ...]\n\nEx. 3 - Expand the previous excercise (don't throw away the original version!). Assume that the first line of your csv-file is not a real data-entry, but a so-called header-line that contains the names of the data fields stored in your csv-file. Now, have your function return a list of dictionaries: one for data item, containing for each item the value for each data field which you find.\n\nExample data:\nname, age, profession, country\nMike, 28, scientist, Belgium\nLars, 49, research director, Luxemburg\nMatt, 52, rockstar, US\n...\nExample output:\n[{\"name\": \"Mike\", \"age\": \"28\", \"profession\":\"scientist\", \"country\":\"Belgium\"}, {\"name\": \"Lars\", \"age\": \"49\", \"profession\":\"research director\", \"country\":\"Luxemburg\"]}, ...]\n\n\nEx. 4 - Write a function that reads a random text file, splits the words across whitespace instances and returns a set containing all words that contain at least two characters. Use regular expressions where possible!\n\n\nEx. 5 - Come up with a regular expression that matches time-of-day strings (such as 9:14 am or 11:20 pm).\n\n\nEx. 6 - Write a function that can validate email addresses: a valid email address contains at least one dot, one (and only one!) at-symbol. It should not contain other punctuation symbols and it should end in a common extension like \".com\", \".net\" or \".org\". Again, use regular expressions where possible! \n\n\n\nYou've reached the end of Chapter 6! Ignore the code below, it's only here to make the page pretty:", "from IPython.core.display import HTML\ndef css_styling():\n styles = open(\"styles/custom.css\", \"r\").read()\n return HTML(styles)\ncss_styling()" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
jjonte/udacity-deeplearning-nd
py3/project-1/dlnd-your-first-neural-network.ipynb
unlicense
[ "Your first neural network\nIn this project, you'll build your first neural network and use it to predict daily bike rental ridership. We've provided some of the code, but left the implementation of the neural network up to you (for the most part). After you've submitted this project, feel free to explore the data and the model more.", "%matplotlib inline\n%config InlineBackend.figure_format = 'retina'\n\nimport numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt", "Load and prepare the data\nA critical step in working with neural networks is preparing the data correctly. Variables on different scales make it difficult for the network to efficiently learn the correct weights. Below, we've written the code to load and prepare the data. You'll learn more about this soon!", "data_path = 'Bike-Sharing-Dataset/hour.csv'\n\nrides = pd.read_csv(data_path)\n\nrides.head()", "Checking out the data\nThis dataset has the number of riders for each hour of each day from January 1 2011 to December 31 2012. The number of riders is split between casual and registered, summed up in the cnt column. You can see the first few rows of the data above.\nBelow is a plot showing the number of bike riders over the first 10 days in the data set. You can see the hourly rentals here. This data is pretty complicated! The weekends have lower over all ridership and there are spikes when people are biking to and from work during the week. Looking at the data above, we also have information about temperature, humidity, and windspeed, all of these likely affecting the number of riders. You'll be trying to capture all this with your model.", "rides[:24*10].plot(x='dteday', y='cnt')", "Dummy variables\nHere we have some categorical variables like season, weather, month. To include these in our model, we'll need to make binary dummy variables. This is simple to do with Pandas thanks to get_dummies().", "dummy_fields = ['season', 'weathersit', 'mnth', 'hr', 'weekday']\nfor each in dummy_fields:\n dummies = pd.get_dummies(rides[each], prefix=each, drop_first=False)\n rides = pd.concat([rides, dummies], axis=1)\n\nfields_to_drop = ['instant', 'dteday', 'season', 'weathersit', \n 'weekday', 'atemp', 'mnth', 'workingday', 'hr']\ndata = rides.drop(fields_to_drop, axis=1)\ndata.head()", "Scaling target variables\nTo make training the network easier, we'll standardize each of the continuous variables. That is, we'll shift and scale the variables such that they have zero mean and a standard deviation of 1.\nThe scaling factors are saved so we can go backwards when we use the network for predictions.", "quant_features = ['casual', 'registered', 'cnt', 'temp', 'hum', 'windspeed']\n# Store scalings in a dictionary so we can convert back later\nscaled_features = {}\nfor each in quant_features:\n mean, std = data[each].mean(), data[each].std()\n scaled_features[each] = [mean, std]\n data.loc[:, each] = (data[each] - mean)/std", "Splitting the data into training, testing, and validation sets\nWe'll save the last 21 days of the data to use as a test set after we've trained the network. 
We'll use this set to make predictions and compare them with the actual number of riders.", "# Save the last 21 days \ntest_data = data[-21*24:]\ndata = data[:-21*24]\n\n# Separate the data into features and targets\ntarget_fields = ['cnt', 'casual', 'registered']\nfeatures, targets = data.drop(target_fields, axis=1), data[target_fields]\ntest_features, test_targets = test_data.drop(target_fields, axis=1), test_data[target_fields]", "We'll split the data into two sets, one for training and one for validating as the network is being trained. Since this is time series data, we'll train on historical data, then try to predict on future data (the validation set).", "# Hold out the last 60 days of the remaining data as a validation set\ntrain_features, train_targets = features[:-60*24], targets[:-60*24]\nval_features, val_targets = features[-60*24:], targets[-60*24:]", "Time to build the network\nBelow you'll build your network. We've built out the structure and the backwards pass. You'll implement the forward pass through the network. You'll also set the hyperparameters: the learning rate, the number of hidden units, and the number of training passes.\nThe network has two layers, a hidden layer and an output layer. The hidden layer will use the sigmoid function for activations. The output layer has only one node and is used for the regression, the output of the node is the same as the input of the node. That is, the activation function is $f(x)=x$. A function that takes the input signal and generates an output signal, but takes into account the threshold, is called an activation function. We work through each layer of our network calculating the outputs for each neuron. All of the outputs from one layer become inputs to the neurons on the next layer. This process is called forward propagation.\nWe use the weights to propagate signals forward from the input to the output layers in a neural network. We use the weights to also propagate error backwards from the output back into the network to update our weights. This is called backpropagation.\n\nHint: You'll need the derivative of the output activation function ($f(x) = x$) for the backpropagation implementation. If you aren't familiar with calculus, this function is equivalent to the equation $y = x$. What is the slope of that equation? That is the derivative of $f(x)$.\n\nBelow, you have these tasks:\n1. Implement the sigmoid function to use as the activation function. Set self.activation_function in __init__ to your sigmoid function.\n2. Implement the forward pass in the train method.\n3. Implement the backpropagation algorithm in the train method, including calculating the output error.\n4. 
Implement the forward pass in the run method.", "class NeuralNetwork(object):\n \n @staticmethod\n def sigmoid(x):\n return 1 / (1 + np.exp(-x)) \n \n def __init__(self, input_nodes, hidden_nodes, output_nodes, learning_rate):\n # Set number of nodes in input, hidden and output layers.\n self.input_nodes = input_nodes\n self.hidden_nodes = hidden_nodes\n self.output_nodes = output_nodes\n\n # Initialize weights\n self.weights_input_to_hidden = np.random.normal(0.0, self.hidden_nodes**-0.5,\n (self.hidden_nodes, self.input_nodes))\n\n self.weights_hidden_to_output = np.random.normal(0.0, self.output_nodes**-0.5,\n (self.output_nodes, self.hidden_nodes))\n self.lr = learning_rate\n\n self.activation_function = NeuralNetwork.sigmoid\n\n def train(self, inputs_list, targets_list):\n # Convert inputs list to 2d array\n inputs = np.array(inputs_list, ndmin=2).T\n targets = np.array(targets_list, ndmin=2).T\n\n ### Forward pass ###\n hidden_inputs = np.dot(self.weights_input_to_hidden, inputs)\n hidden_outputs = self.activation_function(hidden_inputs)\n\n final_inputs = np.dot(self.weights_hidden_to_output, hidden_outputs)\n final_outputs = final_inputs\n\n ### Backward pass ###\n output_errors = targets - final_outputs\n\n hidden_errors = np.dot(self.weights_hidden_to_output.T, output_errors)\n hidden_grad = hidden_outputs * (1 - hidden_outputs)\n\n self.weights_hidden_to_output += self.lr * (output_errors * hidden_outputs).T\n self.weights_input_to_hidden += self.lr * np.dot((hidden_errors * hidden_grad), inputs.T)\n\n def run(self, inputs_list):\n inputs = np.array(inputs_list, ndmin=2).T\n\n hidden_inputs = np.dot(self.weights_input_to_hidden, inputs)\n hidden_outputs = self.activation_function(hidden_inputs)\n\n final_inputs = np.dot(self.weights_hidden_to_output, hidden_outputs)\n final_outputs = final_inputs\n \n return final_outputs\n\ndef MSE(y, Y):\n return np.mean((y-Y)**2)", "Training the network\nHere you'll set the hyperparameters for the network. The strategy here is to find hyperparameters such that the error on the training set is low, but you're not overfitting to the data. If you train the network too long or have too many hidden nodes, it can become overly specific to the training set and will fail to generalize to the validation set. That is, the loss on the validation set will start increasing as the training set loss drops.\nYou'll also be using a method know as Stochastic Gradient Descent (SGD) to train the network. The idea is that for each training pass, you grab a random sample of the data instead of using the whole data set. You use many more training passes than with normal gradient descent, but each pass is much faster. This ends up training the network more efficiently. You'll learn more about SGD later.\nChoose the number of epochs\nThis is the number of times the dataset will pass through the network, each time updating the weights. As the number of epochs increases, the network becomes better and better at predicting the targets in the training set. You'll need to choose enough epochs to train the network well but not too many or you'll be overfitting.\nChoose the learning rate\nThis scales the size of weight updates. If this is too big, the weights tend to explode and the network fails to fit the data. A good choice to start at is 0.1. If the network has problems fitting the data, try reducing the learning rate. 
Note that the lower the learning rate, the smaller the steps are in the weight updates and the longer it takes for the neural network to converge.\nChoose the number of hidden nodes\nThe more hidden nodes you have, the more accurate predictions the model will make. Try a few different numbers and see how it affects the performance. You can look at the losses dictionary for a metric of the network performance. If the number of hidden units is too low, then the model won't have enough space to learn and if it is too high there are too many options for the direction that the learning can take. The trick here is to find the right balance in number of hidden units you choose.", "import sys\n\n### Set the hyperparameters here ###\nepochs = 1000\nlearning_rate = 0.1\nhidden_nodes = 10\noutput_nodes = 1\n\nN_i = train_features.shape[1]\nnetwork = NeuralNetwork(N_i, hidden_nodes, output_nodes, learning_rate)\n\nlosses = {'train':[], 'validation':[]}\nfor e in range(epochs):\n # Go through a random batch of 128 records from the training data set\n batch = np.random.choice(train_features.index, size=128)\n for record, target in zip(train_features.ix[batch].values, \n train_targets.ix[batch]['cnt']):\n network.train(record, target)\n \n # Printing out the training progress\n train_loss = MSE(network.run(train_features), train_targets['cnt'].values)\n val_loss = MSE(network.run(val_features), val_targets['cnt'].values)\n sys.stdout.write(\"\\rProgress: \" + str(100 * e/float(epochs))[:4] \\\n + \"% ... Training loss: \" + str(train_loss)[:5] \\\n + \" ... Validation loss: \" + str(val_loss)[:5])\n \n losses['train'].append(train_loss)\n losses['validation'].append(val_loss)\n\nplt.plot(losses['train'], label='Training loss')\nplt.plot(losses['validation'], label='Validation loss')\nplt.legend()\nplt.ylim(ymax=0.5)", "Check out your predictions\nHere, use the test data to view how well your network is modeling the data. If something is completely wrong here, make sure each step in your network is implemented correctly.", "fig, ax = plt.subplots(figsize=(8,4))\n\nmean, std = scaled_features['cnt']\npredictions = network.run(test_features)*std + mean\nax.plot(predictions[0], label='Prediction')\nax.plot((test_targets['cnt']*std + mean).values, label='Data')\nax.set_xlim(right=len(predictions))\nax.legend()\n\ndates = pd.to_datetime(rides.ix[test_data.index]['dteday'])\ndates = dates.apply(lambda d: d.strftime('%b %d'))\nax.set_xticks(np.arange(len(dates))[12::24])\n_ = ax.set_xticklabels(dates[12::24], rotation=45)", "Thinking about your results\nAnswer these questions about your results. How well does the model predict the data? Where does it fail? Why does it fail where it does?\n\nNote: You can edit the text in this cell by double clicking on it. When you want to render the text, press control + enter\n\nYour answer below\nIt does pretty well up until Dec 22, then the accuracy drops dramatically - it thinks the demand would be higher than it really is for the last 10 days of the year. I would guess the impact of the Christmas holiday and the time people take time off work would cause this.\nUnit tests\nRun these unit tests to check the correctness of your network implementation. 
These tests must all be successful to pass the project.", "import unittest\n\ninputs = [0.5, -0.2, 0.1]\ntargets = [0.4]\ntest_w_i_h = np.array([[0.1, 0.4, -0.3], \n [-0.2, 0.5, 0.2]])\ntest_w_h_o = np.array([[0.3, -0.1]])\n\nclass TestMethods(unittest.TestCase):\n \n ##########\n # Unit tests for data loading\n ##########\n \n def test_data_path(self):\n # Test that file path to dataset has been unaltered\n self.assertTrue(data_path.lower() == 'bike-sharing-dataset/hour.csv')\n \n def test_data_loaded(self):\n # Test that data frame loaded\n self.assertTrue(isinstance(rides, pd.DataFrame))\n \n ##########\n # Unit tests for network functionality\n ##########\n\n def test_activation(self):\n network = NeuralNetwork(3, 2, 1, 0.5)\n # Test that the activation function is a sigmoid\n self.assertTrue(np.all(network.activation_function(0.5) == 1/(1+np.exp(-0.5))))\n\n def test_train(self):\n # Test that weights are updated correctly on training\n network = NeuralNetwork(3, 2, 1, 0.5)\n network.weights_input_to_hidden = test_w_i_h.copy()\n network.weights_hidden_to_output = test_w_h_o.copy()\n \n network.train(inputs, targets)\n self.assertTrue(np.allclose(network.weights_hidden_to_output, \n np.array([[ 0.37275328, -0.03172939]])))\n self.assertTrue(np.allclose(network.weights_input_to_hidden,\n np.array([[ 0.10562014, 0.39775194, -0.29887597],\n [-0.20185996, 0.50074398, 0.19962801]])))\n\n def test_run(self):\n # Test correctness of run method\n network = NeuralNetwork(3, 2, 1, 0.5)\n network.weights_input_to_hidden = test_w_i_h.copy()\n network.weights_hidden_to_output = test_w_h_o.copy()\n\n self.assertTrue(np.allclose(network.run(inputs), 0.09998924))\n\nsuite = unittest.TestLoader().loadTestsFromModule(TestMethods())\nunittest.TextTestRunner().run(suite)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
mnschmit/LMU-Syntax-nat-rlicher-Sprachen
11-notebook-after-class.ipynb
apache-2.0
[ "Übungsblatt 11\nPräsenzaufgaben\nAufgabe 1 &nbsp;&nbsp;&nbsp; Grammatikinduktion\nIn dieser Aufgabe soll vollautomatisch aus Daten (Syntaxbäumen) eine probabilistische, kontextfreie Grammatik erzeugt werden.\nFüllen Sie die Lücken und versuchen Sie mithilfe Ihrer automatisch erstellten Grammatik die folgenden Sätze zu parsen:", "test_sentences = [\n \"the men saw a car .\",\n \"the woman gave the man a book .\",\n \"she gave a book to the man .\",\n \"yesterday , all my trouble seemed so far away .\"\n]\n\nimport nltk\nfrom nltk.corpus import treebank\nfrom nltk.grammar import ProbabilisticProduction, PCFG\n\n# Production count: the number of times a given production occurs\npcount = {}\n\n# LHS-count: counts the number of times a given lhs occurs\nlcount = {}\n\nfor tree in treebank.parsed_sents():\n for prod in tree.productions():\n pcount[prod] = pcount.get(prod, 0) + 1\n lcount[prod.lhs()] = lcount.get(prod.lhs(), 0) + 1\n \nproductions = [\n ProbabilisticProduction(\n p.lhs(), p.rhs(),\n prob=pcount[p] / lcount[p.lhs()]\n )\n for p in pcount\n]\n\nstart = nltk.Nonterminal('S')\ngrammar = PCFG(start, productions)\nparser = nltk.ViterbiParser(grammar)\n\nfrom IPython.display import display\n\nfor s in test_sentences:\n for t in parser.parse(s.split()):\n display(t)", "Aufgabe 2 &nbsp;&nbsp;&nbsp; Informationsextraktion per Syntaxanalyse\nGegenstand dieser Aufgabe ist eine anwendungsnahe Möglichkeit, Ergebnisse einer Syntaxanalyse weiterzuverarbeiten. Aus den syntaktischen Abhängigkeiten eines Textes soll (unter Zuhilfenahme einiger Normalisierungsschritte) eine semantische Repräsentation der im Text enthaltenen Informationen gewonnen werden.\nFür die syntaktische Analyse soll der DependencyParser der Stanford CoreNLP Suite verwendet werden. Die semantische Repräsentation eines Satzes sei ein zweistelliges, logisches Prädikat, dessen Argumente durch Subjekt und Objekt gefüllt sind. 
(Bei Fehlen eines der beiden Elemente soll None geschrieben werden.)\nFolgendes Beispiel illustriert das gewünschte Ergebnis:\nEingabe:\nI shot an elephant in my pajamas.\nThe elephant was seen by a giraffe in the desert.\nThe bird I need is a raven.\nThe man who saw the raven laughed out loud.\n\nAusgabe:\nshot(I, elephant)\nseen(giraffe, elephant)\nneed(I, bird)\nraven(bird, None)\nsaw(man, raven)\nlaughed(man, None)\n\nBeachten Sie, dass PATH_TO_CORE in folgender Code-Zelle Ihrem System entsprechend angepasst werden muss!", "from nltk.parse.stanford import StanfordDependencyParser\n\nPATH_TO_CORE = \"/pfad/zu/stanford-corenlp-full-2017-06-09\"\njar = PATH_TO_CORE + '/' + \"stanford-corenlp-3.8.0.jar\"\nmodel = PATH_TO_CORE + '/' + \"stanford-corenlp-3.8.0-models.jar\"\n\ndep_parser = StanfordDependencyParser(\n jar, model,\n model_path=\"edu/stanford/nlp/models/lexparser/englishPCFG.ser.gz\"\n)\n\nfrom collections import defaultdict\n\ndef generate_predicates_for_sentence(sentence):\n verbs = set()\n sbj = {}\n obj = {}\n sbj_candidates = defaultdict(list)\n case = {}\n relcl_triples = []\n for result in dep_parser.raw_parse(sentence):\n for triple in result.triples():\n # print(*triple)\n \n if triple[1] == \"nsubj\":\n # whenever we find a subject, its head can be called verb\n # if something is added twice it does not matter --> sets\n # so it is better to add too often than not enough !\n # remember that nouns can be \"verbs\" in that sense together with copula\n verbs.add(triple[0])\n sbj[triple[0]] = triple[2]\n \n if triple[1] == \"dobj\" or triple[1] == \"nsubjpass\":\n # everything that has a direct object should be called a verb as well\n verbs.add(triple[0])\n obj[triple[0]] = triple[2]\n \n if triple[0][1].startswith('V'):\n # everything with a 'verb' as part of speech can be called a verb\n verbs.add(triple[0])\n if triple[1] == \"nmod\":\n sbj_candidates[triple[0]].append(triple[2])\n \n if triple[1] == \"case\":\n case[triple[0]] = triple[2][0]\n \n if triple[1] == \"acl:relcl\":\n relcl_triples.append(triple)\n \n for triple in relcl_triples:\n if triple[2] not in sbj or sbj[triple[2]][1] in [\"WP\", \"WDT\"]:\n sbj[triple[2]] = triple[0]\n else:\n obj[triple[2]] = triple[0]\n \n for v in verbs:\n if v not in sbj:\n if v in sbj_candidates:\n for cand in sbj_candidates[v]:\n if case[cand] == \"by\":\n sbj[v] = cand\n \n predicates = []\n for v in verbs:\n if v in sbj:\n subject = sbj[v]\n else:\n subject = (\"None\",)\n if v in obj:\n object = obj[v]\n else:\n object = (\"None\",)\n predicates.append(\n v[0] + \"(\" + subject[0] + \", \" + object[0] + \")\"\n )\n \n return predicates\n\nfor pred in generate_predicates_for_sentence(\n \"The man who saw the raven laughed out loud.\"\n):\n print(pred)\n\ndef generate_predicates_for_text(text):\n predicates = []\n for sent in nltk.tokenize.sent_tokenize(text):\n predicates.extend(generate_predicates_for_sentence(sent))\n return predicates\n\ntext = \"\"\"\nI shot an elephant in my pajamas.\nThe elephant was seen by a giraffe.\nThe bird I need is a raven.\nThe man who saw the raven laughed out loud.\n\"\"\"\n\nfor pred in generate_predicates_for_text(text):\n print(pred)", "Hausaufgaben\nAufgabe 3 &nbsp;&nbsp;&nbsp; Parent Annotation\nParent Annotation kann die Performanz einer CFG wesentlich verbessern. Schreiben Sie eine Funktion, die einen gegebenen Syntaxbaum dieser Optimierung unterzieht. 
Auf diese Art und Weise transformierte Bäume können dann wiederum zur Grammatikinduktion verwendet werden.\nparentHistory soll dabei die Anzahl der Vorgänger sein, die zusätzlich zum direkten Elternknoten berücksichtigt werden. (Kann bei der Lösung der Aufgabe auch ignoriert werden.)\nparentChar soll ein Trennzeichen sein, das bei den neuen Knotenlabels zwischen dem ursprünglichen Knotenlabel und der Liste von Vorgängern eingefügt wird.", "def parent_annotation(tree, parentHistory=0, parentChar=\"^\"):\n pass\n\ntest_tree = nltk.Tree(\n \"S\",\n [\n nltk.Tree(\"NP\", [\n nltk.Tree(\"DET\", []),\n nltk.Tree(\"N\", [])\n ]),\n nltk.Tree(\"VP\", [\n nltk.Tree(\"V\", []),\n nltk.Tree(\"NP\", [\n nltk.Tree(\"DET\", []),\n nltk.Tree(\"N\", [])\n ])\n ])\n ]\n)\n\nparent_annotation(\n test_tree\n)", "Aufgabe 4 &nbsp;&nbsp;&nbsp; Mehr Semantik für IE\nZusätzlich zu den in Aufgabe 2 behandelten Konstruktionen sollen jetzt auch negierte und komplexe Sätze mit Konjunktionen sinnvoll verarbeitet werden.\nEingabe:\nI see an elephant.\nYou didn't see the elephant.\nPeter saw the elephant and drank wine.\n\nGewünschte Ausgabe:\nsee(I, elephant)\nnot_see(You, elephant)\nsaw(Peter, elephant)\ndrank(Peter, wine)\n\nKopieren Sie am besten Ihren aktuellen Stand von oben herunter und fügen Sie Ihre Erweiterungen dann hier ein.", "def generate_predicates_for_sentence(sentence):\n pass\ndef generate_predicates_for_text(text):\n pass\n\ntext = \"\"\"\nI see an elephant.\nYou didn't see the elephant.\nPeter saw the elephant and drank wine.\n\"\"\"" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
dostrebel/working_place_ds_17
06_Python_Rückblick/01+Rückblick+02+For-Loop-Übungen+-Copy1.ipynb
mit
[ "10 For-Loop-Rückblick-Übungen\nIn den Teilen der folgenden Übungen habe ich den Code mit \"XXX\" ausgewechselt. Es gilt in allen Übungen, den korrekten Code auszuführen und die Zelle dann auszuführen. \n1.Drucke alle diese Prim-Zahlen aus:", "primzweibissieben = [2, 3, 5, 7]\nfor prime in primzweibissieben:\n print(prime)", "2.Drucke alle die Zahlen von 0 bis 4 aus:", "for x in range(5):\n print(x)\n\nfor x in range(3, 6):\n print(x)\n", "4.Baue einen For-Loop, indem Du alle geraden Zahlen ausdruckst, die tiefer sind als 237.", "numbers = [\n 951, 402, 984, 651, 360, 69, 408, 319, 601, 485, 980, 507, 725, 547, 544,\n 615, 83, 165, 141, 501, 263, 617, 865, 575, 219, 390, 984, 592, 236, 105, 942, 941,\n 386, 462, 47, 418, 907, 344, 236, 375, 823, 566, 597, 978, 328, 615, 953, 345,\n 399, 162, 758, 219, 918, 237, 412, 566, 826, 248, 866, 950, 626, 949, 687, 217,\n 815, 67, 104, 58, 512, 24, 892, 894, 767, 553, 81, 379, 843, 831, 445, 742, 717,\n 958, 609, 842, 451, 688, 753, 854, 685, 93, 857, 440, 380, 126, 721, 328, 753, 470,\n 743, 527\n]\n\n# Hier kommt Dein Code:\nnew_lst = []\nfor elem in numbers:\n if elem < 238 and elem % 2 == 0:\n new_lst.append(elem)\n else:\n continue\nprint(new_lst)\n\n\n\n\n\n\n#Lösung:", "5.Addiere alle Zahlen in der Liste", "sum(numbers)\n\n#Lösung:", "6.Addiere nur die Zahlen, die gerade sind", "evennumber = []\nfor elem in numbers:\n if elem % 2 == 0:\n evennumber.append(elem)\nsum(evennumber)", "7.Drucke mit einem For Loop 5 Mal hintereinander Hello World aus", "Satz = ['Hello World', 'Hello World','Hello World','Hello World','Hello World']\nfor elem in Satz:\n print(elem)\n\n#Lösung", "8.Entwickle ein Programm, das alle Nummern zwischen 2000 und 3200 findet, die durch 7, aber nicht durch 5 teilbar sind. Das Ergebnis sollte auf einer Zeile ausgedruckt werden. Tipp: Schaue Dir hier die Vergleichsoperanden von Python an.", "l=[]\nfor i in range(2000, 3201):\n if (i % 7==0) and (i % 5!=0):\n l.append(str(i))\n\nprint(','.join(l))", "9.Schreibe einen For Loop, der die Nummern in der folgenden Liste von int in str verwandelt.", "lst = range(45,99)\n\nnewlst = []\nfor i in lst:\n i = str(i)\n newlst.append(i)\nprint(newlst)\n\n\n", "10.Schreibe nun ein Programm, das alle Ziffern 4 mit dem Buchstaben A ersetzte, alle Ziffern 5 mit dem Buchtaben B.", "newnewlist = []\nfor elem in newlst:\n if '4' in elem:\n elem = elem.replace('4', 'A')\n if '5' in elem:\n elem = elem.replace('5', 'B')\n newnewlist.append(elem)\n\nnewnewlist" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
jseabold/statsmodels
examples/notebooks/mixed_lm_example.ipynb
bsd-3-clause
[ "Linear Mixed Effects Models", "%matplotlib inline\n\nimport numpy as np\nimport pandas as pd\nimport statsmodels.api as sm\nimport statsmodels.formula.api as smf\nfrom statsmodels.tools.sm_exceptions import ConvergenceWarning", "Note: The R code and the results in this notebook has been converted to markdown so that R is not required to build the documents. The R results in the notebook were computed using R 3.5.1 and lme4 1.1.\nipython\n%load_ext rpy2.ipython\nipython\n%R library(lme4)\narray(['lme4', 'Matrix', 'tools', 'stats', 'graphics', 'grDevices',\n 'utils', 'datasets', 'methods', 'base'], dtype='&lt;U9')\nComparing R lmer to statsmodels MixedLM\nThe statsmodels imputation of linear mixed models (MixedLM) closely follows the approach outlined in Lindstrom and Bates (JASA 1988). This is also the approach followed in the R package LME4. Other packages such as Stata, SAS, etc. should also be consistent with this approach, as the basic techniques in this area are mostly mature.\nHere we show how linear mixed models can be fit using the MixedLM procedure in statsmodels. Results from R (LME4) are included for comparison.\nHere are our import statements:\nGrowth curves of pigs\nThese are longitudinal data from a factorial experiment. The outcome variable is the weight of each pig, and the only predictor variable we will use here is \"time\". First we fit a model that expresses the mean weight as a linear function of time, with a random intercept for each pig. The model is specified using formulas. Since the random effects structure is not specified, the default random effects structure (a random intercept for each group) is automatically used.", "data = sm.datasets.get_rdataset('dietox', 'geepack').data\nmd = smf.mixedlm(\"Weight ~ Time\", data, groups=data[\"Pig\"])\nmdf = md.fit(method=[\"lbfgs\"])\nprint(mdf.summary())", "Here is the same model fit in R using LMER:\nipython\n%%R\ndata(dietox, package='geepack')\nipython\n%R print(summary(lmer('Weight ~ Time + (1|Pig)', data=dietox)))\n```\nLinear mixed model fit by REML ['lmerMod']\nFormula: Weight ~ Time + (1 | Pig)\n Data: dietox\nREML criterion at convergence: 4809.6\nScaled residuals: \n Min 1Q Median 3Q Max \n-4.7118 -0.5696 -0.0943 0.4877 4.7732 \nRandom effects:\n Groups Name Variance Std.Dev.\n Pig (Intercept) 40.39 6.356 \n Residual 11.37 3.371 \nNumber of obs: 861, groups: Pig, 72\nFixed effects:\n Estimate Std. Error t value\n(Intercept) 15.72352 0.78805 19.95\nTime 6.94251 0.03339 207.94\nCorrelation of Fixed Effects:\n (Intr)\nTime -0.275\n```\nNote that in the statsmodels summary of results, the fixed effects and random effects parameter estimates are shown in a single table. The random effect for animal is labeled \"Intercept RE\" in the statsmodels output above. In the LME4 output, this effect is the pig intercept under the random effects section.\nThere has been a lot of debate about whether the standard errors for random effect variance and covariance parameters are useful. In LME4, these standard errors are not displayed, because the authors of the package believe they are not very informative. While there is good reason to question their utility, we elected to include the standard errors in the summary table, but do not show the corresponding Wald confidence intervals.\nNext we fit a model with two random effects for each animal: a random intercept, and a random slope (with respect to time). This means that each pig may have a different baseline weight, as well as growing at a different rate. 
The formula specifies that \"Time\" is a covariate with a random coefficient. By default, formulas always include an intercept (which could be suppressed here using \"0 + Time\" as the formula).", "md = smf.mixedlm(\"Weight ~ Time\", data, groups=data[\"Pig\"], re_formula=\"~Time\")\nmdf = md.fit(method=[\"lbfgs\"])\nprint(mdf.summary())", "Here is the same model fit using LMER in R:\nipython\n%R print(summary(lmer(\"Weight ~ Time + (1 + Time | Pig)\", data=dietox)))\n```\nLinear mixed model fit by REML ['lmerMod']\nFormula: Weight ~ Time + (1 + Time | Pig)\n Data: dietox\nREML criterion at convergence: 4434.1\nScaled residuals: \n Min 1Q Median 3Q Max \n-6.4286 -0.5529 -0.0416 0.4841 3.5624 \nRandom effects:\n Groups Name Variance Std.Dev. Corr\n Pig (Intercept) 19.493 4.415 \n Time 0.416 0.645 0.10\n Residual 6.038 2.457 \nNumber of obs: 861, groups: Pig, 72\nFixed effects:\n Estimate Std. Error t value\n(Intercept) 15.73865 0.55012 28.61\nTime 6.93901 0.07982 86.93\nCorrelation of Fixed Effects:\n (Intr)\nTime 0.006 \n```\nThe random intercept and random slope are only weakly correlated $(0.294 / \\sqrt{19.493 * 0.416} \\approx 0.1)$. So next we fit a model in which the two random effects are constrained to be uncorrelated:", ".294 / (19.493 * .416)**.5\n\nmd = smf.mixedlm(\"Weight ~ Time\", data, groups=data[\"Pig\"],\n re_formula=\"~Time\")\nfree = sm.regression.mixed_linear_model.MixedLMParams.from_components(np.ones(2),\n np.eye(2))\n\nmdf = md.fit(free=free, method=[\"lbfgs\"])\nprint(mdf.summary())", "The likelihood drops by 0.3 when we fix the correlation parameter to 0. Comparing 2 x 0.3 = 0.6 to the chi^2 1 df reference distribution suggests that the data are very consistent with a model in which this parameter is equal to 0.\nHere is the same model fit using LMER in R (note that here R is reporting the REML criterion instead of the likelihood, where the REML criterion is twice the log likelihood):\nipython\n%R print(summary(lmer(\"Weight ~ Time + (1 | Pig) + (0 + Time | Pig)\", data=dietox)))\n```\nLinear mixed model fit by REML ['lmerMod']\nFormula: Weight ~ Time + (1 | Pig) + (0 + Time | Pig)\n Data: dietox\nREML criterion at convergence: 4434.7\nScaled residuals: \n Min 1Q Median 3Q Max \n-6.4281 -0.5527 -0.0405 0.4840 3.5661 \nRandom effects:\n Groups Name Variance Std.Dev.\n Pig (Intercept) 19.8404 4.4543\n Pig.1 Time 0.4234 0.6507\n Residual 6.0282 2.4552\nNumber of obs: 861, groups: Pig, 72\nFixed effects:\n Estimate Std. Error t value\n(Intercept) 15.73875 0.55444 28.39\nTime 6.93899 0.08045 86.25\nCorrelation of Fixed Effects:\n (Intr)\nTime -0.086\n```\nSitka growth data\nThis is one of the example data sets provided in the LMER R library. The outcome variable is the size of the tree, and the covariate used here is a time value. The data are grouped by tree.", "data = sm.datasets.get_rdataset(\"Sitka\", \"MASS\").data\nendog = data[\"size\"]\ndata[\"Intercept\"] = 1\nexog = data[[\"Intercept\", \"Time\"]]", "Here is the statsmodels LME fit for a basic model with a random intercept. We are passing the endog and exog data directly to the LME init function as arrays. 
Also note that endog_re is specified explicitly in argument 4 as a random intercept (although this would also be the default if it were not specified).", "md = sm.MixedLM(endog, exog, groups=data[\"tree\"], exog_re=exog[\"Intercept\"])\nmdf = md.fit()\nprint(mdf.summary())", "Here is the same model fit in R using LMER:\nipython\n%R\ndata(Sitka, package=\"MASS\")\nprint(summary(lmer(\"size ~ Time + (1 | tree)\", data=Sitka)))\n```\nLinear mixed model fit by REML ['lmerMod']\nFormula: size ~ Time + (1 | tree)\n Data: Sitka\nREML criterion at convergence: 164.8\nScaled residuals: \n Min 1Q Median 3Q Max \n-2.9979 -0.5169 0.1576 0.5392 4.4012 \nRandom effects:\n Groups Name Variance Std.Dev.\n tree (Intercept) 0.37451 0.612 \n Residual 0.03921 0.198 \nNumber of obs: 395, groups: tree, 79\nFixed effects:\n Estimate Std. Error t value\n(Intercept) 2.2732443 0.0878955 25.86\nTime 0.0126855 0.0002654 47.80\nCorrelation of Fixed Effects:\n (Intr)\nTime -0.611\n```\nWe can now try to add a random slope. We start with R this time. From the code and output below we see that the REML estimate of the variance of the random slope is nearly zero.\nipython\n%R print(summary(lmer(\"size ~ Time + (1 + Time | tree)\", data=Sitka)))\n```\nLinear mixed model fit by REML ['lmerMod']\nFormula: size ~ Time + (1 + Time | tree)\n Data: Sitka\nREML criterion at convergence: 153.4\nScaled residuals: \n Min 1Q Median 3Q Max \n-2.7609 -0.5173 0.1188 0.5270 3.5466 \nRandom effects:\n Groups Name Variance Std.Dev. Corr \n tree (Intercept) 2.217e-01 0.470842 \n Time 3.288e-06 0.001813 -0.17\n Residual 3.634e-02 0.190642 \nNumber of obs: 395, groups: tree, 79\nFixed effects:\n Estimate Std. Error t value\n(Intercept) 2.273244 0.074655 30.45\nTime 0.012686 0.000327 38.80\nCorrelation of Fixed Effects:\n (Intr)\nTime -0.615\nconvergence code: 0\nModel failed to converge with max|grad| = 0.793203 (tol = 0.002, component 1)\nModel is nearly unidentifiable: very large eigenvalue\n - Rescale variables?\n ```\nIf we run this in statsmodels LME with defaults, we see that the variance estimate is indeed very small, which leads to a warning about the solution being on the boundary of the parameter space. The regression slopes agree very well with R, but the likelihood value is much higher than that returned by R.", "exog_re = exog.copy()\nmd = sm.MixedLM(endog, exog, data[\"tree\"], exog_re)\nmdf = md.fit()\nprint(mdf.summary())", "We can further explore the random effects structure by constructing plots of the profile likelihoods. We start with the random intercept, generating a plot of the profile likelihood from 0.1 units below to 0.1 units above the MLE. Since each optimization inside the profile likelihood generates a warning (due to the random slope variance being close to zero), we turn off the warnings here.", "import warnings\n\nwith warnings.catch_warnings():\n warnings.filterwarnings(\"ignore\")\n likev = mdf.profile_re(0, 're', dist_low=0.1, dist_high=0.1)", "Here is a plot of the profile likelihood function. We multiply the log-likelihood difference by 2 to obtain the usual $\\chi^2$ reference distribution with 1 degree of freedom.", "import matplotlib.pyplot as plt\n\nplt.figure(figsize=(10,8))\nplt.plot(likev[:,0], 2*likev[:,1])\nplt.xlabel(\"Variance of random slope\", size=17)\nplt.ylabel(\"-2 times profile log likelihood\", size=17)", "Here is a plot of the profile likelihood function. 
The profile likelihood plot shows that the MLE of the random slope variance parameter is a very small positive number, and that there is low uncertainty in this estimate.", "re = mdf.cov_re.iloc[1, 1]\nwith warnings.catch_warnings():\n # Parameter is often on the boundary\n warnings.simplefilter(\"ignore\", ConvergenceWarning)\n likev = mdf.profile_re(1, 're', dist_low=.5*re, dist_high=0.8*re)\n\nplt.figure(figsize=(10, 8))\nplt.plot(likev[:,0], 2*likev[:,1])\nplt.xlabel(\"Variance of random slope\", size=17)\nlbl = plt.ylabel(\"-2 times profile log likelihood\", size=17)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
rmorshea/dstruct
examples/advanced.ipynb
mit
[ "advanced dstruct usage\n\n1. we repeate a part of the basic example to make an account summary", "from dstruct import DataStructFromJSON, DataField, datafield\n\n# taken from the basic example\nclass AccountSummaryFromFile(DataStructFromJSON):\n \n user = DataField()\n type = DataField('account', 'account-type')\n ballance = DataField('account', 'account-ballance')\n # you can pass functions under the keyword \"parser\" to parse raw data parsing\n account_number = DataField('account', 'account-number', parser=lambda s: 'X'*len(s[:-4])+s[-4:])\n\nAccountSummaryFromFile('data_files/bank_data.json')", "2. use nested DataStructs\n\nnesting DataStructs allows for much more complex parsing patterns.", "from dstruct import DataStruct\nfrom datetime import datetime\n\nclass Transaction(DataStruct):\n \n amount = DataField()\n\n @datafield(path=None)\n def time(self, data):\n s = data['utc-unix']\n tz = data['time-zone']\n if 'UTC' not in tz:\n raise ValueError(\"Unknow time-zone standard: '%s'\" % tz)\n else:\n dif = tz.replace('UTC', '')\n # trick for adding time-zone\n s = eval(str(s)+dif+'*60*60')\n\n dt = datetime.fromtimestamp(s)\n # return the parsed epoch time\n # for the given time-zone\n return dt.strftime('%Y-%m-%d %H:%M:%S')\n \n @datafield('source')\n def source(self, data):\n t = data['type']\n # we use the transaction type\n # to identify which kind of\n # `DataStruct` should be used\n if '-' in t:\n s = ''\n for sub in t.split('-'):\n s += sub.capitalize()\n else:\n s = t.capitalize()\n \n # we use `eval` to grab\n # the appropriate struct\n cls = eval(s)\n return cls(data)\n \n\nclass TransactionSource(DataStruct):\n \n ref = DataField()\n \nclass Purchase(TransactionSource):\n\n type = DataField()\n at = DataField('name')\n card = DataField(parser=lambda s: 'X'*len(s[:-4])+s[-4:])\n \nclass MobileDeposit(TransactionSource):\n\n type = DataField()\n note = DataField()\n check_number = DataField('check-number')\n\n\nclass DetailedAccountSummary(AccountSummaryFromFile):\n \n last_withdraw = DataField('account', 'withdrawn', '0', parser=Transaction)\n last_deposit = DataField('account', 'deposited', '0', parser=Transaction)", "The Parsed Account Summary", "print(DetailedAccountSummary('data_files/bank_data.json'))", "The Raw JSON Data:\n```\n{\n \"user\": \"John F. Doe\",\n \"billing-address\": \"123 Any Street Apt. 
45 / Smallville, KS 1235\",\n \"account\": {\n \"account-type\": \"checking\",\n \"routing-number\": \"056004241\",\n \"account-number\": \"123456789\",\n \"account-ballance\": 1234.56,\n \"deposited\": {\n \"0\": {\n \"amount\": 1057.21,\n \"utc-unix\": 1457476491,\n \"time-zone\": \"UTC-8\",\n \"source\": {\n \"type\": \"mobile-deposit\",\n \"ref\": \" #IB3GFRZG31\",\n \"routing-number\": \"944145221\",\n \"account-number\": \"123123123\",\n \"check-number\": \"1229361\",\n \"note\": \"bi-weekly paycheck\"\n }\n },\n \"1\": {\n \"amount\": \"500.00\",\n \"utc-unix\": 1459376666,\n \"time-zone\": \"UTC-8\",\n \"source\": {\n \"type\": \"online-transfer\",\n \"ref\": \"#IBS5RWGWMM\",\n \"routing-number\": \"044072324\",\n \"account-number\": \"987654321\",\n \"note\": \"monthly refill\"\n }\n }\n },\n \"withdrawn\": {\n \"0\": {\n \"amount\": 23.03,\n \"utc-unix\": 1457476491,\n \"time-zone\": \"UTC-8\",\n \"source\": {\n \"card\": \"0123456789101112\",\n \"type\": \"purchase\",\n \"ref\": \"S567013305806010\",\n \"name\": \"Average-Restaurant\"\n }\n },\n \"1\": {\n \"amount\": 5.37,\n \"utc-unix\": 1457447400,\n \"time-zone\": \"UTC-8\",\n \"card\": \"0123456789101112\",\n \"source\": {\n \"type\": \"purchase\",\n \"ref\": \"S466013457060112\",\n \"name\": \"That-Super-Market\"\n }\n }\n }\n}\n\n}\n```" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
woobe/odsc_h2o_machine_learning
py_03c_regression_ensembles.ipynb
apache-2.0
[ "Machine Learning with H2O - Tutorial 3c: Regression Models (Ensembles)\n<hr>\n\nObjective:\n\nThis tutorial explains how to create stacked ensembles of regression models for better out-of-bag performance.\n\n<hr>\n\nWine Quality Dataset:\n\nSource: https://archive.ics.uci.edu/ml/datasets/Wine+Quality\nCSV (https://archive.ics.uci.edu/ml/machine-learning-databases/wine-quality/winequality-white.csv)\n\n<hr>\n\nSteps:\n\nBuild GBM models using random grid search and extract the best one.\nBuild DRF models using random grid search and extract the best one. \nBuild DNN models using random grid search and extract the best one.\nUse model stacking to combining different models.\n\n<hr>\n\nFull Technical Reference:\n\nhttp://docs.h2o.ai/h2o/latest-stable/h2o-py/docs/modeling.html\nhttp://docs.h2o.ai/h2o/latest-stable/h2o-docs/data-science/stacked-ensembles.html\n\n<br>", "# Import all required modules\nimport h2o\nfrom h2o.estimators.gbm import H2OGradientBoostingEstimator\nfrom h2o.estimators.random_forest import H2ORandomForestEstimator\nfrom h2o.estimators.deeplearning import H2ODeepLearningEstimator\nfrom h2o.estimators.stackedensemble import H2OStackedEnsembleEstimator\nfrom h2o.grid.grid_search import H2OGridSearch\n\n# Start and connect to a local H2O cluster\nh2o.init(nthreads = -1)", "<br>", "# Import wine quality data from a local CSV file\nwine = h2o.import_file(\"winequality-white.csv\")\nwine.head(5)\n\n# Define features (or predictors)\nfeatures = list(wine.columns) # we want to use all the information\nfeatures.remove('quality') # we need to exclude the target 'quality' (otherwise there is nothing to predict)\nfeatures\n\n# Split the H2O data frame into training/test sets\n# so we can evaluate out-of-bag performance\nwine_split = wine.split_frame(ratios = [0.8], seed = 1234)\n\nwine_train = wine_split[0] # using 80% for training\nwine_test = wine_split[1] # using the rest 20% for out-of-bag evaluation\n\nwine_train.shape\n\nwine_test.shape", "<br>\nDefine Search Criteria for Random Grid Search", "# define the criteria for random grid search\nsearch_criteria = {'strategy': \"RandomDiscrete\", \n 'max_models': 9,\n 'seed': 1234}", "<br>\nStep 1: Build GBM Models using Random Grid Search and Extract the Best Model", "# define the range of hyper-parameters for GBM grid search\n# 27 combinations in total\nhyper_params = {'sample_rate': [0.7, 0.8, 0.9],\n 'col_sample_rate': [0.7, 0.8, 0.9],\n 'max_depth': [3, 5, 7]}\n\n# Set up GBM grid search\n# Add a seed for reproducibility\ngbm_rand_grid = H2OGridSearch(\n H2OGradientBoostingEstimator(\n model_id = 'gbm_rand_grid', \n seed = 1234,\n ntrees = 10000, \n nfolds = 5,\n fold_assignment = \"Modulo\", # needed for stacked ensembles\n keep_cross_validation_predictions = True, # needed for stacked ensembles\n stopping_metric = 'mse', \n stopping_rounds = 15, \n score_tree_interval = 1),\n search_criteria = search_criteria, \n hyper_params = hyper_params)\n\n# Use .train() to start the grid search\ngbm_rand_grid.train(x = features, \n y = 'quality', \n training_frame = wine_train)\n\n# Sort and show the grid search results\ngbm_rand_grid_sorted = gbm_rand_grid.get_grid(sort_by='mse', decreasing=False)\nprint(gbm_rand_grid_sorted)\n\n# Extract the best model from random grid search\nbest_gbm_model_id = gbm_rand_grid_sorted.model_ids[0]\nbest_gbm_from_rand_grid = h2o.get_model(best_gbm_model_id)\nbest_gbm_from_rand_grid.summary()", "<br>\nStep 2: Build DRF Models using Random Grid Search and Extract the Best Model", "# define the range of 
hyper-parameters for DRF grid search\n# 27 combinations in total\nhyper_params = {'sample_rate': [0.5, 0.6, 0.7],\n 'col_sample_rate_per_tree': [0.7, 0.8, 0.9],\n 'max_depth': [3, 5, 7]}\n\n# Set up DRF grid search\n# Add a seed for reproducibility\ndrf_rand_grid = H2OGridSearch(\n H2ORandomForestEstimator(\n model_id = 'drf_rand_grid', \n seed = 1234,\n ntrees = 200, \n nfolds = 5,\n fold_assignment = \"Modulo\", # needed for stacked ensembles\n keep_cross_validation_predictions = True), # needed for stacked ensembles\n search_criteria = search_criteria, \n hyper_params = hyper_params)\n\n# Use .train() to start the grid search\ndrf_rand_grid.train(x = features, \n y = 'quality', \n training_frame = wine_train)\n\n# Sort and show the grid search results\ndrf_rand_grid_sorted = drf_rand_grid.get_grid(sort_by='mse', decreasing=False)\nprint(drf_rand_grid_sorted)\n\n# Extract the best model from random grid search\nbest_drf_model_id = drf_rand_grid_sorted.model_ids[0]\nbest_drf_from_rand_grid = h2o.get_model(best_drf_model_id)\nbest_drf_from_rand_grid.summary()", "<br>\nStep 3: Build DNN Models using Random Grid Search and Extract the Best Model", "# define the range of hyper-parameters for DNN grid search\n# 81 combinations in total\nhyper_params = {'activation': ['tanh', 'rectifier', 'maxout'],\n 'hidden': [[50], [50,50], [50,50,50]],\n 'l1': [0, 1e-3, 1e-5],\n 'l2': [0, 1e-3, 1e-5]}\n\n# Set up DNN grid search\n# Add a seed for reproducibility\ndnn_rand_grid = H2OGridSearch(\n H2ODeepLearningEstimator(\n model_id = 'dnn_rand_grid', \n seed = 1234,\n epochs = 20, \n nfolds = 5,\n fold_assignment = \"Modulo\", # needed for stacked ensembles\n keep_cross_validation_predictions = True), # needed for stacked ensembles\n search_criteria = search_criteria, \n hyper_params = hyper_params)\n\n# Use .train() to start the grid search\ndnn_rand_grid.train(x = features, \n y = 'quality', \n training_frame = wine_train)\n\n# Sort and show the grid search results\ndnn_rand_grid_sorted = dnn_rand_grid.get_grid(sort_by='mse', decreasing=False)\nprint(dnn_rand_grid_sorted)\n\n# Extract the best model from random grid search\nbest_dnn_model_id = dnn_rand_grid_sorted.model_ids[0]\nbest_dnn_from_rand_grid = h2o.get_model(best_dnn_model_id)\nbest_dnn_from_rand_grid.summary()", "<br>\nModel Stacking", "# Define a list of models to be stacked\n# i.e. best model from each grid\nall_ids = [best_gbm_model_id, best_drf_model_id, best_dnn_model_id]\n\n# Set up Stacked Ensemble\nensemble = H2OStackedEnsembleEstimator(model_id = \"my_ensemble\",\n base_models = all_ids)\n\n# use .train to start model stacking\n# GLM as the default metalearner\nensemble.train(x = features, \n y = 'quality', \n training_frame = wine_train)", "<br>\nComparison of Model Performance on Test Data", "print('Best GBM model from Grid (MSE) : ', best_gbm_from_rand_grid.model_performance(wine_test).mse())\nprint('Best DRF model from Grid (MSE) : ', best_drf_from_rand_grid.model_performance(wine_test).mse())\nprint('Best DNN model from Grid (MSE) : ', best_dnn_from_rand_grid.model_performance(wine_test).mse())\nprint('Stacked Ensembles (MSE) : ', ensemble.model_performance(wine_test).mse())", "<br>\n<br>" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
edeno/Jadhav-2016-Data-Analysis
notebooks/2017_06_14_Test_Spectral_Single_Session.ipynb
gpl-3.0
[ "%matplotlib inline\n%reload_ext autoreload\n%autoreload 2\n%qtconsole", "Purpose\nThe purpose of this notebook is to work out the data structure for saving the computed results for a single session. Here we are using the xarray package to structure the data, because:\n\nIt is built to handle large multi-dimensional data (orginally for earth sciences data).\nIt allows you to call dimensions by name (time, frequency, etc).\nThe plotting functions are convenient for multi-dimensional data (it has convenient heatmap plotting).\nIt can output to HDF5 (via the netcdf format, a geosciences data format), which is built for handling large data in a descriptive (i.e. can label units, add information about how data was constructed, etc.).\nLazily loads data so large datasets that are too big for memory can be handled (via dask).\n\nPreviously, I was using the pandas package in python and this wasn't handling the loading and combining of time-frequency data. In particular, the size of the data was problematic even on the cluster and this was frustrating to debug. pandas now recommends the usage of xarray for multi-dimesional data.", "import numpy as np\nimport matplotlib.pyplot as plt\nimport seaborn as sns\nimport pandas as pd\nimport xarray as xr\n\nfrom src.data_processing import (get_LFP_dataframe, make_tetrode_dataframe,\n make_tetrode_pair_info, reshape_to_segments)\nfrom src.parameters import (ANIMALS, SAMPLING_FREQUENCY,\n MULTITAPER_PARAMETERS, FREQUENCY_BANDS,\n RIPPLE_COVARIATES, ALPHA)\nfrom src.analysis import (decode_ripple_clusterless,\n detect_epoch_ripples, is_overlap,\n _subtract_event_related_potential)", "Go through the steps to get the ripple triggered connectivity", "epoch_key = ('HPa', 6, 2)\n\nripple_times = detect_epoch_ripples(\n epoch_key, ANIMALS, sampling_frequency=SAMPLING_FREQUENCY)\n\ntetrode_info = make_tetrode_dataframe(ANIMALS)[epoch_key]\ntetrode_info = tetrode_info[\n ~tetrode_info.descrip.str.endswith('Ref').fillna(False)]\ntetrode_pair_info = make_tetrode_pair_info(tetrode_info)\nlfps = {tetrode_key: get_LFP_dataframe(tetrode_key, ANIMALS)\n for tetrode_key in tetrode_info.index}\n\nfrom copy import deepcopy\nfrom functools import partial, wraps\n\nmultitaper_parameter_name = '4Hz_Resolution'\nmultitaper_params = MULTITAPER_PARAMETERS[multitaper_parameter_name]\nnum_lfps = len(lfps)\nnum_pairs = int(num_lfps * (num_lfps - 1) / 2)\nparams = deepcopy(multitaper_params)\nwindow_of_interest = params.pop('window_of_interest')\nreshape_to_trials = partial(\n reshape_to_segments,\n sampling_frequency=params['sampling_frequency'],\n window_offset=window_of_interest, concat_axis=1)\n\nripple_locked_lfps = pd.Panel({\n lfp_name: _subtract_event_related_potential(\n reshape_to_trials(lfps[lfp_name], ripple_times))\n for lfp_name in lfps})\n\nfrom src.spectral.connectivity import Connectivity\nfrom src.spectral.transforms import Multitaper\n\nm = Multitaper(\n np.rollaxis(ripple_locked_lfps.values, 0, 3),\n **params,\n start_time=ripple_locked_lfps.major_axis.min())\nc = Connectivity(\n fourier_coefficients=m.fft(),\n frequencies=m.frequencies,\n time=m.time)", "Make an xarray dataset for coherence and pairwise spectral granger", "n_lfps = len(lfps)\nds = xr.Dataset(\n {'coherence_magnitude': (['time', 'frequency', 'tetrode1', 'tetrode2'], c.coherence_magnitude()),\n 'pairwise_spectral_granger_prediction': (['time', 'frequency', 'tetrode1', 'tetrode2'], c.pairwise_spectral_granger_prediction())},\n coords={'time': c.time + np.diff(c.time)[0] / 2, \n 'frequency': 
c.frequencies + np.diff(c.frequencies)[0] / 2,\n 'tetrode1': tetrode_info.tetrode_id.values,\n 'tetrode2': tetrode_info.tetrode_id.values,\n 'brain_area1': ('tetrode1', tetrode_info.area.tolist()),\n 'brain_area2': ('tetrode2', tetrode_info.area.tolist()),\n 'session': np.array(['{0}_{1:02d}_{2:02d}'.format(*epoch_key)]),\n }\n)\n\nds", "Show that it is easy to select two individual tetrodes and plot a subset of their frequency for coherence.", "ds.sel(\n tetrode1='HPa621',\n tetrode2='HPa624',\n frequency=slice(0, 30)).coherence_magnitude.plot(x='time', y='frequency');", "Show the same thing for spectral granger.", "ds.sel(\n tetrode1='HPa621',\n tetrode2='HPa6220',\n frequency=slice(0, 30)\n).pairwise_spectral_granger_prediction.plot(x='time', y='frequency');", "Now show that we can plot all tetrodes pairs in a dataset", "ds['pairwise_spectral_granger_prediction'].sel(\n frequency=slice(0, 30)).plot(x='time', y='frequency', col='tetrode1', row='tetrode2', robust=True);\n\nds['coherence_magnitude'].sel(\n frequency=slice(0, 30)).plot(x='time', y='frequency', col='tetrode1', row='tetrode2');", "It is also easy to select a subset of tetrode pairs (in this case all CA1-PFC tetrode pairs).", "(ds.sel(\n tetrode1=ds.tetrode1[ds.brain_area1=='CA1'],\n tetrode2=ds.tetrode2[ds.brain_area2=='PFC'],\n frequency=slice(0, 30))\n .coherence_magnitude\n .plot(x='time', y='frequency', col='tetrode1', row='tetrode2'));", "xarray also makes it easy to compare the difference of a connectivity measure from its baseline (in this case, the baseline is the first time bin)", "((ds - ds.isel(time=0)).sel(\n tetrode1=ds.tetrode1[ds.brain_area1=='CA1'],\n tetrode2=ds.tetrode2[ds.brain_area2=='PFC'],\n frequency=slice(0, 30))\n .coherence_magnitude\n .plot(x='time', y='frequency', col='tetrode1', row='tetrode2'));", "It is also easy to average over the tetrode pairs", "(ds.sel(\n tetrode1=ds.tetrode1[ds.brain_area1=='CA1'],\n tetrode2=ds.tetrode2[ds.brain_area2=='PFC'],\n frequency=slice(0, 30))\n .coherence_magnitude.mean(['tetrode1', 'tetrode2'])\n .plot(x='time', y='frequency'));", "And also average over the difference", "((ds - ds.isel(time=0)).sel(\n tetrode1=ds.tetrode1[ds.brain_area1=='CA1'],\n tetrode2=ds.tetrode2[ds.brain_area2=='PFC'],\n frequency=slice(0, 30))\n .coherence_magnitude.mean(['tetrode1', 'tetrode2'])\n .plot(x='time', y='frequency'));", "Test saving as netcdf file", "import os\n\npath = '{0}_{1:02d}_{2:02d}.nc'.format(*epoch_key)\ngroup = '{0}/'.format(multitaper_parameter_name)\nwrite_mode = 'a' if os.path.isfile(path) else 'w'\nds.to_netcdf(path=path, group=group, mode=write_mode)", "Show that we can open the saved dataset and recover the data", "with xr.open_dataset(path, group=group) as da:\n da.load()\n print(da)", "Make data structure for group delay", "n_bands = len(FREQUENCY_BANDS)\ndelay, slope, r_value = (np.zeros((c.time.size, n_bands, m.n_signals, m.n_signals)),) * 3\n\nfor band_ind, frequency_band in enumerate(FREQUENCY_BANDS):\n (delay[:, band_ind, ...],\n slope[:, band_ind, ...],\n r_value[:, band_ind, ...]) = c.group_delay(\n FREQUENCY_BANDS[frequency_band], frequency_resolution=m.frequency_resolution)\n \ncoordinate_names = ['time', 'frequency_band', 'tetrode1', 'tetrode2']\nds = xr.Dataset(\n {'delay': (coordinate_names, delay),\n 'slope': (coordinate_names, slope),\n 'r_value': (coordinate_names, r_value)},\n coords={'time': c.time + np.diff(c.time)[0] / 2, \n 'frequency_band': list(FREQUENCY_BANDS.keys()),\n 'tetrode1': tetrode_info.tetrode_id.values,\n 'tetrode2': 
tetrode_info.tetrode_id.values,\n 'brain_area1': ('tetrode1', tetrode_info.area.tolist()),\n 'brain_area2': ('tetrode2', tetrode_info.area.tolist()),\n 'session': np.array(['{0}_{1:02d}_{2:02d}'.format(*epoch_key)]),\n }\n)\n\nds['delay'].sel(frequency_band='beta', tetrode1='HPa621', tetrode2='HPa622').plot();", "Make data structure for canonical coherence", "canonical_coherence, area_labels = c.canonical_coherence(tetrode_info.area.tolist())\ndimension_names = ['time', 'frequency', 'brain_area1', 'brain_area2']\ndata_vars = {'canonical_coherence': (dimension_names, canonical_coherence)}\ncoordinates = {\n 'time': c.time + np.diff(c.time)[0] / 2,\n 'frequency': c.frequencies + np.diff(c.frequencies)[0] / 2,\n 'brain_area1': area_labels,\n 'brain_area2': area_labels,\n 'session': np.array(['{0}_{1:02d}_{2:02d}'.format(*epoch_key)]),\n}\nds = xr.Dataset(data_vars, coords=coordinates)\n\nds.sel(brain_area1='CA1', brain_area2='PFC', frequency=slice(0, 30)).canonical_coherence.plot(x='time', y='frequency')", "Now after adding this code into the code base, test if we can compute, save, and load", "from src.analysis import ripple_triggered_connectivity\n\nfor parameters_name, parameters in MULTITAPER_PARAMETERS.items():\n ripple_triggered_connectivity(\n lfps, epoch_key, tetrode_info, ripple_times, parameters,\n FREQUENCY_BANDS,\n multitaper_parameter_name=parameters_name,\n group_name='all_ripples')\n\nwith xr.open_dataset(path, group='2Hz_Resolution/all_ripples/canonical_coherence') as da:\n da.load()\n print(da)\n da.sel(brain_area1='CA1', brain_area2='PFC', frequency=slice(0, 30)).canonical_coherence.plot(x='time', y='frequency')\n\nwith xr.open_dataset(path, group='10Hz_Resolution/all_ripples/canonical_coherence') as da:\n da.load()\n print(da)\n da.sel(brain_area1='CA1', brain_area2='PFC', frequency=slice(0, 30)).canonical_coherence.plot(x='time', y='frequency')" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
robertoalotufo/ia898
2S2018/13 Correlacao de fase.ipynb
mit
[ "Correlação de Fase\nA correlação de fase diz que, se calcularmos a Transformada Discreta de Fourier de duas imagens $f$ and $h$:\n$$ F = \\mathcal{F}(f); $$$$ H = \\mathcal{F}(h) $$\nE em seguida calcularmos a correlação $R$ das transformadas: \n$$ R = \\dfrac{F H^}{|F H^|} $$\nDepois, aplicarmos a transformada inversa a $R$\n$$ g = \\mathcal{F}^{-1}(R) $$\nA translação entre as duas imagens pode ser encontrada fazendo-se:\n$$ (row, col) = arg max{g} $$\nIdentificando a translação entre 2 imagens\n\nCalcular a Transformada de Fourier das 2 imagens que se quer comparar;\nCalcular a correlação de fase usando a função phasecorr\nEncontrar o ponto de máximo do mapa de correlação resultante", "%matplotlib inline\nimport matplotlib.pyplot as plt\nimport matplotlib.image as mpimg\nimport numpy as np\nfrom numpy.fft import *\nimport sys,os\nia898path = os.path.abspath('../../')\nif ia898path not in sys.path:\n sys.path.append(ia898path)\nimport ia898.src as ia\n\n\nf = mpimg.imread('../data/cameraman.tif')\n\n# Transladando a imagem para (x,y)\nx = 20\ny = 30\n\n#f_trans = ia.ptrans(f,(20,30))\nf_trans = np.zeros(f.shape)\nf_trans[x:,y:] = f[:-x,:-y]\n\nplt.figure(1,(10,10))\nplt.subplot(1,2,1)\nplt.imshow(f, cmap='gray')\nplt.title('Imagem original')\n\nplt.subplot(1,2,2)\nplt.imshow(f_trans, cmap='gray')\nplt.title('Imagem transladada')\n\n# Calculando a correlação de fase\ng = ia.phasecorr(f,f_trans)\n\n# Encontrando o ponto de máxima correlação \ni = np.argmax(g)\nrow,col = np.unravel_index(i,g.shape)\nv = np.array(f.shape) - np.array((row,col))\nprint('Ponto de máxima correlação: ',v)\n\nplt.figure(2,(6,6))\nf[v[0]-1:v[0]+1,v[1]-1:v[1]+1] = 0\nplt.imshow(f, cmap='gray')\nplt.title('Ponto de máxima correlação marcado (em preto)')", "Identificando a rotação entre 2 imagens\n\nCalcular a Transformada de Fourier das 2 imagens que se quer comparar;\nConverter as imagens obtidas para coordenadas polares \nCalcular a correlação de fase usando a função phasecorr\nEncontrar o ponto de máximo do mapa de correlação resultante", "f = mpimg.imread('../data/cameraman.tif')\n\n# Inserindo uma borda de zeros para permitir a rotação da imagem\nt = np.zeros(np.array(f.shape)+100,dtype=np.uint8)\nt[50:f.shape[0]+50,50:f.shape[1]+50] = f\nf = t\n \nt1 = np.array([\n [1,0,-f.shape[0]/2.],\n [0,1,-f.shape[1]/2.],\n [0,0,1]]);\n\nt2 = np.array([\n [1,0,f.shape[0]/2.],\n [0,1,f.shape[1]/2.],\n [0,0,1]]);\n\n# Rotacionando a imagem 30 graus\ntheta = np.radians(30)\nr1 = np.array([\n [np.cos(theta),-np.sin(theta),0],\n [np.sin(theta),np.cos(theta),0],\n [0,0,1]]);\n \nT = t2.dot(r1).dot(t1)\nf_rot = ia.affine(f,T,0)\n\nplt.figure(1,(10,10))\nplt.subplot(1,2,1)\nplt.imshow(f, cmap='gray')\nplt.title('Imagem original')\n\nplt.subplot(1,2,2)\nplt.imshow(f_rot, cmap='gray')\nplt.title('Imagem rotacionada')\n\nW,H = f.shape\nf_polar = ia.polar(f,(150,200),2*np.pi)\nf_rot_polar = ia.polar(f_rot,(150,200),2*np.pi)\n\nplt.figure(1,(10,10))\nplt.subplot(1,2,1)\nplt.imshow(f_polar, cmap='gray')\nplt.title('Imagem original (coord. polar)')\n\nplt.subplot(1,2,2)\nplt.imshow(f_rot_polar, cmap='gray')\nplt.title('Imagem rotacionada (coord. polar)')\n\n# Calculando a correlação de fase\ng = ia.phasecorr(f_polar,f_rot_polar)\n\n# Encontrando o ponto de máxima correlação \ni = np.argmax(g)\ncorr = np.unravel_index(i,g.shape)\n\n# Calculate the angle\nang = (float(corr[1])/g.shape[1])*360\nprint('Ponto de máxima correlação: ',ang)", "Links\n\nFunção de conversão para coordenadas polares\nFunção da correlação de fase" ]
[ "markdown", "code", "markdown", "code", "markdown" ]