repo_name: stringlengths 6-77
path: stringlengths 8-215
license: stringclasses, 15 values
content: stringlengths 335-154k
mit-eicu/eicu-code
notebooks/patient.ipynb
mit
# Import libraries import numpy as np import pandas as pd import matplotlib.pyplot as plt import psycopg2 import getpass import pdvega # for configuring connection from configobj import ConfigObj import os %matplotlib inline # Create a database connection using settings from config file config='../db/config.ini' # connection info conn_info = dict() if os.path.isfile(config): config = ConfigObj(config) conn_info["sqluser"] = config['username'] conn_info["sqlpass"] = config['password'] conn_info["sqlhost"] = config['host'] conn_info["sqlport"] = config['port'] conn_info["dbname"] = config['dbname'] conn_info["schema_name"] = config['schema_name'] else: conn_info["sqluser"] = 'postgres' conn_info["sqlpass"] = '' conn_info["sqlhost"] = 'localhost' conn_info["sqlport"] = 5432 conn_info["dbname"] = 'eicu' conn_info["schema_name"] = 'public,eicu_crd' # Connect to the eICU database print('Database: {}'.format(conn_info['dbname'])) print('Username: {}'.format(conn_info["sqluser"])) if conn_info["sqlpass"] == '': # try connecting without password, i.e. peer or OS authentication try: if (conn_info["sqlhost"] == 'localhost') & (conn_info["sqlport"]=='5432'): con = psycopg2.connect(dbname=conn_info["dbname"], user=conn_info["sqluser"]) else: con = psycopg2.connect(dbname=conn_info["dbname"], host=conn_info["sqlhost"], port=conn_info["sqlport"], user=conn_info["sqluser"]) except: conn_info["sqlpass"] = getpass.getpass('Password: ') con = psycopg2.connect(dbname=conn_info["dbname"], host=conn_info["sqlhost"], port=conn_info["sqlport"], user=conn_info["sqluser"], password=conn_info["sqlpass"]) query_schema = 'set search_path to ' + conn_info['schema_name'] + ';' """ Explanation: patient The patinet table is a core part of the eICU-CRD and contains all information related to tracking patient unit stays. The table also contains patient demographics and hospital level information. End of explanation """ uniquepid = '002-33870' query = query_schema + """ select * from patient where uniquepid = '{}' """.format(uniquepid) df = pd.read_sql_query(query, con) df.head() """ Explanation: uniquePid The uniquePid column identifies a single patient across multiple stays. Let's look at a single uniquepid. End of explanation """ df[['patientunitstayid', 'wardid', 'unittype', 'unitstaytype', 'hospitaladmitoffset', 'unitdischargeoffset']] """ Explanation: Here we see two unit stays for a single patient. Note also that both unit stays have the same patienthealthsystemstayid - this indicates that they occurred within the same hospitalization. We can see the unitstaytype was 'admit' for one stay, and 'stepdown/other' for another. Other columns can give us more information. End of explanation """ query = query_schema + """ select age, count(*) as n from patient group by age order by n desc """ df = pd.read_sql_query(query, con) df.head() """ Explanation: Note that it's not explicitly obvious which stay occurred first. Earlier stays will be closer to hospital admission, and therefore have a higher hospitaladmitoffset. Above, the stay with a hospitaladmitoffset of -14 was first (occurring 14 minutes after hospital admission), followed by the next stay with a hospitaladmitoffset of 22 (which occurred 22 minutes after hospital admission). Practically, we wouldn't consider the first admission a "real" ICU stay, and it's likely an idiosyncrasy of the administration system at this particular hospital. Notice how both rows have the same wardid. 
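A quick way to check that ordering programmatically is to sort this patient's rows by hospitaladmitoffset. This is a minimal sketch reusing the df returned by the query above; the stay_order name is only illustrative:
# Order the unit stays so the one closest to hospital admission (largest
# hospitaladmitoffset) comes first.
stay_order = (df[['patientunitstayid', 'unitstaytype',
                  'hospitaladmitoffset', 'unitdischargeoffset']]
              .sort_values('hospitaladmitoffset', ascending=False)
              .reset_index(drop=True))
stay_order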
Age HIPAA requires ages over 89 to be de-identified, so the age column is stored as a string, with ages over 89 replaced by the value '> 89'. End of explanation """
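Since the ages arrive as text, a small cleanup step is usually needed before doing anything numeric with them. This is a minimal sketch reusing the query_schema and con objects defined above; the df_age and age_num names and the choice to map '> 89' to 90 are illustrative assumptions, not conventions defined by the database:
# Convert the string-valued age column to numbers; '> 89' becomes 90 purely as an
# illustrative convention, and blank ages coerce to NaN.
df_age = pd.read_sql_query(query_schema + 'select age from patient', con)
df_age['age_num'] = pd.to_numeric(df_age['age'].replace('> 89', '90'), errors='coerce')
df_age['age_num'].describe()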
Jim00000/Numerical-Analysis
2_Systems_Of_Equations.ipynb
unlicense
# Import modules import sys import numpy as np import numpy.linalg import scipy import sympy import sympy.abc from scipy import linalg from scipy.sparse import linalg as slinalg """ Explanation: CHAPTER 2 - Systems Of Equations End of explanation """ def naive_gaussian_elimination(matrix): """ A simple gaussian elimination to solve equations Args: matrix : numpy 2d array Returns: mat : The matrix processed by gaussian elimination x : The roots of the equation Raises: ValueError: - matrix is null RuntimeError : - Zero pivot encountered """ if matrix is None : raise ValueError('args matrix is null') #Clone the matrix mat = matrix.copy().astype(np.float64) # Row Size m = mat.shape[0] # Column Size n = mat.shape[1] # Gaussian Elimaination for i in range(0, m): if np.abs(mat[i , i]) == 0 : raise RuntimeError('zero pivot encountered') for j in range(i + 1, m): mult = mat[j, i] / mat[i, i] for k in range(i, m): mat[j, k] -= mult * mat[i, k] mat[j, n - 1] -= mult * mat[i, n - 1] # Back Substitution x = np.zeros(m, dtype=np.float64) for i in range(m - 1,-1,-1): for j in range(i + 1, m): mat[i, n-1] = mat[i ,n-1] - mat[i,j] * x[j] mat[i, j] = 0.0 x[i] = mat[i, n-1] / mat[i, i] return mat, x """ Explanation: 2.1 Gaussian Elimination End of explanation """ """ Input: [[ 1 2 -1 3] [ 2 1 -2 3] [-3 1 1 -6]] """ input_mat = np.array([1, 2, -1, 3, 2, 1, -2, 3, -3, 1, 1, -6]) input_mat = input_mat.reshape(3, 4) output_mat, x = naive_gaussian_elimination(input_mat) print(output_mat) print('[x, y, z] = {}'.format(x)) """ Explanation: Example Apply Gaussian elimination in tableau form for the system of three equations in three unknowns: $$ \large \begin{matrix} x + 2y - z = 3 & \ 2x + y - 2z = 3 & \ -3x + y + z = -6 \end{matrix} $$ End of explanation """ input_mat = np.array([ [ 1, 2, -1, 3], [-3, 1, 1, -6], [ 2, 0, 1, 8] ]) output_mat, x = naive_gaussian_elimination(input_mat) print(output_mat) print('[x, y, z] = {}'.format(x)) """ Explanation: Additional Examples Put the system $x + 2y - z = 3,-3x + y + z = -6,2x + z = 8$ into tableau form and solve by Gaussian elimination. End of explanation """ def computer_problems2__2_1(n): # generate Hilbert matrix H H = scipy.linalg.hilbert(n) # generate b b = np.ones(n).reshape(n, 1) # combine H:b in tableau form mat = np.hstack((H, b)) # gaussian elimination _, x = naive_gaussian_elimination(mat) return x with np.printoptions(precision = 6, suppress = True): print('(a) n = 2 → x = {}'.format(computer_problems2__2_1( 2))) print('(b) n = 5 → x = {}'.format(computer_problems2__2_1( 5))) print('(c) n = 10 → x = {}'.format(computer_problems2__2_1(10))) """ Explanation: 2.1 Computer Problems Put together the code fragments in this section to create a MATLAB program for “naive” Gaussian elimination (meaning no row exchanges allowed). Use it to solve the systems of Exercise 2. See my implementation naive_gaussian_elimination in python. Let $H$ denote the $n \times n$ Hilbert matrix, whose $(i, j)$ entry is $1 / (i + j - 1)$. Use the MATLAB program from Computer Problem 1 to solve $Hx = b$, where $b$ is the vector of all ones, for (a) n = 2 (b) n = 5 (c) n = 10. 
End of explanation """ def LU_factorization(matrix): """ LU decomposition Arguments: matrix : numpy 2d array Return: L : lower triangular matrix U : upper triangular matrix Raises: ValueError: - matrix is null - matrix is not a 2d array RuntimeError : - zero pivot encountered """ if matrix is None : raise ValueError('args matrix is null') if matrix.ndim != 2 : raise ValueError('matrix is not a 2d-array') # dimension dim = matrix.shape[0] # Prepare LU matrixs L = np.identity(dim).astype(np.float64) U = matrix.copy().astype(np.float64) # Gaussian Elimaination for i in range(0, dim - 1): # Check pivot is not zero if np.abs(U[i , i]) == 0 : raise RuntimeError('zero pivot encountered') for j in range(i + 1, dim): mult = U[j, i] / U[i, i] for k in range(i, dim): U[j, k] -= mult * U[i, k] L[j, i] = mult return L, U """ Explanation: 2.2 The LU Factorization End of explanation """ A = np.array([ [1, 1], [3, -4] ]) L, U = LU_factorization(A) print('L = ') print(L) print() print('U = ') print(U) """ Explanation: DEFINITION 2.2 An $m \times n$ matrix $L$ is lower triangular if its entries satisfy $l_{ij} = 0$ for $i < j$. An $m \times n$ matrix $U$ is upper triangular if its entries satisfy $u_{ij} = 0$ for $i > j$. Example Find the LU factorization for the matrix $A$ in $$ \large \begin{bmatrix} 1 & 1 \ 3 & -4 \ \end{bmatrix} $$ End of explanation """ A = np.array([ [ 1, 2, -1], [ 2, 1, -2], [-3, 1, 1] ]) L, U = LU_factorization(A) print('L = ') print(L) print() print('U = ') print(U) """ Explanation: Example Find the LU factorization of A = $$ \large \begin{bmatrix} 1 & 2 & -1 \ 2 & 1 & -2 \ -3 & 1 & 1 \ \end{bmatrix} $$ End of explanation """ A = np.array([ [ 1, 1], [ 3, -4] ]) b = np.array([3, 2]).reshape(2, 1) L, U = LU_factorization(A) # calculate Lc = b where Ux = c mat = np.hstack((L, b)) c = naive_gaussian_elimination(mat)[1].reshape(2, 1) # calculate Ux = c mat = np.hstack((U, c)) x = naive_gaussian_elimination(mat)[1].reshape(2, 1) # output the result print('x1 = {}, x2 = {}'.format(x[0][0], x[1][0])) """ Explanation: Example Solve system $$ \large \begin{bmatrix} 1 & 1 \ 3 & -4 \ \end{bmatrix} \begin{bmatrix} x_1 \ x_2 \ \end{bmatrix} = \begin{bmatrix} 3 \ 2 \ \end{bmatrix} $$ , using the LU factorization End of explanation """ A = np.array([ [ 1, 2, -1], [ 2, 1, -2], [-3, 1, 1] ]) b = np.array([3, 3, -6]).reshape(3, 1) L, U = LU_factorization(A) # calculate Lc = b where Ux = c mat = np.hstack((L, b)) c = naive_gaussian_elimination(mat)[1].reshape(3, 1) # calculate Ux = c mat = np.hstack((U, c)) x = naive_gaussian_elimination(mat)[1].reshape(3, 1) # output the result print('x1 = {}, x2 = {}, x3 = {}'.format(x[0][0], x[1][0], x[2][0])) """ Explanation: Example Solve system \begin{matrix} x + 2y - z = 3 & \ 2x + y - 2z = 3 & \ -3x + y + z = -6 \end{matrix} using the LU factorization End of explanation """ A = np.array([ [ 2, 4, -2], [ 1, -2, 1], [ 4, -4, 8] ]) b = np.array([6, 3, 0]).reshape(3, 1) L, U = LU_factorization(A) # calculate Lc = b where Ux = c mat = np.hstack((L, b)) c = naive_gaussian_elimination(mat)[1].reshape(3, 1) # calculate Ux = c mat = np.hstack((U, c)) x = naive_gaussian_elimination(mat)[1].reshape(3, 1) # output the result print('x1 = {}, x2 = {}, x3 = {}'.format(x[0][0], x[1][0], x[2][0])) """ Explanation: Additional Examples Solve $$ \large \begin{bmatrix} 2 & 4 & -2 \ 1 & -2 & 1 \ 4 & -4 & 8 \ \end{bmatrix} \begin{bmatrix} x_1 \ x_2 \ x_3 \ \end{bmatrix} = \begin{bmatrix} 6 \ 3 \ 0 \ \end{bmatrix} $$ using the A = LU factorization End of explanation """ # Exercise 2 
- (a) A = np.array([ [ 3, 1, 2], [ 6, 3, 4], [ 3, 1, 5] ]) L, U = LU_factorization(A) print('L = ') print(L) print() print('U = ') print(U) # Exercise 2 - (b) A = np.array([ [ 4, 2, 0], [ 4, 4, 2], [ 2, 2, 3] ]) L, U = LU_factorization(A) print('L = ') print(L) print() print('U = ') print(U) # Exercise 2 - (c) A = np.array([ [ 1, -1, 1, 2], [ 0, 2, 1, 0], [ 1, 3, 4, 4], [ 0, 2, 1, -1] ]) L, U = LU_factorization(A) print('L = ') print(L) print() print('U = ') print(U) """ Explanation: 2.2 Computer Problems Use the code fragments for Gaussian elimination in the previous section to write a MATLAB script to take a matrix A as input and output L and U. No row exchanges are allowed - the program should be designed to shut down if it encounters a zero pivot. Check your program by factoring the matrices in Exercise 2. See my implementation LU_factorization in python. End of explanation """ def LU_factorization_with_back_substitution(A, b): """ LU decomposition with two-step back substitution where Ax = b Arguments: A : coefficient matrix b : constant vector Return: x : solution vector """ L, U = LU_factorization(A) # row size rowsz = b.size # calculate Lc = b where Ux = c matrix = np.hstack((L, b)) c = naive_gaussian_elimination(matrix)[1].reshape(rowsz, 1) # calculate Ux = c matrix = np.hstack((U, c)) x = naive_gaussian_elimination(matrix)[1].reshape(rowsz) return x # Exercise 4 - (a) A = np.array([ [ 3, 1, 2], [ 6, 3, 4], [ 3, 1, 5] ]) b = np.array([0, 1, 3]).reshape(3, 1) x = LU_factorization_with_back_substitution(A, b) print(x) # Exercise 4 - (b) A = np.array([ [ 4, 2, 0], [ 4, 4, 2], [ 2, 2, 3] ]) b = np.array([2, 4, 6]).reshape(3, 1) x = LU_factorization_with_back_substitution(A, b) print(x) """ Explanation: Add two-step back substitution to your script from Computer Problem 1, and use it to solve the systems in Exercise 4. End of explanation """ A = np.array([ [ 1, 1], [ 3, -4] ]) b = np.array([3, 2]) xa = np.array([1, 1]) # Get correct solution system = sympy.Matrix(((1, 1, 3), (3, -4, 2))) solver = sympy.solve_linear_system(system, sympy.abc.x, sympy.abc.y) # Packed as list x = np.array([solver[sympy.abc.x].evalf(), solver[sympy.abc.y].evalf()]) # Output print(x) # Get backward error (differences in the input) residual = b - np.matmul(A, xa) backward_error = np.max(np.abs(residual)) print('backward error is {:f}'.format(backward_error)) # Get fowrawd error (differences in the output) forward_error = np.max(np.abs(x - xa)) print('forward error is {:f}'.format(forward_error)) """ Explanation: 2.3 Sources Of Error DEFINITION 2.3 The infinity norm, or maximum norm, of the vector $x = (x_1, \cdots, x_n)$ is $||x||_{\infty} = \text{max}|x_i|, i = 1,\cdots,n$, that is, the maximum of the absolute values of the components of x. DEFINITION 2.4 Let $x_a$ be an approximate solution of the linear system $Ax = b$. The residual is the vector $r = b - Ax_a$. The backward error is the norm of the residual $||b - Ax_a||{\infty}$, and the forward error is $||x - x_a||{\infty}$. 
Example Find the backward and forward errors for the approximate solution $x_a = [1, 1]$ of the system $$ \large \begin{bmatrix} 1 & 1 \ 3 & -4 \ \end{bmatrix} \begin{bmatrix} x_1 \ x_2 \ \end{bmatrix} = \begin{bmatrix} 3 \ 2 \ \end{bmatrix} $$ End of explanation """ A = np.array([ [ 1, 1], [ 1.0001, 1], ]) b = np.array([2, 2.0001]) # approximated solution xa = np.array([-1, 3.0001]) # correct solution x = LU_factorization_with_back_substitution(A, b.reshape(2, 1)) # Get backward error residual = b - np.matmul(A, xa) backward_error = np.max(np.abs(residual)) print('backward error is {:f}'.format(backward_error)) # Get fowrawd error forward_error = np.max(np.abs(x - xa)) print('forward error is {:f}'.format(forward_error)) """ Explanation: Example Find the forward and backward errors for the approximate solution [-1, 3.0001] of the system $$ \large \begin{align} x_1 + x_2 &= 2 \ 1.0001 x_1 + x_2 &= 2.0001 \ \end{align} $$ End of explanation """ def matrix_norm(A): rowsum = np.sum(np.abs(A), axis = 1) return np.max(rowsum) """ Explanation: The relative backward error of system $Ax = b$ is defined to be $\large \frac{||r||{\infty}}{||b||{\infty}}$. The relative forward error is $\large \frac{||x - x_a||{\infty}}{||x||{\infty}}$. The error magnification factor for $Ax = b$ is the ratio of the two, or $\large \text{error magnification factor} = \frac{\text{relative forward error}}{\text{relative backward error}} = \frac{\frac{||x - x_a||{\infty}}{||x||{\infty}}}{\frac{||r||{\infty}}{||b||{\infty}}}$ DEFINITION 2.5 The condition number of a square matrix A, cond(A), is the maximum possible error magnification factor for solving Ax = b, over all right-hand sides b. The matrix norm of an n x n matrix A as $$ \large ||A||_{\infty} = \text{maximum absolute row sum} $$ End of explanation """ def condition_number(A): inv_A = np.linalg.inv(A) cond = matrix_norm(A) * matrix_norm(inv_A) return cond """ Explanation: THEOREM 2.6 The condition number of the n x n matrix A is $$ \large cond(A) = ||A|| \cdot ||A^{-1}|| $$ End of explanation """ A = np.array([ [ 811802, 810901], [ 810901, 810001], ]) print('determinant of A is {}'.format(scipy.linalg.det(A))) print('condition number : {:.4e}'.format(condition_number(A))) """ Explanation: Additional Examples Find the determinant and the condition number (in the infinity norm) of the matrix $$ \large \begin{bmatrix} 811802 & 810901 \ 810901 & 810001 \ \end{bmatrix} $$ End of explanation """ A = np.array([ [ 2, 4.01], [ 3, 6.00], ]) b = np.array([6.01, 9]) # approximated solution xa = np.array([21, -9]) # correct solution x = LU_factorization_with_back_substitution(A, b.reshape(2, 1)) # forward error forward_error = np.max(np.abs(x - xa)) # relative forward error relative_forward_error = forward_error / np.max(np.abs(x)) # backward error backward_error = np.max(np.abs(b - np.matmul(A, xa))) # relative backward error relative_backward_error = backward_error / np.max(np.abs(b)) # error magnification factor error_magnification_factor = relative_forward_error / relative_backward_error print('relative forward error : {}'.format(relative_forward_error)) print('relative backward error : {}'.format(relative_backward_error)) print('error magnification factor : {}'.format(error_magnification_factor)) """ Explanation: The solution of the system $$ \large \begin{bmatrix} 2 & 4.01 \ 3 & 6 \ \end{bmatrix} \begin{bmatrix} x_1 \ x_2 \ \end{bmatrix} = \begin{bmatrix} 6.01 \ 9 \ \end{bmatrix} $$ is $[1, 1]$ (a) Find the relative forward and backward errors and error magnification 
(in the infinity norm) for the approximate solution [21,-9]. End of explanation """ A = np.array([ [ 2, 4.01], [ 3, 6.00], ]) print('condition number : {}'.format(condition_number(A))) """ Explanation: (b) Find the condition number of the coefficient matrix. End of explanation """ def system_provider(n, data_generator): A = np.zeros([n, n]) x = np.ones(n) for i in range(n): for j in range(n): A[i, j] = data_generator(i + 1, j + 1) b = np.matmul(A, x) return A, x, b def problem_2_3_1_generic_solver(n, data_generator): A, x, b = system_provider(n, data_generator) xc = np.linalg.solve(A, b) # forward error forward_error = np.max(np.abs(x - xc)) # relative forward error relative_forward_error = forward_error / np.max(np.abs(x)) # backward error backward_error = np.max(np.abs(b - np.matmul(A, xc))) # relative backward error relative_backward_error = backward_error / np.max(np.abs(b)) # error magnification factor error_magnification_factor = relative_forward_error / relative_backward_error # condition number condA = condition_number(A) return forward_error, error_magnification_factor, condA def problem_2_3_1_solver(n): return problem_2_3_1_generic_solver(n, lambda i, j : 5 / (i + 2 * j - 1)) # (a) n = 6 print('(a) n = 6, forward error = {:.3g}, error magnification factor = {:.3g}, condition number = {:.3g}'.format(*problem_2_3_1_solver(6))) # (b) n = 10 print('(b) n = 10, forward error = {:.3g}, error magnification factor = {:.3g}, condition number = {:.3g}'.format(*problem_2_3_1_solver(10))) """ Explanation: 2.3 Computer Problems For the n x n matrix with entries $A_{ij} = 5 / (i + 2j - 1)$, set $x = [1,\cdots,1]^T$ and $b = Ax$. Use the MATLAB program from Computer Problem 2.1.1 or MATLAB’s backslash command to compute $x_c$, the double precision computed solution. Find the infinity norm of the forward error and the error magnification factor of the problem $Ax = b$, and compare it with the condition number of A: (a) n = 6 (b) n = 10. End of explanation """ def problem_2_3_2_solver(n): return problem_2_3_1_generic_solver(n, lambda i, j : 1 / (np.abs(i - j) + 1)) # (a) n = 6 print('(a) n = 6, forward error = {:.3g}, error magnification factor = {:.3g}, condition number = {:.3g}'.format(*problem_2_3_2_solver(6))) # (b) n = 10 print('(b) n = 10, forward error = {:.3g}, error magnification factor = {:.3g}, condition number = {:.3g}'.format(*problem_2_3_2_solver(10))) """ Explanation: Carry out Computer Problem 1 for the matrix with entries $A_{ij} = 1/(|i - j| + 1)$. End of explanation """ def problem_2_3_3_solver(n): return problem_2_3_1_generic_solver(n, lambda i, j : np.abs(i - j) + 1) # n = 100 print('n = 100, forward error = {:.2g}, error magnification factor = {:.2g}, condition number = {:.2g}'.format(*problem_2_3_3_solver(100))) # n = 200 print('n = 200, forward error = {:.2g}, error magnification factor = {:.2g}, condition number = {:.2g}'.format(*problem_2_3_3_solver(200))) # n = 300 print('n = 300, forward error = {:.2g}, error magnification factor = {:.2g}, condition number = {:.2g}'.format(*problem_2_3_3_solver(300))) # n = 400 print('n = 400, forward error = {:.2g}, error magnification factor = {:.2g}, condition number = {:.2g}'.format(*problem_2_3_3_solver(400))) # n = 500 print('n = 500, forward error = {:.2g}, error magnification factor = {:.2g}, condition number = {:.2g}'.format(*problem_2_3_3_solver(500))) """ Explanation: Let A be the n x n matrix with entries $A_{ij} = |i - j| + 1$. Define $x = [1,\cdots,1]^T$ and $b = Ax$. 
For n = 100,200,300,400, and 500, use the MATLAB program from Computer Problem 2.1.1 or MATLAB’s backslash command to compute $x_c$, the double precision computed solution. Calculate the infinity norm of the forward error for each solution. Find the five error magnification factors of the problems $Ax = b$, and compare with the corresponding condition numbers. End of explanation """ def problem_2_3_4_solver(n): return problem_2_3_1_generic_solver(n, lambda i, j : np.sqrt(np.power(i - j, 2) + n / 10)) # n = 100 print('n = 100, forward error = {:.2g}, error magnification factor = {:.2g}, condition number = {:.2g}'.format(*problem_2_3_4_solver(100))) # n = 200 print('n = 200, forward error = {:.2g}, error magnification factor = {:.2g}, condition number = {:.2g}'.format(*problem_2_3_4_solver(200))) # n = 300 print('n = 300, forward error = {:.2g}, error magnification factor = {:.2g}, condition number = {:.2g}'.format(*problem_2_3_4_solver(300))) # n = 400 print('n = 400, forward error = {:.2g}, error magnification factor = {:.2g}, condition number = {:.2g}'.format(*problem_2_3_4_solver(400))) # n = 500 print('n = 500, forward error = {:.2g}, error magnification factor = {:.2g}, condition number = {:.2g}'.format(*problem_2_3_4_solver(500))) """ Explanation: Carry out the steps of Computer Problem 3 for the matrix with entries $A_{ij} = \sqrt{(i - j)^2 + n / 10}$. End of explanation """ print('n = 11, forward error = {:.3g}, error magnification factor = {:.3g}, condition number = {:.3g}'.format(*problem_2_3_1_solver(11))) """ Explanation: For what values of n does the solution in Computer Problem 1 have no correct significant digits? End of explanation """ A = np.array([1, -1, 3, -1, 0, -2, 2, 2, 4]).reshape(3, 3) b = np.array([-3, 1, 0]) lu, piv = linalg.lu_factor(A) x = linalg.lu_solve([lu, piv], b) print(x) """ Explanation: 2.4 The PA=LU Factorization Example Apply Gaussian elimination with partial pivoting to solve the system \begin{matrix} x_1 - x_2 + 3x_3 = -3 & \ -1x_1 - 2x_3 = 1 & \ 2x_1 + 2x_2 + 4x_3 = 0 \end{matrix} End of explanation """ """ [[2, 3] [3, 2]] """ A = np.array([2, 3, 3, 2]).reshape(2, 2) b = np.array([4, 1]) lu, piv = linalg.lu_factor(A) x = linalg.lu_solve([lu, piv], b) print(x) """ Explanation: Example Solve the system $2x_1 + 3x_2 = 4$,$3x_1 + 2x_2 = 1$ using the PA = LU factorization with partial pivoting End of explanation """ def jacobi_method(A, b, x0, k): """ Use jacobi method to solve equations Args: A (numpy 2d array): the matrix b (numpy 1d array): the right hand side vector x0 (numpy 1d array): initial guess k (real number): iterations Return: The approximate solution Exceptions: ValueError The size of matrix's column is not equal to the size of vector's size """ if A.shape[1] is not x0.shape[0] : raise ValueError('The size of the columns of matrix A must be equal to the size of the x0') D = np.diag(A.diagonal()) inv_D = linalg.inv(D) LU = A - D xk = x0 for _ in range(k): xk = np.matmul(b - np.matmul(LU, xk), inv_D) return xk """ Explanation: 2.5 Iterative Methods Jacobi Method End of explanation """ A = np.array([3, 1, 1, 2]).reshape(2, 2) b = np.array([5, 5]) x = jacobi_method(A, b, np.array([0, 0]), 20) print('x = %s' %x) """ Explanation: Example Apply the Jacobi Method to the system $3u + v = 5$, $u + 2v = 5$ End of explanation """ def gauss_seidel_method(A, b, x0, k): """ Use gauss seidel method to solve equations Args: A (numpy 2d array): the matrix b (numpy 1d array): the right hand side vector x0 (numpy 1d array): initial guess k (real number): 
iterations Return: The approximate solution Exceptions: ValueError The size of matrix's column is not equal to the size of vector's size """ if A.shape[1] is not x0.shape[0] : raise ValueError('The size of the columns of matrix A must be equal to the size of the x0') D = np.diag(A.diagonal()) L = np.tril(A) - D U = np.triu(A) - D inv_LD = linalg.inv(L + D) xk = x0 for _ in range(k): xk = np.matmul(inv_LD, -np.matmul(U, xk) + b) return xk """ Explanation: Gauss-Seidel Method End of explanation """ A = np.array([3, 1, -1, 2, 4, 1, -1, 2, 5]).reshape(3, 3) b = np.array([4, 1, 1]) x0 = np.array([0, 0, 0]) gauss_seidel_method(A, b, x0, 24) """ Explanation: Example Apply the Gauss-Seidel Method to the system $$ \begin{bmatrix} 3 & 1 & -1 \ 2 & 4 & 1 \ -1 & 2 & 5 \end{bmatrix} \begin{bmatrix} u \ v \ w \end{bmatrix} = \begin{bmatrix} 4 \ 1 \ 1 \end{bmatrix} $$ End of explanation """ def gauss_seidel_sor_method(A, b, w, x0, k): """ Use gauss seidel method with sor to solve equations Args: A (numpy 2d array): the matrix b (numpy 1d array): the right hand side vector w (real number): weight x0 (numpy 1d array): initial guess k (real number): iterations Return: The approximate solution Exceptions: ValueError The size of matrix's column is not equal to the size of vector's size """ if A.shape[1] is not x0.shape[0] : raise ValueError('The size of the columns of matrix A must be equal to the size of the x0') D = np.diag(A.diagonal()) L = np.tril(A) - D U = np.triu(A) - D inv_LD = linalg.inv(w * L + D) xk = x0 for _ in range(k): xk = np.matmul(w * inv_LD, b) + np.matmul(inv_LD, (1 - w) * np.matmul(D, xk) - w * np.matmul(U, xk)) return xk """ Explanation: Successive Over-Relaxation End of explanation """ A = np.array([3, 1, -1, 2, 4, 1, -1, 2, 5]).reshape(3, 3) b = np.array([4, 1, 1]) x0 = np.array([0, 0, 0]) w = 1.25 gauss_seidel_sor_method(A, b, w, x0, 14) """ Explanation: Example Apply the Gauss-Seidel Method with sor to the system $$ \begin{bmatrix} 3 & 1 & -1 \ 2 & 4 & 1 \ -1 & 2 & 5 \end{bmatrix} \begin{bmatrix} u \ v \ w \end{bmatrix} = \begin{bmatrix} 4 \ 1 \ 1 \end{bmatrix} $$ End of explanation """ A = np.array([4, -2, 2, -2, 2, -4, 2, -4, 11]).reshape(3, 3) R = linalg.cholesky(A) print(R) """ Explanation: 2.6 Methods for symmetric positive-definite matrices Cholesky factorization Example Find the Cholesky factorization of $\begin{bmatrix} 4 & -2 & 2 \ -2 & 2 & -4 \ 2 & -4 & 11 \end{bmatrix}$ End of explanation """ def conjugate_gradient_method(A, b, x0, k): """ Use conjugate gradient to solve linear equations Args: A : input matrix b : input right hand side vector x0 : initial guess k : iteration Returns: approximate solution """ xk = x0 dk = rk = b - np.matmul(A, x0) for _ in range(k): if not np.any(rk) or all( abs(i) <= 1e-16 for i in rk) is True: break ak = float(np.matmul(rk.T, rk)) / float(np.matmul(dk.T, np.matmul(A, dk))) xk = xk + ak * dk rk1 = rk - ak * np.matmul(A, dk) bk = np.matmul(rk1.T, rk1) / np.matmul(rk.T, rk) dk = rk1 + bk * dk rk = rk1 return xk """ Explanation: Conjugate Gradient Method End of explanation """ A = np.array([2, 2, 2, 5]).reshape(2, 2) b = np.array([6, 3]) x0 = np.array([0, 0]) conjugate_gradient_method(A, b, x0, 2) """ Explanation: Example Solve $$ \begin{bmatrix} 2 & 2 \ 2 & 5 \ \end{bmatrix} \begin{bmatrix} u \ v \end{bmatrix} = \begin{bmatrix} 6 \ 3 \end{bmatrix} $$ using the Conjugate Gradient Method End of explanation """ A = np.array([1, -1, 0, -1, 2, 1, 0, 1, 2]).reshape(3, 3) b = np.array([0, 2, 3]) x0 = np.array([0, 0, 0]) conjugate_gradient_method(A, 
b, x0, 10) """ Explanation: Example Solve $$ \begin{bmatrix} 1 & -1 & 0 \ -1 & 2 & 1 \ 0 & 1 & 2 \ \end{bmatrix} \begin{bmatrix} u \ v \ w \ \end{bmatrix} = \begin{bmatrix} 0 \ 2 \ 3 \ \end{bmatrix} $$ End of explanation """ A = np.array([1, -1, 0, -1, 2, 1, 0, 1, 5]).reshape(3, 3) b = np.array([3, -3, 4]) x0 = np.array([0, 0, 0]) x = slinalg.cg(A, b, x0)[0] print('x = %s' %x ) """ Explanation: Example Solve $$ \begin{bmatrix} 1 & -1 & 0 \ -1 & 2 & 1 \ 0 & 1 & 5 \ \end{bmatrix} \begin{bmatrix} u \ v \ w \ \end{bmatrix} = \begin{bmatrix} 3 \ -3 \ 4 \ \end{bmatrix} $$ End of explanation """ def multivariate_newton_method(fA, fDA, x0, k): """ Args: fA (function handle) : coefficient matrix with arguments fDA (function handle) : right-hand side vector with arguments x0 (numpy 2d array) : initial guess k (real number) : iteration Return: Approximate solution xk after k iterations """ xk = x0 for _ in range(k): lu, piv = linalg.lu_factor(fDA(*xk)) s = linalg.lu_solve([lu, piv], -fA(*xk)) xk = xk + s return xk """ Explanation: Preconditioning 2.7 Nonlinear Systems Of Equations Multivariate Newton's Method End of explanation """ fA = lambda u,v : np.array([v - pow(u, 3), pow(u, 2) + pow(v, 2) - 1], dtype=np.float64) fDA = lambda u,v : np.array([-3 * pow(u, 2), 1, 2 * u, 2 * v], dtype=np.float64).reshape(2, 2) x0 = np.array([1, 2]) multivariate_newton_method(fA, fDA, x0, 10) """ Explanation: Example Use Newton's method with starting guess $(1,2)$ to find a solution of the system $$ v - u^3 = 0 \ u^2 + v^2 - 1 = 0 $$ End of explanation """ fA = lambda u,v : np.array([6 * pow(u, 3) + u * v - 3 * pow(v, 3) - 4, pow(u, 2) - 18 * u * pow(v, 2) + 16 * pow(v, 3) + 1], dtype=np.float64) fDA = lambda u,v : np.array([18 * pow(u, 2) + v, u - 9 * pow(v, 2), 2 * u - 18 * pow(v, 2), -36 * u * v + 48 * pow(v, 2)], dtype=np.float64).reshape(2, 2) x0 = np.array([2, 2], dtype=np.float64) multivariate_newton_method(fA, fDA, x0, 5) """ Explanation: Example Use Newton's method to find the solutions of the system $$ f_1(u,v) = 6u^3 + uv - 3^3 - 4 = 0 \ f_2(u,v) = u^2 - 18uv^2 + 16v^3 + 1 = 0 $$ End of explanation """
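As a sanity check on the Newton iterations above, the same root can be found with SciPy's general-purpose nonlinear solver. This is a minimal sketch for the first system (v - u^3 = 0, u^2 + v^2 - 1 = 0); scipy.optimize.fsolve is standard SciPy, but using it here is an addition rather than part of the chapter:
from scipy.optimize import fsolve

# Same system as the first Newton example; fsolve should converge to a nearby root.
def F(x):
    u, v = x
    return [v - u**3, u**2 + v**2 - 1]

print(fsolve(F, [1, 2]))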
gaufung/Data_Analytics_Learning_Note
Scikit_Learning/User_Guide/Generalized_Linear_Models.ipynb
mit
from sklearn import linear_model reg = linear_model.LinearRegression() reg.fit([[0, 0], [1, 2], [2,2]], [0, 1, 2]) reg.coef_ """ Explanation: Generailized Linear Models In mathematical notion. $$\hat{y}(\omega, x)=\omega_0 + \omega_1x_1 + \ldots +\omega_px_p$$ We designate the vector $\omega=(\omega_1,\ldots,\omega_p)$ as conef_ and $\omega_0$ as intercept_ 1 Ordinary Least Squares $$\underset{\omega}{min}={\rvert \rvert X \omega -y \rvert \rvert}_{2}^2$$ End of explanation """ from sklearn import linear_model reg = linear_model.Ridge(alpha=0.5) reg.fit([[0,0],[0,0],[1, 1]], [0, .1, 1]) reg.coef_ reg.intercept_ """ Explanation: 2 Ridge Regression The ridge regression minimize a penalized residual sum of the squares $$\underset{min}{\omega}{\rvert \rvert X \omega -y \rvert \rvert}_{2}^2 + \alpha {\rvert \rvert \omega \rvert \rvert}_2^2$$ The lager the value of $\alpha$, the greater the amount of shrinkage and the thus the coefficients become more robust to collinearity End of explanation """ import numpy as np import matplotlib.pyplot as plt from sklearn import linear_model X = 1. / (np.arange(1, 11) + np.arange(0, 10)[:, np.newaxis]) y = np.ones(10) n_alphas = 200 alphas = np.logspace(-10, -2, n_alphas) clf = linear_model.Ridge(fit_intercept=False) coefs = [] for a in alphas: clf.set_params(alpha=a) clf.fit(X, y) coefs.append(clf.coef_) # display ax = plt.gca() ax.plot(alphas, coefs) ax.set_xscale('log') ax.set_xlim(ax.get_xlim()[::-1]) plt.xlabel('alpha') plt.ylabel('weights') plt.title('Ridge coefficients as a function of the regularization') plt.axis('tight') plt.show() """ Explanation: 2.1 example End of explanation """ from sklearn import linear_model reg = linear_model.RidgeCV(alphas=[0.1, 1.0, 10]) reg.fit([[0,0], [0,0],[1, 1]], [0, .1, 1]) reg.alpha_ """ Explanation: 2.2 generialized cross-validation End of explanation """ from sklearn import linear_model reg = linear_model.Lasso(alpha=0.1) reg.fit([[0,0], [1, 1]], [0, 1]) reg.predict([[1,1]]) """ Explanation: 3 Lasso It is useful in some context due to its tendency to prefer solutions with fewer parameters values, effectively reducing the number of variables upon which the given solution is dependent. $$\underset{\omega}{min}\frac{1}{2n_{samples}}{\rvert}X\omega -y\rvert_2^{2} + \alpha{\rvert}w{\rvert}_1$$ End of explanation """ from datetime import datetime import numpy as np import matplotlib.pyplot as plt from sklearn import linear_model from sklearn import datasets from sklearn.svm import l1_min_c iris = datasets.load_iris() X = iris.data y= iris.target X = X[y!=2] y = y[y!=2] X -= np.mean(X, 0) cs =l1_min_c(X, y, loss='log')*np.logspace(0, 3) print('computing regularzation path ...') start = datetime.now() clf = linear_model.LogisticRegression(C=1.0, penalty='l1', tol=1e-6) coef_ = [] for c in cs: clf.set_params(C=c) clf.fit(X, y) coef_.append(clf.coef_.ravel().copy()) print('this took', datetime.now() - start) coefs_ = np.array(coef_) plt.plot(np.log10(cs), coefs_) #ymin, ymax = plg.ylim() plt.show() """ Explanation: 4 Logistic regression As an optimization problem, binary class L2 penalized logistic regression minimizes the following cost function. 
$$\underset{\omega, c}{min}\frac{1}{2}\omega^T\omega + C\sum_{i=1}^{n}log\bigg(exp\big(-y_i(X_i^T\omega+c)\big)+1\bigg)$$ End of explanation """ import numpy as np import matplotlib.pyplot as plt from sklearn import linear_model, datasets n_samples = 1000 n_outliers = 50 X, y, coef = datasets.make_regression(n_samples=n_samples, n_features= 1, n_informative=1, noise=0, coef=True, random_state=0) # Add outlier data np.random.seed(0) X[:n_outliers] = 3 + 0.5 * np.random.normal(size=(n_outliers, 1)) y[:n_outliers] = -3 + 10 * np.random.normal(size=n_outliers) #fit line using all data model = linear_model.LinearRegression() model.fit(X,y) #robustly fit linear model with ransac algorithm model_ransac = linear_model.RANSACRegressor(linear_model.LinearRegression()) model_ransac.fit(X,y) inliner_mask = model_ransac.inlier_mask_ outlier_mask = np.logical_not(inliner_mask) #predict data of the estimated models line_X = np.arange(-5, 5) line_y = model.predict(line_X[:,np.newaxis]) line_y_ransac = model_ransac.predict(line_X[:, np.newaxis]) #compare estimated coefficient print('Estimated coefficients(true, normal, ransac)') print(coef, model.coef_, model_ransac.estimator_.coef_) lw = 2 plt.scatter(X[inliner_mask], y[inliner_mask], color='yellowgreen', marker='.', label='Inliers') plt.scatter(X[outlier_mask], y[outlier_mask], color='gold', marker='.', label='Outliers') plt.plot(line_X, line_y, color='navy', linestyle='-', linewidth=lw, label='Linear regresor') plt.plot(line_X, line_y_ransac, color='cornflowerblue', linewidth=lw, label='Ransac regressor') plt.legend(loc='lower right') plt.show() """ Explanation: 5 Robustness regression Dealing the presence of corrupt data: either outliers, or error in the model. The Scikit-learn provides 3 robust regression estimators: + RANSAC + Theil Sen + HuberRegressor RANSA: (RANdom SAmple Consensus) wiki End of explanation """ from sklearn.preprocessing import PolynomialFeatures from sklearn.linear_model import LinearRegression from sklearn.pipeline import Pipeline model = Pipeline([('poly', PolynomialFeatures(degree=3)), ('linear', LinearRegression(fit_intercept=False))]) x = np.arange(5) y = 3-2*x + x**2-x**3 model = model.fit(x[:,np.newaxis], y) model.named_steps['linear'].coef_ """ Explanation: 6 polynomial regression If we want to fit a paraboloid to the data instead of a plane, we can combine the features in second-order polynomials, so that the model look ilis this: $$\hat{y}(\omega, x)=\omega_0+\omega_1x_1+\omega_2x_2+\omega_3x_1x_2+\omega_4x_1^2+\omega_5x_2^2$$ End of explanation """
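To confirm that the fitted pipeline reproduces the cubic it was trained on, it can be evaluated on a few new points. A minimal sketch reusing the model pipeline above; x_new and the comparison values are illustrative:
# Evaluate the PolynomialFeatures + LinearRegression pipeline on new inputs and
# compare with the true cubic 3 - 2x + x^2 - x^3.
x_new = np.array([0.5, 1.5, 2.5])
print(model.predict(x_new[:, np.newaxis]))
print(3 - 2 * x_new + x_new ** 2 - x_new ** 3)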
kylemede/DS-ML-sandbox
KaggelChallenges/titanic/explore.ipynb
gpl-3.0
import pandas as pd from pandas import Series, DataFrame import numpy as np import matplotlib.pyplot as plt %matplotlib inline import seaborn as sns sns.set_style("whitegrid") # machine learning from sklearn.linear_model import LogisticRegression from sklearn.svm import SVC, LinearSVC from sklearn.ensemble import RandomForestClassifier from sklearn.neighbors import KNeighborsClassifier from sklearn.naive_bayes import GaussianNB from sklearn.cross_validation import cross_val_score from sklearn import cross_validation from sklearn.metrics import accuracy_score from sklearn.cross_validation import train_test_split from sklearn.feature_selection import SelectFromModel from sklearn.grid_search import GridSearchCV from sklearn.preprocessing import LabelEncoder from sklearn.pipeline import make_pipeline from sklearn.feature_selection import SelectKBest from sklearn.cross_validation import StratifiedKFold from sklearn.ensemble.gradient_boosting import GradientBoostingClassifier from sklearn.ensemble import ExtraTreesClassifier import xgboost as xgb from xgboost import plot_importance train_df = pd.read_csv("train.csv",dtype={"Age":np.float64},) #train_df.head() # find how many ages train_df['Age'].count() # how many ages are NaN? train_df['Age'].isnull().sum() # plot ages of training data set, with NaN's removed if False: train_df['Age'].dropna().astype(int).hist(bins=70) print 'Mean age = ',train_df['Age'].dropna().astype(int).mean() """ Explanation: My playing with the Kaggle titanic challenge. I COPPIED THE INITIAL CODE and got lots of the ideas for this first Kaggle advanture from here. I will later compact the important stuff from here into a kernal on my Kaggle account. End of explanation """ #train_df['Embarked'].head() #train_df.info() train_df['Embarked'].isnull().sum() train_df["Embarked"].count() if False: sns.countplot(x="Embarked",data=train_df) if False: sns.countplot(x='Survived',hue='Embarked',data=train_df,order=[0,1]) """ Explanation: Let's see where they got on End of explanation """ if False: embark_survive_perc = train_df[["Embarked", "Survived"]].groupby(['Embarked'],as_index=False).mean() sns.barplot(x='Embarked', y='Survived', data=embark_survive_perc,order=['S','C','Q']) """ Explanation: OK, so clearly there were more people who got on at S, and it seems their survival is disproportional. Let's check that. End of explanation """ if False: train_df['Fare'].astype(int).plot(kind='hist',bins=100, xlim=(0,50)) # get fare for survived & didn't survive passengers if False: fare_not_survived = train_df["Fare"].astype(int)[train_df["Survived"] == 0] fare_survived = train_df["Fare"].astype(int)[train_df["Survived"] == 1] # get average and std for fare of survived/not survived passengers avgerage_fare = DataFrame([fare_not_survived.mean(), fare_survived.mean()]) std_fare = DataFrame([fare_not_survived.std(), fare_survived.std()]) avgerage_fare.index.names = std_fare.index.names = ["Survived"] avgerage_fare.plot(yerr=std_fare,kind='bar',legend=False) """ Explanation: Interesting, actually those from C had higher rate of survival. So, knowing more people from your home town didn't help. Next, did how much they paid have an effect? End of explanation """ import scipy.stats as stats # column 'Age' has some NaN values # A simple approximation of the distribution of ages is a gaussian, but this is not commonly accurate. 
# lets make a vector of random ages centered on the mean, with a width of the std lower, upper = train_df['Age'].min(), train_df['Age'].max() mu, sigma = train_df["Age"].mean(), train_df["Age"].std() # number of rows n = train_df.shape[0] print 'max: ',train_df['Age'].max() print 'min: ',train_df['Age'].min() # vector of random values using the truncated normal distribution. X = stats.truncnorm((lower - mu) / sigma, (upper - mu) / sigma, loc=mu, scale=sigma) rands = X.rvs(n) # get the indexes of the elements in the original array that are NaN idx = np.isfinite(train_df['Age']) # use the indexes to replace the NON-NaNs in the random array with the good values from the original array rands[idx.values] = train_df[idx]['Age'].values ## At this point rands is now the cleaned column of data we wanted, so push it in to the original df train_df['Age'] = rands """ ## we will make a new column with Nan's replaced, then push that into the original df n = train_df.shape[0] # number of rows #randy = np.random.randint(average_age_train - std_age_train, average_age_train + std_age_train, size = n) # draw from a gaussian instead of simple uniform # note this uses a 'standard gauss' and that tneeds to have its var and mean shifted randy = np.random.randn(n)*std_age_train + average_age_train idx = np.isfinite(train_df['Age']) # gives a boolean index for the NaNs in the df's column randy[idx.values] = train_df[idx]['Age'].values ## idexing the values of randy with this #now have updated column, next push into original df train_df['Age'] = randy """ print 'After this gaussian replacment, there are: ',train_df['Age'].isnull().sum() print 'max: ',train_df['Age'].max() print 'min: ',train_df['Age'].min() # plot new Age Values if False: train_df['Age'].hist(bins=70) # Compare this to that from a few cells up for the raw ages with the NaN's dropped. Not much different actually. """ Explanation: Before digging into how the ages factor in, let's take the advice of others and replace NaN's with random values End of explanation """ ## let's pull in the test data test_df = pd.read_csv("test.csv",dtype={"Age":np.float64},) #test_df.head() #### Do the same for the test data # column 'Age' has some NaN values # A simple approximation of the distribution of ages is a gaussian, but this is not commonly accurate. # lets make a vector of random ages centered on the mean, with a width of the std lower, upper = test_df['Age'].min(), test_df['Age'].max() mu, sigma = test_df["Age"].mean(), test_df["Age"].std() # number of rows n = test_df.shape[0] print 'max: ',test_df['Age'].max() print 'min: ',test_df['Age'].min() # vector of random values using the truncated normal distribution. X = stats.truncnorm((lower - mu) / sigma, (upper - mu) / sigma, loc=mu, scale=sigma) rands = X.rvs(n) # get the indexes of the elements in the original array that are NaN idx = np.isfinite(test_df['Age']) # use the indexes to replace the NON-NaNs in the random array with the good values from the original array rands[idx.values] = test_df[idx]['Age'].values ## At this point rands is now the cleaned column of data we wanted, so push it in to the original df test_df['Age'] = rands #test_df['Age'].hist(bins=70) ## Let's make a couple nice plots of survival vs age # peaks for survived/not survived passengers by their age if False: facet = sns.FacetGrid(train_df, hue="Survived",aspect=4) #facet.map(sns.kdeplot,'Age',shade= True) # This keeps crashing the kernal, but I don't know why!!!!!!!!!! 
facet.set(xlim=(0, train_df['Age'].astype(int).max())) facet.add_legend() # average survived passengers by age if False: fig, axis1 = plt.subplots(1,1,figsize=(18,4)) average_age = train_df[["Age", "Survived"]].groupby(['Age'],as_index=False).mean() sns.barplot(x='Age', y='Survived', data=average_age) print 'max: ',train_df['Age'].astype(int).max() print 'min: ',train_df['Age'].astype(int).min() # Cabin if False: # It has a lot of NaN values, so it won't cause a remarkable impact on prediction train_df.drop("Cabin",axis=1,inplace=True) test_df.drop("Cabin",axis=1,inplace=True) ## OR convert NaNs to 'U' meaning 'Unknown' and map all to new columns if True: # Code based on that here: http://ahmedbesbes.com/how-to-score-08134-in-titanic-kaggle-challenge.html # replacing missing cabins with U (for Uknown) train_df.Cabin.fillna('U',inplace=True) # mapping each Cabin value with the cabin letter train_df['Cabin'] = train_df['Cabin'].map(lambda c : c[0]) # dummy encoding ... cabin_dummies = pd.get_dummies(train_df['Cabin'],prefix='Cabin') train_df = pd.concat([train_df,cabin_dummies],axis=1) train_df.drop('Cabin',axis=1,inplace=True) # replacing missing cabins with U (for Uknown) test_df.Cabin.fillna('U',inplace=True) # mapping each Cabin value with the cabin letter test_df['Cabin'] = test_df['Cabin'].map(lambda c : c[0]) # dummy encoding ... cabin_dummies = pd.get_dummies(test_df['Cabin'],prefix='Cabin') test_df = pd.concat([test_df,cabin_dummies],axis=1) test_df.drop('Cabin',axis=1,inplace=True) #train_df.head() #test_df.head() #train_df.head() """ Explanation: lets perform the same NaN replacement for the 'Age' with the test data as well End of explanation """ # Family # Code based on that here: http://ahmedbesbes.com/how-to-score-08134-in-titanic-kaggle-challenge.html # introducing a new feature : the size of families (including the passenger) train_df['FamilySize'] = train_df['Parch'] + train_df['SibSp'] + 1 # introducing other features based on the family size train_df['Singleton'] = train_df['FamilySize'].map(lambda s : 1 if s == 1 else 0) train_df['SmallFamily'] = train_df['FamilySize'].map(lambda s : 1 if 2<=s<=4 else 0) train_df['LargeFamily'] = train_df['FamilySize'].map(lambda s : 1 if 5<=s else 0) # Code based on that here: http://ahmedbesbes.com/how-to-score-08134-in-titanic-kaggle-challenge.html # introducing a new feature : the size of families (including the passenger) test_df['FamilySize'] = test_df['Parch'] + test_df['SibSp'] + 1 # introducing other features based on the family size test_df['Singleton'] = test_df['FamilySize'].map(lambda s : 1 if s == 1 else 0) test_df['SmallFamily'] = test_df['FamilySize'].map(lambda s : 1 if 2<=s<=4 else 0) test_df['LargeFamily'] = test_df['FamilySize'].map(lambda s : 1 if 5<=s else 0) if False: # Instead of having two columns Parch & SibSp, # we can have only one column represent if the passenger had any family member aboard or not, # Meaning, if having any family member(whether parent, brother, ...etc) will increase chances of Survival or not. 
train_df['Family'] = train_df["Parch"] + train_df["SibSp"] train_df['Family'].loc[train_df['Family'] > 0] = 1 train_df['Family'].loc[train_df['Family'] == 0] = 0 test_df['Family'] = test_df["Parch"] + test_df["SibSp"] test_df['Family'].loc[test_df['Family'] > 0] = 1 test_df['Family'].loc[test_df['Family'] == 0] = 0 # drop Parch & SibSp train_df = train_df.drop(['SibSp','Parch'], axis=1) test_df = test_df.drop(['SibSp','Parch'], axis=1) # plot if False: fig, (axis1,axis2) = plt.subplots(1,2,sharex=True,figsize=(10,5)) # sns.factorplot('Family',data=train_df,kind='count',ax=axis1) sns.countplot(x='Family', data=train_df, order=[1,0], ax=axis1) # average of survived for those who had/didn't have any family member family_perc = train_df[["Family", "Survived"]].groupby(['Family'],as_index=False).mean() sns.barplot(x='Family', y='Survived', data=family_perc, order=[1,0], ax=axis2) axis1.set_xticklabels(["With Family","Alone"], rotation=0) # Sex # As we see, children(age < ~16) on aboard seem to have a high chances for Survival. # So, we can classify passengers as males, females, and child def get_person(passenger): age,sex = passenger return 'child' if age < 16 else sex train_df['Person'] = train_df[['Age','Sex']].apply(get_person,axis=1) test_df['Person'] = test_df[['Age','Sex']].apply(get_person,axis=1) # No need to use Sex column since we created Person column train_df.drop(['Sex'],axis=1,inplace=True) test_df.drop(['Sex'],axis=1,inplace=True) # create dummy variables for Person column person_dummies_titanic = pd.get_dummies(train_df['Person']) person_dummies_titanic.columns = ['Child','Female','Male'] #person_dummies_titanic.drop(['Male'], axis=1, inplace=True) person_dummies_test = pd.get_dummies(test_df['Person']) person_dummies_test.columns = ['Child','Female','Male'] #person_dummies_test.drop(['Male'], axis=1, inplace=True) train_df = train_df.join(person_dummies_titanic) test_df = test_df.join(person_dummies_test) if False: fig, (axis1,axis2) = plt.subplots(1,2,figsize=(10,5)) # sns.factorplot('Person',data=train_df,kind='count',ax=axis1) sns.countplot(x='Person', data=train_df, ax=axis1) # average of survived for each Person(male, female, or child) person_perc = train_df[["Person", "Survived"]].groupby(['Person'],as_index=False).mean() sns.barplot(x='Person', y='Survived', data=person_perc, ax=axis2, order=['male','female','child']) train_df.drop(['Person'],axis=1,inplace=True) test_df.drop(['Person'],axis=1,inplace=True) """ Explanation: This function introduces 4 new features: FamilySize : the total number of relatives including the passenger (him/her)self. 
Sigleton : a boolean variable that describes families of size = 1 SmallFamily : a boolean variable that describes families of 2 <= size <= 4 LargeFamily : a boolean variable that describes families of 5 < size End of explanation """ # Pclass # sns.factorplot('Pclass',data=titanic_df,kind='count',order=[1,2,3]) if False: sns.factorplot('Pclass','Survived',order=[1,2,3], data=train_df,size=5) # create dummy variables for Pclass column pclass_dummies_titanic = pd.get_dummies(train_df['Pclass']) pclass_dummies_titanic.columns = ['Class_1','Class_2','Class_3'] #pclass_dummies_titanic.drop(['Class_3'], axis=1, inplace=True) pclass_dummies_test = pd.get_dummies(test_df['Pclass']) pclass_dummies_test.columns = ['Class_1','Class_2','Class_3'] #pclass_dummies_test.drop(['Class_3'], axis=1, inplace=True) train_df.drop(['Pclass'],axis=1,inplace=True) test_df.drop(['Pclass'],axis=1,inplace=True) train_df = train_df.join(pclass_dummies_titanic) test_df = test_df.join(pclass_dummies_test) # Ticket # Code based on that here: http://ahmedbesbes.com/how-to-score-08134-in-titanic-kaggle-challenge.html # a function that extracts each prefix of the ticket, returns 'XXX' if no prefix (i.e the ticket is a digit) def cleanTicket(ticket): ticket = ticket.replace('.','') ticket = ticket.replace('/','') ticket = ticket.split() ticket = map(lambda t : t.strip() , ticket) ticket = list(filter(lambda t : not t.isdigit(), ticket)) if len(ticket) > 0: return ticket[0] else: return 'XXX' train_df['Ticket'] = train_df['Ticket'].map(cleanTicket) tickets_dummies = pd.get_dummies(train_df['Ticket'],prefix='Ticket') train_df = pd.concat([train_df, tickets_dummies],axis=1) train_df.drop('Ticket',inplace=True,axis=1) test_df['Ticket'] = test_df['Ticket'].map(cleanTicket) tickets_dummies = pd.get_dummies(test_df['Ticket'],prefix='Ticket') test_df = pd.concat([test_df, tickets_dummies],axis=1) test_df.drop('Ticket',inplace=True,axis=1) train_df.head() # Title # a map of more aggregated titles Title_Dictionary = { "Capt": "Officer", "Col": "Officer", "Major": "Officer", "Jonkheer": "Royalty", "Don": "Royalty", "Sir" : "Royalty", "Dr": "Officer", "Rev": "Officer", "the Countess":"Royalty", "Dona": "Royalty", "Mme": "Mrs", "Mlle": "Miss", "Ms": "Mrs", "Mr" : "Mr", "Mrs" : "Mrs", "Miss" : "Miss", "Master" : "Master", "Lady" : "Royalty" } # we extract the title from each name train_df['Title'] = train_df['Name'].map(lambda name:name.split(',')[1].split('.')[0].strip()) # we map each title train_df['Title'] = train_df.Title.map(Title_Dictionary) #train_df.head() # we extract the title from each name test_df['Title'] = test_df['Name'].map(lambda name:name.split(',')[1].split('.')[0].strip()) # we map each title test_df['Title'] = test_df.Title.map(Title_Dictionary) #test_df.head() # encoding in dummy variable titles_dummies = pd.get_dummies(train_df['Title'],prefix='Title') train_df = pd.concat([train_df,titles_dummies],axis=1) titles_dummies = pd.get_dummies(test_df['Title'],prefix='Title') test_df = pd.concat([test_df,titles_dummies],axis=1) # removing the title variable train_df.drop('Title',axis=1,inplace=True) test_df.drop('Title',axis=1,inplace=True) # Convert categorical column values to ordinal for model fitting if False: le_title = LabelEncoder() # To convert to ordinal: train_df.Title = le_title.fit_transform(train_df.Title) test_df.Title = le_title.fit_transform(test_df.Title) # To convert back to categorical: #train_df.Title = le_title.inverse_transform(train_df.Title) #train_df.head() #test_df.head() """ Explanation: Not 
surprising, woman and children had higher survival rates. End of explanation """ #train_df.drop(['Embarked'], axis=1,inplace=True) #test_df.drop(['Embarked'], axis=1,inplace=True) # only for test_df, since there is a missing "Fare" values # could use mean or median here. test_df["Fare"].fillna(test_df["Fare"].mean(), inplace=True) train_df.drop(['Name'], axis=1,inplace=True) test_df.drop(['Name'], axis=1,inplace=True) #train_df.drop(['Ticket'], axis=1,inplace=True) #test_df.drop(['Ticket'], axis=1,inplace=True) #train_df.drop(['PassengerId'], axis=1,inplace=True) #test_df.drop(['PassengerId'], axis=1,inplace=True) # only in titanic_df, fill the two missing values with the most occurred value, which is "S". train_df["Embarked"] = train_df["Embarked"].fillna("S") # Either to consider Embarked column in predictions, # and remove "S" dummy variable, # and leave "C" & "Q", since they seem to have a good rate for Survival. # OR, don't create dummy variables for Embarked column, just drop it, # because logically, Embarked doesn't seem to be useful in prediction. embark_dummies_train = pd.get_dummies(train_df['Embarked']) #embark_dummies_train.drop(['S'], axis=1, inplace=True) embark_dummies_test = pd.get_dummies(test_df['Embarked']) #embark_dummies_test.drop(['S'], axis=1, inplace=True) train_df = train_df.join(embark_dummies_train) test_df = test_df.join(embark_dummies_test) train_df.drop(['Embarked'], axis=1,inplace=True) test_df.drop(['Embarked'], axis=1,inplace=True) """ Explanation: Also unsurprising. The higher the booking class, then higher the chances to survive. Now lets get to actually training and building a model to make predictions with! problems with the raw data a couple NaNs in 'Embarked', so drop column 'Name' strings can't be converted to anything useful, so drop column replace NaNs in 'Fare' with median 'Ticket' can't be converted to anything useful, so drop column 'PassengerID' has no importance, so drop column End of explanation """ ## Scale all features except passengerID features = list(train_df.columns) features.remove('PassengerId') train_df[features] = train_df[features].apply(lambda x: x/x.max(), axis=0) features = list(test_df.columns) features.remove('PassengerId') test_df[features] = test_df[features].apply(lambda x: x/x.max(), axis=0) train_df.head() test_df.head() """ Explanation: The names are also pointless, so drop them too End of explanation """ ## Remove extra columns in training DF that are not in test DF train_cs = list(train_df.columns) train_cs.remove('Survived') test_cs = list(test_df.columns) for c in train_cs: if c not in test_cs: print repr(c)+' not in test columns, so removing it from training df' train_df.drop([c], axis=1,inplace=True) for c in test_cs: if c not in train_cs: print repr(c)+' not in training columns, so removing it from test df' test_df.drop([c], axis=1,inplace=True) if False: print '\nFor train_df:' for column in train_df: print "# Nans in column '"+column+"' are: "+str(train_df[column].isnull().sum()) print 'min: ',train_df[column].min() print 'max: ',train_df[column].max() print '\nFor test_df:' for column in test_df: print "# Nans in column '"+column+"' are: "+str(test_df[column].isnull().sum()) print 'min: ',test_df[column].min() print 'max: ',test_df[column].max() # define training and testing sets X_train = train_df.drop("Survived",axis=1) Y_train = train_df["Survived"] X_test = test_df.copy() X_train.head() X_test.head() """ Explanation: match up dataframe columns, by removing extras not in the test set. 
End of explanation """ clf = ExtraTreesClassifier(n_estimators=200) clf = clf.fit(X_train, Y_train) features = pd.DataFrame() features['feature'] = X_train.columns features['importance'] = clf.feature_importances_ features.sort(['importance'],ascending=False) """ Explanation: Feature Selection End of explanation """ model = SelectFromModel(clf, prefit=True) X_train_new = model.transform(X_train) X_train_new.shape X_test_new = model.transform(X_test) X_test_new.shape # Logistic Regression logreg = LogisticRegression() logreg.fit(X_train_new, Y_train) Y_pred = logreg.predict(X_test_new) print('standard score ', logreg.score(X_train_new, Y_train)) print('cv score ',np.mean(cross_val_score(logreg, X_train_new, Y_train, cv=10))) # Support Vector Machines svc = SVC() svc.fit(X_train_new, Y_train) Y_pred = svc.predict(X_test_new) #svc.score(X_train, Y_train) print('standard score ', svc.score(X_train_new, Y_train)) print('cv score ',np.mean(cross_val_score(svc, X_train_new, Y_train, cv=10))) # Random Forests random_forest = RandomForestClassifier(n_estimators=300) random_forest.fit(X_train_new, Y_train) Y_pred = random_forest.predict(X_test_new) print('standard score ', random_forest.score(X_train_new, Y_train)) print('cv score ',np.mean(cross_val_score(random_forest, X_train_new, Y_train, cv=10))) acc = [] mx_v = 0 mx_e = 0 ests = range(10,500,10) if False: for est in ests: random_forest = RandomForestClassifier(n_estimators=est) random_forest.fit(X_train_new, Y_train) Y_pred = random_forest.predict(X_test_new) #predictions = model.predict(X_test) #accuracy = accuracy_score(y_test, predictions) accuracy = np.mean(cross_val_score(random_forest, X_train_new, Y_train, cv=5))* 100.0 acc.append(accuracy) if acc[-1]>mx_v: mx_v = acc[-1] mx_e = est print("maxes were: ",(mx_e,mx_v)) fig = plt.figure(figsize=(7,5)) subPlot = fig.add_subplot(111) subPlot.plot(ests,acc,linewidth=3) # From Comment by 'Ewald' at: # https://www.kaggle.com/c/job-salary-prediction/forums/t/4000/how-to-add-crossvalidation-to-scikit-randomforestregressor if True: num_folds = 10 num_instances = len(X_train_new) seed = 7 num_trees = 300 max_features = 'auto' kfold = cross_validation.KFold(n=num_instances, n_folds=num_folds, random_state=seed) model = RandomForestClassifier(n_estimators=num_trees, max_features=max_features, min_samples_leaf=50) results= cross_validation.cross_val_score(model, X_train_new, Y_train, cv=kfold, n_jobs=-1) print(results.mean()) # Another form of K-fold and hyperparameter tuning from: #http://ahmedbesbes.com/how-to-score-08134-in-titanic-kaggle-challenge.html forest = RandomForestClassifier(max_features='sqrt') parameter_grid = { 'max_depth' : [3,4,5,6,7], 'n_estimators': [50,100,130,175,200,210,240,250], 'criterion': ['gini','entropy'] } cross_validation = StratifiedKFold(Y_train, n_folds=5) import timeit tic=timeit.default_timer() grid_search = GridSearchCV(forest, param_grid=parameter_grid, cv=cross_validation) grid_search.fit(X_train_new, Y_train) print('Best score: {}'.format(grid_search.best_score_)) print('Best parameters: {}'.format(grid_search.best_params_)) toc = timeit.default_timer() print("It took: ",toc-tic) # K Nearest Neighbors knn = KNeighborsClassifier(n_neighbors = 50) knn.fit(X_train_new, Y_train) Y_pred = knn.predict(X_test_new) #knn.score(X_train_new, Y_train) print('standard score ', knn.score(X_train_new, Y_train)) print('cv score ',np.mean(cross_val_score(knn, X_train_new, Y_train, cv=10))) # Gaussian Naive Bayes gaussian = GaussianNB() gaussian.fit(X_train_new, Y_train) Y_pred = 
gaussian.predict(X_test_new) #gaussian.score(X_train, Y_train) print('standard score ', gaussian.score(X_train_new, Y_train)) print('cv score ',np.mean(cross_val_score(gaussian, X_train_new, Y_train, cv=10))) # get Correlation Coefficient for each feature using Logistic Regression coeff_df = DataFrame(train_df.columns.delete(0)) coeff_df.columns = ['Features'] coeff_df["Coefficient Estimate"] = pd.Series(logreg.coef_[0]) # preview #coeff_df """ Explanation: Select top features for use in models End of explanation """ if False: submission = pd.DataFrame({ "PassengerId": test_df["PassengerId"], "Survived": Y_pred }) submission.to_csv('submission.csv', index=False) if False: ### Using XGboost #X_train = train_df.drop("Survived",axis=1) #train_X = train_df.drop("Survived",axis=1).as_matrix() X_train_new #X_train_new, Y_train, X_test_new #train_y = train_df["Survived"] Y_train #test_X = test_df.drop("PassengerId",axis=1).copy().as_matrix() X_test_new model = xgb.XGBClassifier(max_depth=10, n_estimators=300, learning_rate=0.05) model.fit(X_train_new, Y_train) predictions = model.predict(X_test_new) # plot feature importance plot_importance(model) plt.show() #X_train, X_test, y_train, y_test = train_test_split(X_train_new, Y_train, test_size=0.33) #accuracy = accuracy_score(y_test, predictions) #print("Accuracy: %.2f%%" % (accuracy * 100.0)) # basic try at iterative training with XGboost #train_X = train_df.drop("Survived",axis=1).as_matrix() #train_y = train_df["Survived"] #test_X = test_df.drop("PassengerId",axis=1).copy().as_matrix() # fit model on all training data acc = [] mx_v = 0 mx_e = 0 ests = range(10,500,10) if False: for est in ests: #print est model = xgb.XGBClassifier(max_depth=5, n_estimators=ests, learning_rate=0.05) X_train, X_test, y_train, y_test = train_test_split(X_train_new, Y_train, test_size=0.33)#, random_state=7) model.fit(X_train, y_train) predictions = model.predict(X_test) accuracy = accuracy_score(y_test, predictions) accuracy *= 100.0 acc.append(accuracy) #print("Accuracy: %.2f%%" % (accuracy)) if acc[-1]>mx_v: mx_v = acc[-1] mx_e = est print("maxes were: ",(mx_e,mx_v)) fig = plt.figure(figsize=(7,5)) subPlot = fig.add_subplot(111) subPlot.plot(ests,acc,linewidth=3) model = xgb.XGBClassifier(max_depth=5, n_estimators=300, learning_rate=0.05) for i in range(10): print "Iteration: "+str(i) # split data into train and test sets X_train, X_test, y_train, y_test = train_test_split(X_train_new, Y_train, test_size=0.33)#, random_state=7) model.fit(X_train, y_train) predictions = model.predict(X_test) accuracy = accuracy_score(y_test, predictions) print("Accuracy: %.2f%%" % (accuracy * 100.0)) print "After rounds of training. 
Results on original training data:" predictions = model.predict(train_X) accuracy = accuracy_score(train_y, predictions) print("Accuracy: %.2f%%" % (accuracy * 100.0)) # use feature importance for feature selection from numpy import loadtxt from numpy import sort from xgboost import XGBClassifier from sklearn.cross_validation import train_test_split from sklearn.metrics import accuracy_score from sklearn.feature_selection import SelectFromModel import timeit tic=timeit.default_timer() # load data #dataset = loadtxt('pima-indians-diabetes.csv', delimiter=",") # split data into X and y X = X_train_new Y = Y_train # split data into train and test sets X_train, X_test, y_train, y_test = train_test_split(X, Y, test_size=0.33, random_state=7) # fit model on all training data model = XGBClassifier(max_depth=10, nthread=100, n_estimators=300, learning_rate=0.05) model.fit(X_train, y_train) # make predictions for test data and evaluate y_pred = model.predict(X_test) predictions = [round(value) for value in y_pred] accuracy = accuracy_score(y_test, predictions) print("Accuracy: %.2f%%" % (accuracy * 100.0)) toc = timeit.default_timer() print("It took: ",toc-tic) # Fit model using each importance as a threshold if False: thresholds = sort(model.feature_importances_) for thresh in thresholds: # select features using threshold selection = SelectFromModel(model, threshold=thresh, prefit=True) select_X_train = selection.transform(X_train) # train model selection_model = XGBClassifier(max_depth=10, nthread=100, n_estimators=300, learning_rate=0.05) selection_model.fit(select_X_train, y_train) # eval model select_X_test = selection.transform(X_test) y_pred = selection_model.predict(select_X_test) predictions = [round(value) for value in y_pred] accuracy = accuracy_score(y_test, predictions) print("Thresh=%.3f, n=%d, Accuracy: %.2f%%" % (thresh, select_X_train.shape[1], accuracy*100.0)) cv_params = {'max_depth': [3], 'min_child_weight': [1]} ind_params = {'learning_rate': 0.05, 'n_estimators': 100, 'nthread':100, 'seed':0, 'subsample': 0.8, 'colsample_bytree': 0.8, 'objective': 'binary:logistic'} optimized_GBM = GridSearchCV(xgb.XGBClassifier(**ind_params), cv_params, scoring = 'accuracy', cv = 2, n_jobs = -1) import timeit tic=timeit.default_timer() #X_train_new, Y_train, X_test_new #optimized_GBM.fit(X_train_new, Y_train) #optimized_GBM.grid_scores_ toc = timeit.default_timer() print("It took: ",toc-tic) """ Explanation: <font color='red'> NEXT: TRY TO PERFORM BOOSTING WITH SKLEARN, NOT XGBoost to see how it changes above results. THEN TRY TO BUILD SINGLE LAYER NEURAL NETWORKS TO SEE HOW THEY PERFORM, THEN TRY MULTI-LAYER NEURAL NETWORKS.</font> XGBoost stuff End of explanation """
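Following up on the note above about trying boosting with scikit-learn rather than XGBoost, a minimal sketch is given below. It is not part of the original notebook: it assumes X_train_new and Y_train from the cells above are still in scope, reuses the cross_val_score helper already imported earlier, and the hyperparameters are placeholders rather than tuned values.

```python
# Sketch: gradient boosting with scikit-learn instead of XGBoost.
# Assumes X_train_new, Y_train, np and cross_val_score exist as used above.
from sklearn.ensemble import GradientBoostingClassifier

gbc = GradientBoostingClassifier(n_estimators=300, learning_rate=0.05, max_depth=3)
gbc.fit(X_train_new, Y_train)
print('standard score ', gbc.score(X_train_new, Y_train))
print('cv score ', np.mean(cross_val_score(gbc, X_train_new, Y_train, cv=10)))
```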
utensil/julia-playground
packages/galgebra.py.ipynb
mit
from sympy import solve,sqrt g = '0 # #,# 0 #,# # 1' necl = Ga('X Y e',g=g) (X,Y,e) = necl.mv() X Y e (X^Y)*(X^Y) L = X^Y^e L B = (L*e).expand().blade_rep() B Bsq = B*B Bsq BsqScalar = Bsq.scalar() BsqScalar (Bsq - BsqScalar).simplify() == 0 BeBr = B*e*B.rev() BeBr B*B L*L (s,c,Binv,M,S,C,alpha) = symbols('s c (1/B) M S C alpha') XdotY = necl.g[0,1] Xdote = necl.g[0,2] Ydote = necl.g[1,2] XdotY Xdote Ydote Bhat = Binv*B Bhat R = c+s*Bhat # Rotor R = exp(alpha*Bhat/2) R Z = R*X*R.rev() Z Z.obj Z.obj = expand(Z.obj) Z.obj Z.obj = Z.obj.collect([Binv,s,c,XdotY]) Z.obj Z W = Z|Y W W.scalar() W = W.scalar() W = expand(W) W = simplify(W) W = W.collect([s*Binv]) W Bsq = Bsq.scalar() M = 1/Bsq M W = W.subs(Binv**2,M) W W = simplify(W) W Bmag = sqrt(XdotY**2-2*XdotY*Xdote*Ydote) Bmag W = W.collect([Binv*c*s,XdotY]) W W = W.subs(2*XdotY**2-4*XdotY*Xdote*Ydote,2/(Binv**2)) W = W.subs(2*c*s,S) W = W.subs(c**2,(C+1)/2) W = W.subs(s**2,(C-1)/2) W = simplify(W) W = W.subs(1/Binv,Bmag) W = expand(W) W # FIXME assert str(W.simplify()) == '(X.Y)*C - (X.e)*(Y.e)*C + (X.e)*(Y.e) + S*sqrt((X.Y)**2 - 2*(X.Y)*(X.e)*(Y.e))' W.simplify() Wd = collect(W,[C,S],exact=True,evaluate=False) Wd_1 = Wd[one] Wd_C = Wd[C] Wd_S = Wd[S] Wd # FIXME assert str(Wd_1) == '(X.e)*(Y.e)' Wd_1 Wd_1.simplify() # FIXME assert str(Wd_C) == '(X.Y) - (X.e)*(Y.e)' Wd_C Wd_C.simplify() Wd_S """ Explanation: test_noneuclidian_distance_calculation End of explanation """ X = (r, th, phi) = symbols('r theta phi') s3d = Ga('e_r e_theta e_phi', g=[1, r ** 2, r ** 2 * sin(th) ** 2], coords=X, norm=True) (er, eth, ephi) = s3d.mv() grad = s3d.grad f = s3d.mv('f', 'scalar', f=True) A = s3d.mv('A', 'vector', f=True) B = s3d.mv('B', 'bivector', f=True) r th phi er eth ephi grad f A B grad*f (grad|A).simplify() -s3d.I()*(grad^A) B|(eth^ephi) btp = B|(eth^ephi) str(btp) from printer import print_latex print_latex(btp) import printer """ Explanation: test_derivatives_in_spherical_coordinates End of explanation """ gp = printer.GaLatexPrinter({}) gp.doprint(btp) printer.GaLatexPrinter.split_super_sub(str(btp)) gp._print_Mv(btp) btp.Mv_latex_str() btp.components() str(btp.components()[0]) btp.Ga.blades_lst btp.Ga.blades_to_grades_dict printer.ostr(btp) btp.Mv_str() str(btp.base_rep()) btp.blade_coefs() btp.blade_coefs()[0] gp.latex(btp.blade_coefs()[0]) str(btp.blade_rep()) btp.coords btp.dual_mode_lst btp.grades (mode, name_lst, supers_lst, subs_lst) = printer.GaLatexPrinter.split_super_sub(str(btp)) mode name_lst supers_lst GaLatexPrinter = printer.GaLatexPrinter def translate(s): tmp = s parse_dict = {} i_sub = 1 for glyph in GaLatexPrinter.special_alphabet: if glyph in tmp: parse_sym = '????' + str(i_sub) parse_dict[parse_sym] = '\\' + glyph + ' ' tmp = tmp.replace(glyph, parse_sym) print tmp for parse_sym in parse_dict: tmp = tmp.replace(parse_sym, parse_dict[parse_sym]) for glyph in GaLatexPrinter.greek_translated: if glyph in tmp: tmp = tmp.replace(glyph, GaLatexPrinter.greek_translated[glyph]) return tmp translate('thetaphi') def translate_corrected(s): tmp = s parse_dict = {} i_sub = 1 for glyph in printer.GaLatexPrinter.special_alphabet: if glyph in tmp: parse_sym = '????' 
+ str(i_sub) i_sub += 1 parse_dict[parse_sym] = '\\' + glyph + ' ' tmp = tmp.replace(glyph, parse_sym) print tmp for parse_sym in parse_dict: tmp = tmp.replace(parse_sym, parse_dict[parse_sym]) for glyph in GaLatexPrinter.greek_translated: if glyph in tmp: tmp = tmp.replace(glyph, GaLatexPrinter.greek_translated[glyph]) return tmp translate_corrected('thetaphi') grad str(grad) printer.GaLatexPrinter.split_super_sub(str(grad)) translate_corrected('e_theta*1/r*D{theta}') translate('e_theta*1/r*D{theta}') print translate(str(grad)) """ Explanation: ```py def latex(expr, **settings): return GaLatexPrinter(settings).doprint(expr) def print_latex(expr, settings): """Prints LaTeX representation of the given expression.""" print latex(expr, settings) ``` End of explanation """ cpns = (gr, gt, gph) = grad.components() cpns gp._print_Dop(gt) gt.Dop_latex_str() gt.terms def Dop_latex_str(self): if len(self.terms) == 0: return ' 0 ' self.consolidate_coefs() mv_terms = self.Dop_mv_expand(modes=simplify) s = '' for (sdop, base) in mv_terms: str_sdop = str(sdop) if base == S(1): s += str_sdop else: if str_sdop == '1': s += str(base) if str_sdop == '-1': s += '-' + str(base) if str_sdop[1:] != '1': s += ' ' + str_sdop[1:] else: if len(sdop.terms) > 1: if self.cmpflg: s += r'\left ( ' + str_sdop + r'\right ) ' + str(base) else: s += str(base) + ' ' + r'\left ( ' + str_sdop + r'\right ) ' else: if str_sdop[0] == '-' and not isinstance(sdop.terms[0][0], Add): if self.cmpflg: s += str_sdop + str(base) else: s += '-' + str(base) + ' ' + str_sdop[1:] else: if self.cmpflg: s += str_sdop + ' ' + str(base) else: s += str(base) + ' ' + str_sdop s += ' + ' s = s.replace('+ -','-') # Sdop.str_mode = False return s[:-3] Dop_latex_str(gt) terms = gt.Dop_mv_expand(modes=simplify) terms def Dop_latex_str_corrected(self): if len(self.terms) == 0: return ' 0 ' self.consolidate_coefs() mv_terms = self.Dop_mv_expand(modes=simplify) s = '' for (sdop, base) in mv_terms: str_base = printer.latex(base) str_sdop = printer.latex(sdop) print str_base print str_sdop if base == S(1): s += str_sdop else: if str_sdop == '1': s += str_base if str_sdop == '-1': s += '-' + str_base if str_sdop[1:] != '1': s += ' ' + str_sdop[1:] else: if len(sdop.terms) > 1: if self.cmpflg: s += r'\left ( ' + str_sdop + r'\right ) ' + str_base else: s += str_base + ' ' + r'\left ( ' + str_sdop + r'\right ) ' else: if str_sdop[0] == '-' and not isinstance(sdop.terms[0][0], Add): if self.cmpflg: s += str_sdop + str_base else: s += '-' + str_base + ' ' + str_sdop[1:] else: if self.cmpflg: s += str_sdop + ' ' + str_base else: s += str_base + ' ' + str_sdop s += ' + ' s = s.replace('+ -','-') # Sdop.str_mode = False return s[:-3] print Dop_latex_str_corrected(gt) """ Explanation: $$ e_rD{r} + e_\phi 1/rD{\phi } + e_\phi 1/(rsin(\phi ))D{\phi } $$ End of explanation """ print translate_corrected(str(grad)) """ Explanation: $$ \boldsymbol{e}_{\theta } 1/r D{theta} $$ End of explanation """ (gt0, gt1) = terms[0] gt0 print translate_corrected(str(gt0)) gp._print_Sdop(gt0) """ Explanation: $$ e_rD{r} + e_\theta 1/rD{\theta } + e_\phi 1/(rsin(\theta ))D{\phi } $$ End of explanation """
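A brief summary of the two fixes worked out above (my reading of the cells, not upstream galgebra documentation): translate_corrected only adds the missing i_sub += 1 increment, so each special glyph gets its own placeholder instead of overwriting the previous one, and Dop_latex_str_corrected passes both the base and the scalar differential operator through printer.latex() before concatenating them. The operator being printed is simply the gradient in normalized spherical coordinates, which in conventional notation reads

$$ \nabla = \boldsymbol{e}_{r}\frac{\partial}{\partial r} + \boldsymbol{e}_{\theta}\,\frac{1}{r}\frac{\partial}{\partial \theta} + \boldsymbol{e}_{\phi}\,\frac{1}{r\sin\theta}\frac{\partial}{\partial \phi}, $$

matching the corrected translation printed above.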
julienchastang/unidata-python-workshop
notebooks/Bonus/Siphon_XARRAY_Cartopy_HRRR.ipynb
mit
import matplotlib.pyplot as plt import numpy as np %matplotlib inline # Resolve the latest HRRR dataset from siphon.catalog import get_latest_access_url hrrr_catalog = "http://thredds.ucar.edu/thredds/catalog/grib/NCEP/HRRR/CONUS_2p5km/catalog.xml" latest_hrrr_ncss = get_latest_access_url(hrrr_catalog, "NetcdfSubset") # Set up access via NCSS from siphon.ncss import NCSS ncss = NCSS(latest_hrrr_ncss) # Create a query to ask for all times in netcdf4 format for # the Temperature_surface variable, with a bounding box query = ncss.query() query.all_times().accept('netcdf4').variables('Temperature_height_above_ground') query.lonlat_box(north=45, south=41, east=-67, west=-77) # Get the raw bytes and write to a file. data = ncss.get_data_raw(query) with open('test.nc', 'wb') as outf: outf.write(data) """ Explanation: <div style="width:1000 px"> <div style="float:right; width:98 px; height:98px;"> <img src="https://raw.githubusercontent.com/Unidata/MetPy/master/metpy/plots/_static/unidata_150x150.png" alt="Unidata Logo" style="height: 98px;"> </div> <h1>Extract HRRR data using Unidata's Siphon package and Xarray</h1> <h3>Unidata Python Workshop</h3> <div style="clear:both"></div> </div> <hr style="height:2px;"> End of explanation """ import xarray as xr nc = xr.open_dataset('test.nc') nc var='Temperature_height_above_ground' ncvar = nc[var] ncvar grid = nc[ncvar.grid_mapping] grid lon0 = grid.longitude_of_central_meridian lat0 = grid.latitude_of_projection_origin lat1 = grid.standard_parallel earth_radius = grid.earth_radius """ Explanation: Try reading extracted data with Xarray End of explanation """ import cartopy import cartopy.crs as ccrs #cartopy wants meters, not km x = ncvar.x.data*1000. y = ncvar.y.data*1000. #globe = ccrs.Globe(ellipse='WGS84') #default globe = ccrs.Globe(ellipse='sphere', semimajor_axis=grid.earth_radius) crs = ccrs.LambertConformal(central_longitude=lon0, central_latitude=lat0, standard_parallels=(lat0,lat1), globe=globe) print(ncvar.x.data.shape) print(ncvar.y.data.shape) print(ncvar.data.shape) # find the correct time dimension name for d in ncvar.dims: if "time" in d: timevar = d nc[timevar].data[6] istep = 6 fig = plt.figure(figsize=(12,8)) ax = plt.axes(projection=ccrs.PlateCarree()) mesh = ax.pcolormesh(x,y,ncvar[istep,::].data.squeeze(), transform=crs,zorder=0) ax.coastlines(resolution='10m',color='black',zorder=1) gl = ax.gridlines(draw_labels=True) gl.xlabels_top = False gl.ylabels_right = False plt.title(nc[timevar].data[istep]); """ Explanation: Try plotting the LambertConformal data with Cartopy End of explanation """
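To browse more than one forecast hour from the same subset request, a small illustrative extension of the plotting cell above (not part of the original workshop notebook) loops over a few time steps. It assumes x, y, ncvar, crs, nc and timevar are still defined, and that the chosen range does not exceed the number of forecast times returned by NCSS.

```python
# Sketch: plot several forecast times from the same NCSS subset.
import matplotlib.pyplot as plt
import cartopy.crs as ccrs

for istep in range(0, 9, 4):   # adjust to the number of available times
    fig = plt.figure(figsize=(12, 8))
    ax = plt.axes(projection=ccrs.PlateCarree())
    ax.pcolormesh(x, y, ncvar[istep, ::].data.squeeze(), transform=crs, zorder=0)
    ax.coastlines(resolution='10m', color='black', zorder=1)
    ax.set_title(str(nc[timevar].data[istep]))
```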
Duke-GCB/cwl-freezer
cwl-freezing.ipynb
mit
workflow = parse('/Users/dcl9/Code/python/mmap-cwl/mmap.cwl') """ Explanation: Questions Could this be a CWL compiler? WIll it take a root document and return the whole structure? Can I find the dockerRequirement anywhere in the doc? Can I find the dockerRequirement using the schema? 1. CWL Docker Compiler What does that mean? Abstractly, that it would read an input document, look for all docker requirements and hints, pull the images, and then write a shell script to reload everything 2. Root document and return whole structure? End of explanation """ # This function will find dockerImageId anyhwere in the tree def find_key(d, key, path=[]): if isinstance(d, list): for i, v in enumerate(d): for f in find_key(v, key, path + [str(i)]): yield f elif isinstance(d, dict): if key in d: pathstring = '/'.join(path + [key]) yield pathstring for k, v in d.items(): for f in find_key(v, key, path + [k]): yield f # Could adapt to find class: DockerRequirement instead for x in find_key(workflow, 'dockerImageId'): print x, dpath.util.get(workflow, x) dpath.util.get(workflow, 'steps/0/run/steps/0/run/hints/0') """ Explanation: Yes, that works End of explanation """ def image_names(workflow): image_ids = [] for x in find_key(workflow, 'dockerImageId'): image_id = dpath.util.get(workflow, x) if image_id not in image_ids: image_ids.append(image_id) return image_ids image_names(workflow) import docker def docker_hashes(image_ids): for name in image_ids: print name docker_hashes(image_names(workflow)) """ Explanation: extract docker image names End of explanation """ %%sh eval $(docker-machine env default) import docker_io images = get_image_metadata(client, 'dukegcb/xgenovo') for img in images: write_image(client, img, '/tmp/images') md """ Explanation: Docker IO Query docker for the sha of the docker image id End of explanation """
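The docker_hashes stub above only prints the image names. One possible way to finish the "query docker for the sha" step — sketched here against the older docker-py Client API and not taken from the cwl-freezer source — is to inspect each image and record its Id digest:

```python
# Sketch: map image names to locally-known docker image IDs.
# Assumes a reachable docker daemon and that the images have already been pulled.
import docker

def docker_hashes(image_ids, base_url='unix://var/run/docker.sock'):
    client = docker.Client(base_url=base_url)   # docker.APIClient in docker-py >= 2.0
    hashes = {}
    for name in image_ids:
        hashes[name] = client.inspect_image(name)['Id']
    return hashes

docker_hashes(image_names(workflow))
```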
rcrehuet/Python_for_Scientists_2017
notebooks/Pandas_Github_Day3.ipynb
gpl-3.0
data_file = 'usagov_bitly_data2012-03-16-1331923249.txt'
file = open(data_file)
file.readline()
"""
Explanation: Title: Data Analysis with Python: Overview of Pandas
Author: Fermín Huarte Larrañaga
Created: 2015
Version: 2.0
Date: June 2017
Bibliography
This IPython Notebook is based almost completely on:
"Python for Data Analysis" by Wes McKinney, Ed. O'Reilly, 2012.
Online resources:
Data Analysis with Python: Overview of Pandas
The aim of this session is to have a first experience with tools that should allow you to manipulate, process, clean, and crunch data using Python. By "data" we are referring to structured data such as:
- multidimensional arrays
- tabular or spreadsheet-like data
- time series (not necessarily evenly spaced!)
- multiple data related by key columns
Why Python when analyzing data?
There is a considerable number of alternatives when it comes to analyzing large sets of data, such as R, MATLAB, SAS, Stata, and others. Efficient as they may be, they are often restricted to a small area of application. The versatility of Python and the growing community of Python users in the scientific domain have produced a remarkable improvement in its library support in recent years, making it a strong competitor for data manipulation tasks. Added to Python's strength as a general-purpose programming language, this makes it an excellent choice as a single platform for developing a data analysis application.
Essential libraries we will be using:
- NumPy
- pandas (new!)
- matplotlib
- IPython
- SciPy
Whetting our appetite
Before learning the basics of data analysis with pandas, we will emulate the author of the pandas module and start by running a not-so-simple example. Do not try to fully understand the instructions given in the next cells; they will be introduced throughout the session. Relax, try to follow the logic, and enjoy! ;-)
Data from a URL shortening service
In 2011, the URL shortening service bit.ly partnered with the US government website usa.gov to provide a feed of anonymous data gathered from users who shorten links ending with .gov or .mil. This data (updated daily with hourly snapshots) can be downloaded as text files. Each line in the hourly snapshot data file contains a JSON (JavaScript Object Notation) form. We will work with this data first using standard built-in Python, then the collections module, and finally pandas.
Standard Python
The following lines will open the file and display its contents. Please download this data file and run the cell.
End of explanation
"""
import json
data_file = 'usagov_bitly_data2012-03-16-1331923249.txt'
records = [json.loads(line) for line in open(data_file)]
"""
Explanation: Thanks to the json module, the variable records is now a list of dictionaries imported from the JSON forms.
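Each element of records behaves like an ordinary Python dict, so individual fields can be pulled out directly — for instance (an illustrative check, assuming the file loaded cleanly):

```python
records[0]['tz']   # the time-zone string of the first entry
```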
End of explanation """ import collections print("First counter is of ", type(counter)) counter = collections.Counter(counter) #generate an instance to the Counter class using our counter variable print("Now counter is of ", type(counter)) #The Counter class has new useful functionalities counter.most_common(10) """ Explanation: Let us have a look at the first element: As said before, this is a typical dictionary structure with keys and values. Find out the keys: Find out the value for the key 'tz' in this first record: In this case (and we were not supposed to know this) 'tz' stands for time zone. Suppose we are interested in identifying the most commonly found time zones in the set of data we just imported. Surely, each one of you will find a different way to work around it. First, we want to obtain a list of all time zones found in the list, name the list as list_of_timezones: Check the length of list_of_timezones and, for instance its first ten elements: Try to think of an algorithm to count the occurences of the different timezones (including the blank field ' '). Hint: You might want to use a dictionary to store the occurence (If you can't solve it, follow this link for a possible solution) How often does 'America/Sao_Paulo' appear? How many different timezones are there? Find out the top 10 time zones (sample code) Collections module The Python standard library provides the collections module that contains the collections.Counter class. This does the job that we just made but in a nicer way: End of explanation """ import numpy as np import pandas as pd from pandas import Series, DataFrame myframe = DataFrame(records) myframe """ Explanation: The pandas alternative Now, let us do the same work using pandas. The main pandas data structure is the DataFrame. It can be seen as a representation of a table or spreadsheet of data. First, we will create the DataFrame from the original data file: End of explanation """ type(myframe) """ Explanation: myframe is now a DataFrame, a class introduced by pandas to efficiently work with structured data. Check the type: End of explanation """ type(myframe['tz']) """ Explanation: The DataFrame object is composed of Series (another pandas object), They can be seen as the columns of a spreadsheet. For instance, myframe['tz']. End of explanation """ tz_counter = myframe['tz'].value_counts() """ Explanation: Check the time zones ('tz') in the first ten records of myframe. The Series object has a useful method: value_counts: End of explanation """ # This line configures matplotlib to show figures embedded in the notebook, # instead of opening a new window for each figure. %matplotlib inline """ Explanation: In one line of code, all the timezones are grouped and accounted for. Check the result and get the top 5 time zones: Much few lines of work, right? As we have said repeatedly, there is no need to reinvent the wheel. Probably someone out there solved your problem before you ran into it and, unless you are really really good, that solution is probably better than yours! ;-) Next, we might want to plot the data using the matplotlib library. End of explanation """ tz_counter[:10].plot(kind='barh', rot=0) """ Explanation: Pandas can call matplotlib directly without calling the module explicitly. 
We will make a horizontal bar plot of the top ten time zones:
End of explanation
"""
tz_counter[:10].plot(kind='barh', rot=0)
"""
Explanation: It is kind of odd to realize that the second most popular time zone has a blank label. The DataFrame object of pandas has a fillna function that can replace missing (NA) values or empty strings:
End of explanation
"""
#First we generate a new Series from a column of myframe; when a value is NaN, insert the word 'Missing'
clean_tz = myframe['tz'].fillna('Missing')
#In this new Series replace EMPTY VALUES with the word 'Unknown'. Try to understand BOOLEAN INDEXING
clean_tz[clean_tz == ''] = 'Unknown'
#Use the method VALUE_COUNTS to generate a new Series containing time zones and occurrences
tz_counter = clean_tz.value_counts()
#Finally, plot the top ten values
tz_counter[:10].plot(kind='barh', rot=0)
"""
Explanation: Let's complicate the example a bit more. The 'a' field in the dataset contains information on the browser used to perform the URL shortening. For example, check the content of the 'a' field in the first record of myframe:
Let's generate a Series from the dataset containing all the browser data: try to understand the following line. A good strategy might be to work it out by pieces:
python
myframe.a
myframe.a.dropna()
End of explanation
"""
browser = Series([x.split()[0] for x in myframe.a.dropna()])
"""
Explanation: As we did with the time zones, we can use the value_counts method on the browser Series to see the most common browsers:
Let's decompose the top time zones into people using Windows and not using Windows. This piece of code requires some more knowledge of pandas; skip the details for now:
End of explanation
"""
cframe = myframe[myframe.a.notnull()]
os = np.where(cframe['a'].str.contains('Windows'),'Windows','Not Windows')
tz_and_os = cframe.groupby(['tz',os])
agg_counter = tz_and_os.size().unstack().fillna(0)
agg_counter[:10]
"""
Explanation: Let's select the top overall time zones. To do so, we construct an indirect index array from the row counts in agg_counter.
End of explanation
"""
#Sort in ascending order
indexer = agg_counter.sum(1).argsort()
indexer[:10]
"""
Explanation: Next, use take to select the rows in that order and then slice off the last 10 rows:
End of explanation
"""
count_subset = agg_counter.take(indexer)[-10:]
count_subset
count_subset.plot(kind='barh', stacked=True)
"""
Explanation: Same plot, but with percentages instead of absolute numbers
End of explanation
"""
subset_normalized = count_subset.div(count_subset.sum(1), axis=0)
subset_normalized.plot(kind='barh', stacked=True)
"""
Explanation: After this example, we will go through the basics of pandas.
DataFrame and Series
The two basic data structures introduced by pandas are DataFrame and Series.
End of explanation
"""
from pandas import Series, DataFrame
import pandas as pd
"""
Explanation: Series
A Series is a one-dimensional array-like object containing an array of data (any NumPy data type is fine) and an associated array of data labels, called the index. The simplest Series one can think of would be formed only by an array of data:
End of explanation
"""
o1 = Series([-4, 7, 11, 13, -22])
o1
"""
Explanation: Notice that the representation of the Series shows the index on the left and the values on the right. No index was specified when the Series was created, so a default one has been assigned: integer numbers from 0 to N-1 (N being the length of the data array).
End of explanation """ o2 = Series([7, 0.2, 11.3, -5], index=['d','e','a','z']) o2 o2.index """ Explanation: If we need to specify the index: End of explanation """ o2['e'] """ Explanation: Unlike NumPy arrays, we can use values in the index when selecting single values from a set of values: End of explanation """ o2.e """ Explanation: The following is also equivalent: End of explanation """ o2[['z','d']] """ Explanation: Values correspondong to two indices (notice double square brackets!): End of explanation """ o2[o2 > 0] #filter positive elements in o2, the indices are conserved. Compare with the same operation in a NumPy array!!! o2*np.pi """ Explanation: NumPy array operations, such as masking using a boolean array, scalar broadcasting, or applying mathematical functions, preserve the index-value link: End of explanation """ 'z' in o2 """ Explanation: Pandas Series have also been described as a fixed.length, ordered dictionary, since it actually maps index values to data values. Many functions that expect a dict can be used with Series: End of explanation """ pop_data = {'Shanghai': 24150000, 'Karachi': 23500000, 'Beijing': 21516000, 'Tianjin': 14722000, 'Istanbul': 14377000} print ("pop_data is of type ",type(pop_data)) ser1 = Series(pop_data) print("ser1 is of type ",type(ser1)) print("Indices of the Series are: ",ser1.index) print("Values of the Series are: ",ser1.values) """ Explanation: A Python dictionary can be used to create a pandas Series, here is a list of the top 5 most populated cities (2015) according to Wikipedia: End of explanation """ cities = ['Karachi', 'Istanbul', 'Beijing', 'Moscow'] ser2 = Series(pop_data, index=cities) ser2 """ Explanation: As you just checked, when passing the dictionary the resulting Series uses the dict keys as indices and sorts the values corresponding to the index. In the next case we create a Series from a dictionary but selecting the indices we are interested in: End of explanation """ pd.isnull(ser2) ser2.isnull() ser2.notnull() """ Explanation: Note that the values found in pop_data have been placed in the appropiate locations. No data was found for 'Moscow' and value NaN is assigned. This is used in pandas to mark missing or not available (NA) values. In order to detect missing data in pandas, one should use the isnull and notnull (both present as functions and Series methods): End of explanation """ ser1 + ser2 """ Explanation: An important feature of Series to be highlighted here is that Series are automatically aligned when performing arithmetic operations. It doesn't make much sense to add the population data but... End of explanation """ ser1.name = 'population' ser1.index.name = 'city' ser1 """ Explanation: We can assign names to both the Series object and its index using the name attribute: End of explanation """ data = {'city': ['Madrid', 'Madrid','Madrid','Barcelona','Barcelona','Sevilla','Sevilla','Girona','Girona','Girona'], 'year': ['2002', '2006', '2010', '2006', '2010', '2002', '2010', '2002', '2006', '2010'], 'pop': [5478405, 5953604, 6373532, 5221848, 5488633, 1732697, 1902956, 568690, 668911, 741841]} pop_frame = DataFrame(data) pop_frame """ Explanation: DataFrame The DataFrame object represents a tabular, spreadsheet-like data structure containing an ordered collection of columns, each of which can be a different value type. The DataFrame has both a row and column index and it can be seen as a dictionary of Series. 
Under the hood, the data is stored as one or more 2D blocks rather than a list, dict, or some other collection of 1D arrays. See the following example: End of explanation """ DataFrame(data, columns=['year','city','pop']) """ Explanation: The resulting DataFrame has automatically assigned indices and the columns are sorted. This order can be altered if we specify a sequence of colums: End of explanation """ pop_frame2 = DataFrame(data, columns=['year','city','pop','births']) pop_frame2 """ Explanation: You should keep in mind that indices in the DataFrame are Index objects, they have attributes (such as name as we will see) and are immutable. What will happen if we pass a column that is not contained in the data set? End of explanation """ pop_frame2['city'] """ Explanation: A column in a DataFrame can be retrieved as a Series in two ways: using dict-like notation End of explanation """ pop_frame2.city """ Explanation: using the DataFrame attribute End of explanation """ pop_frame2['births'] = 100000 pop_frame2 """ Explanation: Columns can be modified by assignment. Let's get rid of those NA values in the births column: End of explanation """ birth_series = Series([100000, 15000, 98000], index=[0,2,3]) pop_frame2['births'] = birth_series pop_frame2 """ Explanation: When assigning lists or arrays to a column, the values' length must match the length of the DataFrame. If we assign a Series it will be instead conformed exactly to the DataFrame's index, inserting missing values in any holes: End of explanation """ pop_frame2['Catalunya'] = ((pop_frame2.city == 'Barcelona') | (pop_frame2.city == 'Girona')) pop_frame2 del pop_frame2['Catalunya'] pop_frame2.columns """ Explanation: Assigning a column that doesn't exist will result in creating a new column. Columns can be deleted using the del command: End of explanation """ pop_data = {'Madrid': {'2002': 5478405, '2006': 5953604, '2010': 6373532}, 'Barcelona': {'2006': 5221848, '2010': 5488633}, 'Sevilla': {'2002': 1732697, '2010': 1902956}, 'Girona': {'2002': 568690, '2006': 668911, '2010': 741841}} pop_frame3 = DataFrame(pop_data) pop_frame3 """ Explanation: Alternatively, the DataFrame can be built from a nested dict of dicts: End of explanation """ pop_frame3.columns.name = 'city'; pop_frame3.index.name = 'year' pop_frame3 """ Explanation: The outer dict keys act as the columns and the inner keys as the unioned row indices. Possible data inputs to construct a DataFrame: 2D NumPy array dict of arrays, lists, or tuples dict of Series dict of dicts list of dicts or Series List of lists or tuples DataFrames NumPy masked array As in Series, the index and columns in a DataFrame have name attributes: End of explanation """ pop_frame3.values """ Explanation: Similarly, the values attribute returns de data contained in the DataFrame as a 2D array: End of explanation """ #Inverting the row indices and adding some more years years = ['2010', '2008', '2006', '2004', '2002'] pop_frame4 = pop_frame3.reindex(years) """ Explanation: Basic functionality We will not cover all the possible operations using Pandas and the related data structures. We will try to cover some of the basics. Reindexing A critical method in pandas is reindex. This implies creating a new object with the data of a given structure but conformed to a new index. 
For instance: Extract the column of pop_frame3 belonging to Barcelona Check the type of the column, it should be a Series Find out the indices of the Barcelona Series Call reindex on the Barcelona Series to rearrange the data to a new index [2010, 2008, 2006, 2004, 2002], following this example: python obj = Series([4.5, 7.2, -5.3, 3.6], index=['d', 'b', 'a', 'c']) obj2 = obj.reindex(['a', 'b', 'c', 'd', 'e']) The reindex method can be combined with the fill_value= option in the non existing values: python obj2 = obj.reindex(['a', 'b', 'c', 'd', 'e'], fill_value=0) It does'nt make much sense in this case to estimate the non-existing values as zeros. For ordered data such as time series, we can use interpolation or foward/backward filling. In the case of DataFrames, reindex can alter either (row) index, column or both. End of explanation """ cities = ['Madrid', 'Sevilla','Barcelona', 'Girona'] pop_frame4 = pop_frame4.reindex(columns=cities) """ Explanation: If instead we want to reindex the columns, we need to use the columns keyword: End of explanation """ pop_frame4 = pop_frame3.reindex(index = years, columns = cities) """ Explanation: Both at once: End of explanation """ pop_bcn2.drop(['2002', '2008']) """ Explanation: reindex function arguments Dropping entries from an axis From the Barcelona population Series let's get rid of years 2002 and 2008: End of explanation """ pop_bcn2.index.name = 'year' """ Explanation: Check that the object was not modified. In DataFrame, index values can be deleted from both axes. Use the IPython help to find out the use of drop and get rid of all the data related to Madrid and year 2002: Indexing, selection, and filtering Series Indexing in Series works similarly to NumPy array indexing. The main difference is that we can actually use the Serie's index instead of integer numbers End of explanation """ pop_bcn2[:2] pop_bcn2['2002':'2006'] """ Explanation: Population in Barcelona in year 2006? Boolean indexing, non-zero data in Barcelona? Important: Slicing with labels behaves differently than normal Python slicing the endpoint is inclusive! Give it a try: End of explanation """
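The two slices above contrast positional and label-based indexing. For a self-contained check of the inclusive-endpoint behaviour that does not depend on how you built pop_bcn2, a small made-up Series works just as well (illustrative only):

```python
from pandas import Series
s = Series([1, 2, 3, 4], index=['a', 'b', 'c', 'd'])
s[1:3]      # positional: positions 1 and 2 only
s['b':'d']  # label-based: 'b', 'c' AND 'd' are all returned
```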
ds-modules/LINGUIS-110
FormantsUpdated/Assignment.ipynb
mit
# DON'T FORGET TO RUN THIS CELL import math import numpy as np import pandas as pd import seaborn as sns import datascience as ds import matplotlib.pyplot as plt sns.set_style('darkgrid') %matplotlib inline import warnings warnings.filterwarnings('ignore') """ Explanation: Linguistics 110: Vowel Formants Professor Susan Lin In this notebook, we use both data from an outside source and that the class generated to explore the relationships between formants, gender, and height. Table of Contents 1 - Exploring TIMIT Data 2 - Using the Class's Data 3 - Vowel Spaces 4 - Variation in Vowel Spaces 5 - Formants vs Height Remember that to run a cell, you can either click the play button in the toolbar, or you can press shift and enter on your keyboard. To get a quick review of Jupyter notebooks, you can look at the VOT Notebook. Make sure to run the following cell before you get started. End of explanation """ timit = pd.read_csv('data/timitvowels.csv') timit.head() """ Explanation: Exploring TIMIT Data <a id='timit'></a> We will start off by exploring TIMIT data taken from 8 different regions. These measurements are taken at the midpoint of vowels, where vowel boundaries were determined automatically using forced alignment. Uploading the data Prior to being able to work with the data, we have to upload our dataset. The following two lines of code will read in our data and create a dataframe. The last line of code prints the timit dataframe, but instead of printing the whole dataframe, by using the method .head, it only prints the first 5 rows. End of explanation """ IPAdict = {"AO" : "ɔ", "AA" : "ɑ", "IY" : "i", "UW" : "u", "EH" : "ɛ", "IH" : "ɪ", "UH":"ʊ", "AH": "ʌ", "AX" : "ə", "AE":"æ", "EY" :"eɪ", "AY": "aɪ", "OW":"oʊ", "AW":"aʊ", "OY" :"ɔɪ", "ER":"ɚ"} timit['vowel'] = [IPAdict[x] for x in timit['vowel']] timit.head() """ Explanation: Look at the dataframe you created and try to figure out what each column measures. Each column represents a different attribute, see the following table for more information. |Column Name|Details| |---|---| |speaker|unique speaker ID| |gender|Speaker’s self-reported gender| |region|Speaker dialect region number| |word|Lexical item (from sentence prompt)| |vowel|Vowel ID| |duration|Vowel duration (seconds)| |F1/F2/F3/f0|f0 and F1-F3 in BPM (Hz)| Sometimes data is encoded with with an identifier, or key, to save space and simplify calculations. Each of those keys corresponds to a specific value. If you look at the region column, you will notice that all of the values are numbers. Each of those numbers corresponds to a region, for example, in our first row the speaker, cjf0, is from region 1. That corresponds to New England. Below is a table with all of the keys for region. |Key|Region| |---|---| |1|New England| |2|Northern| |3|North Midland| |4|South Midland| |5|Southern| |6|New York City| |7|Western| |8|Army Brat| Transformations When inspecting data, you may realize that there are changes to be made -- possibly due to the representation to the data or errors in the recording. Before jumping into analysis, it is important to clean the data. One thing to notice about timit is that the column vowel contains ARPABET identifiers for the vowels. We want to convert the vowel column to be IPA characters, and will do so in the cell below. 
End of explanation """ timit_avg = timit.groupby(['speaker', 'vowel', 'gender', 'region']).mean().reset_index() timit_avg.head() """ Explanation: Most of the speakers will say the same vowel multiple times, so we are going to average those values together. The end result will be a dataframe where each row represents the average values for each vowel for each speaker. End of explanation """ timit_avg.gender.unique() """ Explanation: Splitting on Gender Using the same dataframe from above, timit_avg, we are going to split into dataframes grouped by gender. To identify the possible values of gender in the gender column, we can use the method .unique on the column. End of explanation """ timit_female = timit_avg[timit_avg['gender'] == 'female'] timit_male = timit_avg[timit_avg['gender'] == 'male'] """ Explanation: You could see that for this specific dataset there are only "female" and "male" values in the column. Given that information, we'll create two subsets based off of gender. We'll split timit_avg into two separate dataframes, one for females, timit_female, and one for males, timit_male. Creating these subset dataframes does not affect the original timit_avg dataframe. End of explanation """ sns.distplot(timit_female['F1'], kde_kws={"label": "female"}) sns.distplot(timit_male['F1'], kde_kws={"label": "male"}) plt.title('F1') plt.xlabel("Hz") plt.ylabel('Proportion per Hz'); """ Explanation: Distribution of Formants We want to inspect the distributions of F1, F2, and F3 for those that self-report as male and those that self-report as female to identify possible trends or relationships. Having our two split dataframes, timit_female and timit_male, eases the plotting process. Run the cell below to see the distribution of F1. End of explanation """ sns.distplot(timit_female['F2'], kde_kws={"label": "female"}) sns.distplot(timit_male['F2'], kde_kws={"label": "male"}) plt.title('F2') plt.xlabel("Hz") plt.ylabel('Proportion per Hz'); """ Explanation: Does there seem to be a notable difference between male and female distributions of F1? Next, we plot F2. End of explanation """ sns.distplot(timit_female['F3'], kde_kws={"label": "female"}) sns.distplot(timit_male['F3'], kde_kws={"label": "male"}) plt.title('F3') plt.xlabel("Hz") plt.ylabel('Proportion per Hz'); """ Explanation: Finally, we create the same visualization, but for F3. End of explanation """ # reading in the data class_data = pd.read_csv('data/110_formants.csv') class_data.head() """ Explanation: Do you see a more pronounced difference across the the different F values? Are they the same throughout? Can we make any meaningful assumptions from these visualizations? An additional question: How do you think the fact that we average each vowel together first for each individual affects the shape of the histograms? Using the Class's Data <a id='cls'></a> This portion of the notebook will rely on the data that was submit for HW5. Just like we did for the TIMIT data, we are going to read it into a dataframe and modify the column vowel to reflect the corresponding IPA translation. We will name the dataframe class_data. End of explanation """ # translating the vowel column class_data['vowel'] = [IPAdict[x] for x in class_data['vowel']] class_data.head() """ Explanation: The ID column contains a unique value for each individual. Each individual has a row for each of the different vowels they measured. 
End of explanation """ class_data['Gender'].unique() """ Explanation: Splitting on Gender As we did with the TIMIT data, we are going to split class_data based on self-reported gender. We need to figure out what the possible responses for the column were. End of explanation """ class_female = class_data[class_data['Gender'] == 'Female'] class_male = class_data[class_data['Gender'] == 'Male'] """ Explanation: Notice that there are three possible values for the column. We do not have a large enough sample size to responsibly come to conclusions for Prefer not to answer, so for now we'll compare Male and Female. We'll call our new split dataframes class_female and class_male. End of explanation """ sns.distplot(class_female['F1'], kde_kws={"label": "female"}) sns.distplot(class_male['F1'], kde_kws={"label": "male"}) plt.title('F1') plt.xlabel("Hz") plt.ylabel('Proportion per Hz'); """ Explanation: Comparing Distributions The following visualizations compare the the distribution of formants for males and females, like we did for the TIMIT data. First, we'll start with F1. End of explanation """ sns.distplot(class_female['F2'], kde_kws={"label": "female"}) sns.distplot(class_male['F2'], kde_kws={"label": "male"}) plt.title('F2') plt.xlabel("Hz") plt.ylabel('Proportion per Hz'); """ Explanation: Next is F2. End of explanation """ sns.distplot(class_female['F3'], kde_kws={"label": "female"}) sns.distplot(class_male['F3'], kde_kws={"label": "male"}) plt.title('F3') plt.xlabel("Hz") plt.ylabel('Proportion per Hz'); """ Explanation: And finally F3. End of explanation """ def plot_blank_vowel_chart(): im = plt.imread('images/blankvowel.png') plt.imshow(im, extent=(plt.xlim()[0], plt.xlim()[1], plt.ylim()[0], plt.ylim()[1])) def plot_vowel_space(avgs_df): plt.figure(figsize=(10, 8)) plt.gca().invert_yaxis() plt.gca().invert_xaxis() vowels = ['eɪ', 'i', 'oʊ', 'u', 'æ', 'ɑ', 'ɚ', 'ɛ', 'ɪ', 'ʊ', 'ʌ'] + ['ɔ'] for i in range(len(avgs_df)): plt.scatter(avgs_df.loc[vowels[i]]['F2'], avgs_df.loc[vowels[i]]['F1'], marker=r"$ {} $".format(vowels[i]), s=1000) plt.ylabel('F1') plt.xlabel('F2') """ Explanation: Do the spread of values appear to be the same for females and males? Do the same patterns that occur in the TIMIT data appear in the class's data? Vowel Spaces <a id='vs'></a> Run the cell below to define some functions that we will be using. End of explanation """ class_vowel_avgs = class_data.drop('ID', axis=1).groupby('vowel').mean() class_vowel_avgs.head() timit_vowel_avgs = timit.groupby('vowel').mean() timit_vowel_avgs.head() """ Explanation: We are going to be recreating the following graphic from this website. Before we can get to creating, we need to get a singular value for each column for each of the vowels (so we can create coordinate pairs). To do this, we are going to find the average formant values for each of the vowels in our dataframes. We'll do this for both timit and class_data. End of explanation """ plot_vowel_space(class_vowel_avgs) plt.xlabel('F2 (Hz)') plt.ylabel('F1 (Hz)'); """ Explanation: Each of these new tables has a row for each vowel, which comprisises of the averaged values across all speakers. Plotting the Vowel Space Run the cell below to construct a vowel space for the class's data, in which we plot F1 on F2. Note that both axes are descending. 
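(The descending axes come from the invert_yaxis and invert_xaxis calls inside plot_vowel_space above.) If you want to see the raw coordinate pair that will be plotted for a single vowel, you can pull it straight from the averaged table — an illustrative peek, assuming the vowel 'i' ended up in the index:

```python
# (F2, F1) pair used to place the vowel 'i' in the space
class_vowel_avgs.loc['i', ['F2', 'F1']]
```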
End of explanation """ log_timit_vowels = timit_vowel_avgs.apply(np.log) log_class_vowels = class_vowel_avgs.apply(np.log) class_data['log(F1)'] = np.log(class_data['F1']) class_data['log(F2)'] = np.log(class_data['F2']) log_class_vowels.head() """ Explanation: Using Logarithmic Axes In our visualization above, we use linear axes in order to construct our vowel space. The chart we are trying to recreate has logged axes (though the picture does not indicate it). Below we log-transform all of the values in our dataframes. End of explanation """ plot_vowel_space(log_class_vowels) plt.xlabel('log(F2) (Hz)') plt.ylabel('log(F1) (Hz)'); """ Explanation: Below we plot the vowel space using these new values. End of explanation """ plot_vowel_space(log_class_vowels) plot_blank_vowel_chart() plt.xlabel('log(F2) (Hz)') plt.ylabel('log(F1) (Hz)'); """ Explanation: What effect does using the logged values have, if any? What advantages does using these values have? Are there any negatives? This paper might give some ideas. Overlaying a Vowel Space Chart Finally, we are going to overlay a blank vowel space chart outline to see how close our data reflects the theoretical vowel chart. End of explanation """ plot_vowel_space(log_timit_vowels) plot_blank_vowel_chart() plt.xlabel('log(F2) (Hz)') plt.ylabel('log(F1) (Hz)'); """ Explanation: How well does it match the original? Below we generate the same graph, except using the information from the TIMIT dataset. End of explanation """ sns.lmplot('log(F2)', 'log(F1)', hue='vowel', data=class_data, fit_reg=False, size=8, scatter_kws={'s':30}) plt.xlim(8.2, 6.7) plt.ylim(7.0, 5.7); """ Explanation: How does the TIMIT vowel space compare to the vowel space from our class data? What may be the cause for any differences between our vowel space and the one constructed using the TIMIT data? Do you notice any outliers or do any points that seem off? Variation in Vowel Spaces <a id='vvs'></a> In the following visualizations, we are going to show each individual vowel from each person in the F2 and F1 dimensions (logged). Each color corresponds to a different vowel -- see the legend for the exact pairs. End of explanation """ plt.figure(figsize=(10, 12)) pick_vowel = lambda v: class_data[class_data['vowel'] == v] colors = ['Greys_r', 'Purples_r', 'Blues_r', 'Greens_r', 'Oranges_r', \ 'Reds_r', 'GnBu_r', 'PuRd_r', 'winter_r', 'YlOrBr_r', 'pink_r', 'copper_r'] for vowel, color in list(zip(class_data.vowel.unique(), colors)): vowel_subset = pick_vowel(vowel) sns.kdeplot(vowel_subset['log(F2)'], vowel_subset['log(F1)'], n_levels=1, cmap=color, shade=False, shade_lowest=False) for i in range(1, len(class_data)+1): plt.scatter(class_data['log(F2)'][i], class_data['log(F1)'][i], color='black', linewidths=.5, marker=r"$ {} $".format(class_data['vowel'][i]), s=40) plt.xlim(8.2, 6.7) plt.ylim(7.0, 5.7); """ Explanation: In the following visualization, we replace the colors with the IPA characters and attempt to clump the vowels together. 
End of explanation """ genders = class_data['Gender'] plotting_data = class_data.drop('vowel', axis=1)[np.logical_or(genders == 'Male', genders == 'Female')] maxes = plotting_data.groupby(['ID', 'Gender']).max().reset_index()[plotting_data.columns[:-2]] maxes.columns = ['ID', 'Language', 'Gender', 'Height', 'Max F1', 'Max F2', 'Max F3'] maxes_female = maxes[maxes['Gender'] == 'Female'] maxes_male = maxes[maxes['Gender'] == 'Male'] maxes.head() """ Explanation: Formants vs Height <a id='fvh'></a> We are going to compare each of the formants and height to see if there is a relationship between the two. To help visualize that, we are going to plot a regression line, which is also referred to as the line of best fit. We are going to use the maximum of each formant to compare to height. So for each speaker, we will calculate their greatest F1, F2, and F3 across all vowels, then compare one of those to their height. We create the necessary dataframe in the cell below using the class's data. End of explanation """ sns.regplot('Height', 'Max F1', data=maxes) sns.regplot('Height', 'Max F1', data=maxes_male, fit_reg=False) sns.regplot('Height', 'Max F1', data=maxes_female, fit_reg=False) plt.xlabel('Height (cm)') plt.ylabel('Max F1 (Hz)') print('female: green') print('male: orange') """ Explanation: First we will plot Max F1 against Height. Note: Each gender has a different color dot, but the line represents the line of best fit for ALL points. End of explanation """ sns.regplot('Height', 'Max F2', data=maxes) sns.regplot('Height', 'Max F2', data=maxes_male, fit_reg=False) sns.regplot('Height', 'Max F2', data=maxes_female, fit_reg=False) plt.xlabel('Height (cm)') plt.ylabel('Max F2 (Hz)') print('female: green') print('male: orange') """ Explanation: Is there a general trend for the data that you notice? What do you notice about the different color dots? Next, we plot Max F2 on Height. End of explanation """ sns.regplot('Height', 'Max F3', data=maxes) sns.regplot('Height', 'Max F3', data=maxes_male, fit_reg=False) sns.regplot('Height', 'Max F3', data=maxes_female, fit_reg=False) plt.xlabel('Height (cm)') plt.ylabel('Max F3 (Hz)') print('female: green') print('male: orange') """ Explanation: Finally, Max F3 vs Height. End of explanation """ sns.lmplot('Height', 'Max F1', data=maxes, hue='Gender') plt.xlabel('Height (cm)') plt.ylabel('Max F1 (Hz)'); """ Explanation: Do you notice a difference between the trends for the three formants? Now we are going to plot two lines of best fit -- one for males, one for females. Before we plotted one line for all of the values, but now we are separating by gender to see if gender explains some of the difference in formants values. For now, we're going deal with just Max F1. End of explanation """ timit_maxes = timit.groupby(['speaker', 'gender']).max().reset_index() timit_maxes.columns = ['speaker', 'gender', 'region', 'height', 'word', 'vowel', 'Max duration', 'Max F1', 'Max F2', 'Max F3', 'Max f0'] plt.xlim(140, 210) plt.ylim(500, 1400) sns.regplot('height', 'Max F1', data=timit_maxes[timit_maxes['gender'] == 'female'], scatter_kws={'alpha':0.3}) sns.regplot('height', 'Max F1', data=timit_maxes[timit_maxes['gender'] == 'male'], scatter_kws={'alpha':0.3}) sns.regplot('height', 'Max F1', data=timit_maxes, scatter=False) plt.xlabel('Height (cm)') plt.ylabel('Max F1 (Hz)'); """ Explanation: Is there a noticeable difference between the two? Did you expect this result? 
We're going to repeat the above graph, plotting a different regression line for males and females, but this time, using timit -- having a larger sample size may help expose patterns. Before we do that, we have to repeat the process of calulating the maximum value for each formants for each speaker. Run the cell below to do that and generate the plot. The blue dots are females, the orange dots are males, and the green line is the regression line for all speakers. End of explanation """
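With the much larger TIMIT sample, the two within-gender clouds separate more clearly from the pooled regression line. One quick, illustrative way (not part of the original assignment) to ask how much of the pooled height–F1 trend is carried by the gender difference itself is to compare the pooled correlation with the within-gender ones:

```python
# Pooled vs. within-gender correlation of height with Max F1 (illustrative).
print(timit_maxes['height'].corr(timit_maxes['Max F1']))
print(timit_maxes.groupby('gender').apply(lambda g: g['height'].corr(g['Max F1'])))
```

If the within-gender correlations are much weaker than the pooled one, most of the apparent relationship reflects the difference between the gender groups rather than height itself.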
StephenHarrington/bitcoin-examples
Regtest_RPC.ipynb
mit
#!/bin/bash #regtest_start_network.sh import os import shutil #os.system("killall --regex bitcoin.*") idir = os.environ['HOME']+'/regtest' if os.path.isdir(idir): shutil.rmtree(idir) os.mkdir(idir) connects = {'17591' : '17592', '17592' : '17591'} for port in connects.keys(): adir = idir+'/'+port os.mkdir(adir) args = " -server -txindex=1 -listen -port=" + port + \ " -rpcuser=bitcoinrpc -rpcpassword=P0 -rpcport="+\ str(int(port)-1000) +\ " -datadir=" + adir + " -connect=localhost:" + connects[port] +\ " -regtest -pid="+port+".pid -daemon -debug" os.system("bitcoind" + args) """ Explanation: Bitcoin API This notebook takes the Bitcoin Developers Reference, specifically the Bitcoin API, and demonstrates the use of the Remote Procedure Calls (RPC) via python. We employ the regression testing mode of bitcoin, regtest, to start with a clean blockchain. The notebook orders the RPC methods to illustrate how to use the API. In the Bitcoin Developers Reference the RPC methods are in alphabetical order while the python example code below is in discovery sequence, starting with creating the genesis block, mining our first bitcoins, sending some to another user and showing updates to the blockchain throughout the process. The notebook may be useful for beginners looking for a simple python interface to the Bitcoin API; we use the python bitcoin module created by Peter Todd for its simplicity. As well, the value to the user who is new to python is a simple step by step walk thru of the various methods in the API in an incremental introduction without the confusion of thousands of transactions in a typical block on mainnet. We create two users on the regression test network (regtest): Bob on RPC port 16591 Mary on RPC port 16592. We create independent peers on the same network. Handy shell command to see current nodes: ps aux |grep bitcoin We could add as many nodes to regtest as we like (upper limit?) and connect them in a round-robin way so that node 1 listens to node 2... and node N listens to node 1. For our examples here, we use just two nodes in the dictionary connects. The python script below is designed for Linux (Ubuntu) and would need to be modified for other platforms. End of explanation """ import bitcoin import bitcoin.rpc import bitcoin.core import bitcoin.core.script """ Explanation: Import the python bitcoin module created by Peter Todd. End of explanation """ bitcoin.SelectParams('regtest') Bob = bitcoin.rpc.RawProxy("http://bitcoinrpc:P0@127.0.0.1:16591") Mary = bitcoin.rpc.RawProxy("http://bitcoinrpc:P0@127.0.0.1:16592") info = Bob.getinfo() for key in info.keys(): print key + ' : ' + str(info[key]) """ Explanation: Connect nodes to bitcoin-rpc module We hard-code the username and passwords here for pedagogical ease. We display the blockchain information using the widely used, but deprecated getinfo() RPC. getinfo() returns: version - The version number of this bitcoin-qt or bitcoind program itself. Both of are equivalent. -qt is simply the graphical user interface version protocolversion: The version of the bitcoin network protocol supported by this client (user agent software). walletversion: The version of the wallet.dat file. Wallet.dat contains bitcoin addresses and public & private key pairs for these addresses. There is additional data on the wallet. Care must be taken to not restore from an old wallet backup. New addresses generated in the wallet since the old backup was made will not exist in the old backup! 
Source balance: The total number of bitcoins held in the wallet.dat file. blocks: The total number of blocks which constitute the shared block chain. timeoffset: Seconds of difference between this node's "wall clock time" and the median reported by our network peers. connections: the number of peers on the bitcoin P2P network that this node is connected to. proxy: If using a proxy to connect to the network, listed here, otherwise blank. difficulty: the current mining difficulty factor. Difficulty is increased as more miners and more hash compute power compete to be the next one to have a block of transactions added to the blockchain. testnet: Boolean value (true OR false). There is a parallel bitcoin network, the testnet, where trials and experiments may be carried out without impacting the official, live bitcoin P2P network keypoololdest: timestamp (UNIX epoch) of the oldest key in the keypool keypoolsize: A number of addresses are kept in reserve by the client. This is the size of that reserve. paytxfee: Specifies what fee the client is willing to pay to expedite transactions. Miners may choose to ignore transactions that do not pay a fee, and these fee-less transactions will have low priority on queue of pending transaction and may linger there for hours. errors: This field may inform of different status conditions. Full list of error codes in source file bitcoinrpc.h (Examples: "Bitcoin not connected", "database error", "Keypool ran out"...) End of explanation """ getnetworkinfo = Bob.getnetworkinfo() print '\ngetnetworkinfo\n' print getnetworkinfo getpeerinfo = Bob.getpeerinfo() print '\ngetpeerinfo\n' print getpeerinfo getconnectioncount = Bob.getconnectioncount() print '\ngetconnectioncount\n' print getconnectioncount getnettotals = Bob.getnettotals() print '\ngetnettotals\n' print getnettotals """ Explanation: Network attributes The following RCPs are used to determine the network attributes. We highlight the informational RPCs and leave the AddNode and GetAddedNodeInfo for later investigation. Network RPCs AddNode: attempts to add or remove a node from the addnode list, or to try a connection to a node once. GetAddedNodeInfo: returns information about the given added node, or all added nodes (except onetry nodes). Only nodes which have been manually added using the addnode RPC will have their information displayed. GetConnectionCount: returns the number of connections to other nodes. GetNetTotals: returns information about network traffic, including bytes in, bytes out, and the current time. GetNetworkInfo: returns information about the node’s connection to the network. New in 0.9.2, Updated in 0.10.0 GetPeerInfo: returns data about each connected network node. Updated in 0.10.0 Ping: sends a P2P ping message to all connected nodes to measure ping time. Results are provided by the getpeerinfo RPC pingtime and pingwait fields as decimal seconds. The P2P ping message is handled in a queue with all other commands, so it measures processing backlog, not just network ping. 
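Because RawProxy forwards any method call as a JSON-RPC request, the ping measurement described above can be scripted directly. A short illustrative snippet (not in the original notebook; pingtime only appears once the pong has come back):

```python
Bob.ping()
for peer in Bob.getpeerinfo():
    print("{} pingtime={}".format(peer['addr'], peer.get('pingtime')))
```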
End of explanation """ blockchaininfo = Bob.getblockchaininfo() getblockcount = Bob.getblockcount() getbestblockhash = Bob.getbestblockhash() getdifficulty = Bob.getdifficulty() getchaintips = Bob.getchaintips() getmempoolinfo = Bob.getmempoolinfo() print '\nblockchaininfo\n' print blockchaininfo print '\ngetblockcount ' + str(getblockcount) print '\ngetbestblockhash\n' print getbestblockhash print '\ngetdifficulty ' + str(getdifficulty) print '\ngetchaintips\n' print getchaintips print '\ngetmempoolinfo\n' print getmempoolinfo print '\n\n' bestblockhash = blockchaininfo['bestblockhash'] blocks = blockchaininfo['blocks'] print '\nblocks = ' + str(blocks) print '\nbestblockhash = ' + str(bestblockhash) """ Explanation: Block Chain RPCs GetBestBlockHash: returns the header hash of the most recent block on the best block chain. New in 0.9.0 GetBlock: gets a block with a particular header hash from the local block database either as a JSON object or as a serialized block. GetBlockChainInfo: provides information about the current state of the block chain. New in 0.9.2, Updated in 0.10.0 GetBlockCount: returns the number of blocks in the local best block chain. GetBlockHash: returns the header hash of a block at the given height in the local best block chain. GetChainTips: returns information about the highest-height block (tip) of each local block chain. New in 0.10.0 GetDifficulty: returns the proof-of-work difficulty as a multiple of the minimum difficulty. GetMemPoolInfo: returns information about the node’s current transaction memory pool. New in 0.10.0 GetRawMemPool: returns all transaction identifiers (TXIDs) in the memory pool as a JSON array, or detailed information about each transaction in the memory pool as a JSON object. GetTxOut: returns details about a transaction output. Only unspent transaction outputs (UTXOs) are guaranteed to be available. GetTxOutProof: returns a hex-encoded proof that one or more specified transactions were included in a block. New in 0.11.0 GetTxOutSetInfo: returns statistics about the confirmed unspent transaction output (UTXO) set. Note that this call may take some time and that it only counts outputs from confirmed transactions—it does not count outputs from the memory pool. VerifyChain: verifies each entry in the local block chain database. VerifyTxOutProof: verifies that a proof points to one or more transactions in a block, returning the transactions the proof commits to and throwing an RPC error if the block is not in our best block chain. New in 0.11.0 End of explanation """ ## N.B. our balance is zero in the genesis block print 'Initial balance, before any mining ' + str(Bob.getbalance()) Bob.generate(101) print 'Balance after mining 101 blocks ' + str(Bob.getbalance()) """ Explanation: Bootstrap Bootstrap some bitcoins so we can spend them. In regtest, you use generate and mine 101 blocks rewarding you with 50 bitcoins in a coinbase transaction. On testnet, you use a faucet to get some bitcoins. On mainnet, you need actual bitcoins. We will stick with regtest for now. 
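Why 101 blocks rather than one? A coinbase reward only becomes spendable after 100 further confirmations (coinbase maturity). A minimal sketch of the effect, assuming it is run on a fresh regtest chain in place of the cell above:

```python
# Sketch: coinbase rewards need 100 confirmations before they count towards the balance.
Bob.generate(1)      # the first reward exists but is still immature
print 'balance after   1 block : ' + str(Bob.getbalance())    # 0
Bob.generate(100)    # 100 more blocks mature that first reward
print 'balance after 101 blocks: ' + str(Bob.getbalance())    # 50
```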
End of explanation """ getblockcount = Bob.getblockcount() print '\ngetblockcount = ' + str(getblockcount) getblockhash = Bob.getblockhash(getblockcount) print '\ngetblockhash = ' + str(getblockhash) print '\ngetblock\n' getblock = Bob.getblock(getblockhash) print getblock tx = getblock['tx'] print '\n' + str(len(tx)) + ' Transactions\n' print tx for i in range(len(tx)): print '\nSerialized Transaction #' + str(i) +'\n' serializedTX = Bob.getrawtransaction(tx[i],0) print serializedTX print '\nRaw Transaction\n' rawTX = Bob.getrawtransaction(tx[i],1) print rawTX print '\nDecoded Transaction\n' decodedTX = Bob.decoderawtransaction(serializedTX) print decodedTX """ Explanation: Raw Transaction RPCs CreateRawTransaction: creates an unsigned serialized transaction that spends a previous output to a new output with a P2PKH or P2SH address. The transaction is not stored in the wallet or transmitted to the network. DecodeRawTransaction: decodes a serialized transaction hex string into a JSON object describing the transaction. DecodeScript: decodes a hex-encoded P2SH redeem script. GetRawTransaction: gets a hex-encoded serialized transaction or a JSON object describing the transaction. By default, Bitcoin Core only stores complete transaction data for UTXOs and your own transactions, so the RPC may fail on historic transactions unless you use the non-default txindex=1 in your Bitcoin Core startup settings. SendRawTransaction: validates a transaction and broadcasts it to the peer-to-peer network. SignRawTransaction: signs a transaction in the serialized transaction format using private keys stored in the wallet or provided in the call. Generate some bitcoins with a mining start Now let's look at the block structure, rerunning code from above. Note that the block height is now 101. There is one transaction for 50BTC. Note that this is a coinbase transaction with just a vout and no vin. How to make serialized transaction? End of explanation """ from datetime import datetime as dt #import pytz print rawTX print '\n\n' print 'blockhash = ' + str(rawTX['blockhash']) + '\n' for i in range(len(rawTX['vout'])): spk = rawTX['vout'][i]['scriptPubKey'] print 'vout ' + str(i) + ' : ' + str(spk) + '\n' for field in spk.keys(): #['reqSigs','hex','addresses','asm','type']: print 'vout ' + str(i) + ' ' + field + ' : ' + str(spk[field]) print 'vout ' + str(i) + ' value : ' + str(rawTX['vout'][i]['value']) print 'vout ' + str(i) + ' n : ' + str(rawTX['vout'][i]['n']) print '\nserialized hex = ' + str(rawTX['hex']) print 'Is serialized hex == rawTX["hex"]? ' + str(rawTX['hex']==serializedTX) + '\n' for i in range(len(rawTX['vin'])): spk = rawTX['vin'][i] print 'vin ' + str(i) + ' : ' + str(spk) + '\n' for field in spk.keys(): #['reqSigs','hex','addresses','asm','type']: print 'vin ' + str(i) + ' ' + field + ' : ' + str(spk[field]) print '\n' for field in ['txid','blocktime','version','confirmations','time','locktime']: if field in ['blocktime','time','locktime']: print field + ' = ' + str(rawTX[field]) +\ ' ' + dt.fromtimestamp(rawTX[field]).strftime('%Y-%m-%d:%H%M%S') else: print field + ' = ' + str(rawTX[field]) """ Explanation: Now that we have a raw transaction, let's look at the details. As noted, the first 100 blocks in regtest are blank, necessary for mining our first coinbase reward of 50BTC. This shows in block 101 with a single transaction, denoted rawTX below. 
rawTX is a JSON object with: blockhash from block 101 vout an array (len=1) of scriptPubKey, a JSON object of reqSigs and a value (50BTC here) vin an array (len=1) with a key called "coinbase", some value and a sequence number We veirfy that the serialized transaction hex is the same as the "hex" entry in the transaction. We note that the txid is repeated in the raw transaction. We also note that the confirmations = 1 (no further mining has been done on this block), that there is a single transaction (the mining of 50btc) and the blocktime and time are the same, while the locktime is the beginning of the epoch. Note that an address has been created, in general an array of addresses, each corresponding to a transaction output (vout). Here, a new address has been created corresponding to the miner's address for the 50BTC reward. End of explanation """ print 'Mary\'s balance = ' + str(Mary.getbalance()) #print 'Mary\'s peers: ' #print Mary.getpeerinfo() getnewaddress = Mary.getnewaddress() print '\nNew address ' + str(getnewaddress) print '\nMary\'s address has received how many BTC? ' +\ str(Mary.getreceivedbyaddress(getnewaddress,0)) ##have Bob (proxy) send 25 bitcoins to Mary txid = Bob.sendtoaddress(getnewaddress,25) getmempoolinfo = Bob.getmempoolinfo() getrawmempool = Bob.getrawmempool(True) print '\ngetmempoolinfo ' + str(getmempoolinfo) print '\ngetrawmempool' print getrawmempool print '\n' for key in getrawmempool.keys(): for field in getrawmempool[key].keys(): print str(field) + ' : ' + str(getrawmempool[key][field]) #print '\ntxid from sendtoaddress output ' + str(txid) print '\nIs the send txid the same as memory pool txid? ****' +\ str(txid == getrawmempool.keys()[0]) + '****' print '\nMary\'s balance before mining = ' + str(Mary.getbalance()) print 'Bob\'s balance before mining = ' + str(Bob.getbalance()) ##how can I see transaction details before mining? print '\nMemory Pool Raw Transaction Data\n' import pprint pprint.pprint(Bob.getrawtransaction(txid,1)) ##N.B. no transaction on the blockchain yet!!! """ Explanation: Let's create an address and send some bitcoins there. Some things to notice: We set up a new proxy, using a different RPC port on regtest Address is NOT on blockchain, it is just a 32 byte hash that is almost certainly unique and never before used. We send some coins to new address, note the transaction structure The mempool shows the pending transaction We need to mine to write the transaction to the blockchain End of explanation """ for i in range(7): Bob.generate(1) getblockcount = Bob.getblockcount() getblockhash = Bob.getblockhash(getblockcount) getblock = Bob.getblock(getblockhash) print 'Block #' + str(getblockcount) + ' Mary\'s balance ' + str(Mary.getbalance()) print 'txids ' + str(getblock['tx']) print ' Mary\'s balance ' + str(Mary.getbalance()) """ Explanation: Bob mines 6 more blocks, after the fifth, the 25BTC sent to Mary is confirmed and shows in her balance. Note that the transaction created in block 101, when Bob sent Mary 25BTC shows up in the first block mined, but the balance isn't updated until some number of blocks have been processed. 
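To see the confirmation count directly, here is a small sketch (using the txid returned by sendtoaddress above) that queries the transaction from Mary's wallet and compares balances computed with different minimum-confirmation thresholds:

```python
# Sketch: confirmations grow by one with every block mined on top of the transaction.
tx = Mary.gettransaction(txid)
print 'confirmations         : ' + str(tx['confirmations'])
print 'balance (minconf = 1) : ' + str(Mary.getbalance('*', 1))
print 'balance (minconf = 6) : ' + str(Mary.getbalance('*', 6))
```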
End of explanation """ print '\nBob\'s Wallet\n' wallet= Bob.getwalletinfo() for key in wallet.keys(): print key + '\t' + str(wallet[key]) print '\nMary\'s Wallet\n' wallet= Mary.getwalletinfo() for key in wallet.keys(): print key + '\t' + str(wallet[key]) import time print '\nMary has ' + str(len(Mary.listtransactions())) + ' transactions from Bob\'s largesse' print '\nMary\'s first address has received how many BTC? ' +\ str(Mary.getreceivedbyaddress( Mary.listtransactions()[0]['address'],0)) #str(Mary.getreceivedbyaddress('mmT3ER6w98jZAKwtTZrr3DSxrchS7fGxKW',0)) print '\nBob has ' + str(len(Bob.listtransactions())) + ' transactions from all that mining' #print Bob.listtransactions() ##let's send Mary some more of Bob's bitcoins so we can see her unconfirmed balance getnewaddress = Mary.getnewaddress() print '\nNew address ' + str(getnewaddress) ##have Bob (proxy) send 0.5 bitcoins to Mary txid = Bob.sendtoaddress(getnewaddress,0.5) time.sleep(2) print '\nMary\'s unconfirmed balance ' + str(Mary.getunconfirmedbalance()) print '\nMary\'s confirmed balance ' + str(Mary.getbalance()) ##let's mine 6 blocks Bob.generate(6) time.sleep(2) print 'After Bob\'s mining' print '\nMary\'s unconfirmed balance ' + str(Mary.getunconfirmedbalance()) print '\nMary\'s confirmed balance ' + str(Mary.getbalance()) """ Explanation: Wallet RPCs Note: the wallet RPCs are only available if Bitcoin Core was built with wallet support, which is the default. AddMultiSigAddress: adds a P2SH multisig address to the wallet. BackupWallet: safely copies wallet.dat to the specified file, which can be a directory or a path with filename. DumpPrivKey: returns the wallet-import-format (WIP) private key corresponding to an address. (But does not remove it from the wallet.) DumpWallet: creates or overwrites a file with all wallet keys in a human-readable format. EncryptWallet: encrypts the wallet with a passphrase. This is only to enable encryption for the first time. After encryption is enabled, you will need to enter the passphrase to use private keys. GetAccountAddress: returns the current Bitcoin address for receiving payments to this account. If the account doesn’t exist, it creates both the account and a new address for receiving payment. Once a payment has been received to an address, future calls to this RPC for the same account will return a different address. GetAccount: returns the name of the account associated with the given address. GetAddressesByAccount: returns a list of every address assigned to a particular account. GetBalance: gets the balance in decimal bitcoins across all accounts or for a particular account. GetNewAddress: returns a new Bitcoin address for receiving payments. If an account is specified, payments received with the address will be credited to that account. GetRawChangeAddress: returns a new Bitcoin address for receiving change. This is for use with raw transactions, not normal use. GetReceivedByAccount: returns the total amount received by addresses in a particular account from transactions with the specified number of confirmations. It does not count coinbase transactions. GetReceivedByAddress: returns the total amount received by the specified address in transactions with the specified number of confirmations. It does not count coinbase transactions. GetTransaction: gets detailed information about an in-wallet transaction. Updated in 0.10.0 GetUnconfirmedBalance: returns the wallet’s total unconfirmed balance. GetWalletInfo: provides information about the wallet. 
New in 0.9.2 ImportAddress: adds an address or pubkey script to the wallet without the associated private key, allowing you to watch for transactions affecting that address or pubkey script without being able to spend any of its outputs. New in 0.10.0 ImportPrivKey: adds a private key to your wallet. The key should be formatted in the wallet import format created by the dumpprivkey RPC. ImportWallet: imports private keys from a file in wallet dump file format (see the dumpwallet RPC). These keys will be added to the keys currently in the wallet. This call may need to rescan all or parts of the block chain for transactions affecting the newly-added keys, which may take several minutes. KeyPoolRefill: fills the cache of unused pre-generated keys (the keypool). ListAccounts: lists accounts and their balances. Updated in 0.10.0 ListAddressGroupings: lists groups of addresses that may have had their common ownership made public by common use as inputs in the same transaction or from being used as change from a previous transaction. ListLockUnspent: returns a list of temporarily unspendable (locked) outputs. ListReceivedByAccount: lists the total number of bitcoins received by each account. Updated in 0.10.0 ListReceivedByAddress: lists the total number of bitcoins received by each address. Updated in 0.10.0 ListSinceBlock: gets all transactions affecting the wallet which have occurred since a particular block, plus the header hash of a block at a particular depth. Updated in 0.10.0 ListTransactions: returns the most recent transactions that affect the wallet. Updated in 0.10.0 ListUnspent: returns an array of unspent transaction outputs belonging to this wallet. Updated in 0.10.0 LockUnspent: temporarily locks or unlocks specified transaction outputs. A locked transaction output will not be chosen by automatic coin selection when spending bitcoins. Locks are stored in memory only, so nodes start with zero locked outputs and the locked output list is always cleared when a node stops or fails. Move: moves a specified amount from one account in your wallet to another using an off-block-chain transaction. SendFrom: spends an amount from a local account to a bitcoin address. SendMany: creates and broadcasts a transaction which sends outputs to multiple addresses. SendToAddress: spends an amount to a given address. SetAccount: puts the specified address in the given account. SetTxFee: sets the transaction fee per kilobyte paid by transactions created by this wallet. SignMessage: signs a message with the private key of an address. WalletLock: removes the wallet encryption key from memory, locking the wallet. After calling this method, you will need to call walletpassphrase again before being able to call any methods which require the wallet to be unlocked. WalletPassphrase: stores the wallet decryption key in memory for the indicated number of seconds. Issuing the walletpassphrase command while the wallet is already unlocked will set a new unlock time that overrides the old one. WalletPassphraseChange: changes the wallet passphrase from ‘old passphrase’ to ‘new passphrase’. End of explanation """ # Eloipool - Python Bitcoin pool server # Copyright (C) 2011-2012 Luke Dashjr <luke-jr+eloipool@utopios.org> # # This program is free software: you can redistribute it and/or modify # it under the terms of the GNU Affero General Public License as # published by the Free Software Foundation, either version 3 of the # License, or (at your option) any later version. 
# # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU Affero General Public License for more details. # # You should have received a copy of the GNU Affero General Public License # along with this program. If not, see <http://www.gnu.org/licenses/>. # original code https://github.com/Crypto-Expert/stratum-mining/blob/master/lib/merkletree.py # http://code.runnable.com/U3jqtyYUmAUxtsSS/bitcoin-block-merkle-root-python from hashlib import sha256 from util import doublesha class MerkleTree: def __init__(self, data, detailed=False): self.data = data self.recalculate(detailed) self._hash_steps = None def recalculate(self, detailed=False): L = self.data steps = [] if detailed: detail = [] PreL = [] StartL = 0 else: detail = None PreL = [None] StartL = 2 Ll = len(L) if detailed or Ll > 1: while True: if detailed: detail += L if Ll == 1: break steps.append(L[1]) if Ll % 2: L += [L[-1]] L = PreL + [doublesha(L[i] + L[i + 1]) for i in range(StartL, Ll, 2)] Ll = len(L) self._steps = steps self.detail = detail def hash_steps(self): if self._hash_steps == None: self._hash_steps = doublesha(''.join(self._steps)) return self._hash_steps def withFirst(self, f): steps = self._steps for s in steps: f = doublesha(f + s) return f def merkleRoot(self): return self.withFirst(self.data[0]) # MerkleTree tests def _test(): import binascii import time mt = MerkleTree([None] + [binascii.unhexlify(a) for a in [ '999d2c8bb6bda0bf784d9ebeb631d711dbbbfe1bc006ea13d6ad0d6a2649a971', '3f92594d5a3d7b4df29d7dd7c46a0dac39a96e751ba0fc9bab5435ea5e22a19d', 'a5633f03855f541d8e60a6340fc491d49709dc821f3acb571956a856637adcb6', '28d97c850eaf917a4c76c02474b05b70a197eaefb468d21c22ed110afe8ec9e0', ]]) assert( b'82293f182d5db07d08acf334a5a907012bbb9990851557ac0ec028116081bd5a' == binascii.b2a_hex(mt.withFirst(binascii.unhexlify('d43b669fb42cfa84695b844c0402d410213faa4f3e66cb7248f688ff19d5e5f7'))) ) print '82293f182d5db07d08acf334a5a907012bbb9990851557ac0ec028116081bd5a' txes = [binascii.unhexlify(a) for a in [ 'd43b669fb42cfa84695b844c0402d410213faa4f3e66cb7248f688ff19d5e5f7', '999d2c8bb6bda0bf784d9ebeb631d711dbbbfe1bc006ea13d6ad0d6a2649a971', '3f92594d5a3d7b4df29d7dd7c46a0dac39a96e751ba0fc9bab5435ea5e22a19d', 'a5633f03855f541d8e60a6340fc491d49709dc821f3acb571956a856637adcb6', '28d97c850eaf917a4c76c02474b05b70a197eaefb468d21c22ed110afe8ec9e0', ]] s = time.time() mt = MerkleTree(txes) for x in range(100): y = int('d43b669fb42cfa84695b844c0402d410213faa4f3e66cb7248f688ff19d5e5f7', 16) #y += x coinbasehash = binascii.unhexlify("%x" % y) x = binascii.b2a_hex(mt.withFirst(coinbasehash)) print x print time.time() - s if __name__ == '__main__': _test() """ Explanation: Wallet Programs Permitting receiving and spending of satoshis is the only essential feature of wallet software—but a particular wallet program doesn’t need to do both things. Two wallet programs can work together, one program distributing public keys in order to receive satoshis and another program signing transactions spending those satoshis. Wallet programs also need to interact with the peer-to-peer network to get information from the block chain and to broadcast new transactions. However, the programs which distribute public keys or sign transactions don’t need to interact with the peer-to-peer network themselves. 
This leaves us with three necessary, but separable, parts of a wallet system: a public key distribution program, a signing program, and a networked program. In the subsections below, we will describe common combinations of these parts. Full-Service Wallets The simplest wallet is a program which performs all three functions: it generates private keys, derives the corresponding public keys, helps distribute those public keys as necessary, monitors for outputs spent to those public keys, creates and signs transactions spending those outputs, and broadcasts the signed transactions. Full-Service Wallets As of this writing, almost all popular wallets can be used as full-service wallets. The main advantage of full-service wallets is that they are easy to use. A single program does everything the user needs to receive and spend satoshis. The main disadvantage of full-service wallets is that they store the private keys on a device connected to the Internet. The compromise of such devices is a common occurrence, and an Internet connection makes it easy to transmit private keys from a compromised device to an attacker. Utility RPCs CreateMultiSig: creates a P2SH multi-signature address. EstimateFee: estimates the transaction fee per kilobyte that needs to be paid for a transaction to be included within a certain number of blocks. New in 0.10.0 EstimatePriority: estimates the priority that a transaction needs in order to be included within a certain number of blocks as a free high-priority transaction. New in 0.10.0 ValidateAddress: returns information about the given Bitcoin address. VerifyMessage: verifies a signed message. This code determines whether a transaction is included in a Merkle Tree. From Luke Dashjr End of explanation """ #!/usr/bin/env python # example of proof-of-work algorithm import hashlib import time max_nonce = 2 ** 32 # 4 billion def proof_of_work(header, difficulty_bits): # calculate the difficulty target target = 2 ** (256-difficulty_bits) for nonce in xrange(max_nonce): hash_result = hashlib.sha256(str(header)+str(nonce)).hexdigest() # check if this is a valid result, below the target if long(hash_result, 16) < target: print "Success with nonce %d" % nonce print "Hash is %s" % hash_result return (hash_result,nonce) print "Failed after %d (max_nonce) tries" % nonce return nonce if __name__ == '__main__': nonce = 0 hash_result = '' # difficulty from 0 to 31 bits for difficulty_bits in xrange(32): difficulty = 2 ** difficulty_bits print "Difficulty: %ld (%d bits)" % (difficulty, difficulty_bits) print "Starting search..." # checkpoint the current time start_time = time.time() # make a new block which includes the hash from the previous block # we fake a block of transactions - just a string new_block = 'test block with transactions' + hash_result # find a valid nonce for the new block (hash_result, nonce) = proof_of_work(new_block, difficulty_bits) # checkpoint how long it took to find a result end_time = time.time() elapsed_time = end_time - start_time print "Elapsed Time: %.4f seconds" % elapsed_time if elapsed_time > 0: # estimate the hashes per second hash_power = float(long(nonce)/elapsed_time) print "Hashing Power: %ld hashes per second" % hash_power """ Explanation: This code snippet illustrates a simplified proof-of-work algorithm not used by miners; by incementing the nonce and trying difficulties from 1-31 bits (2 - 2^32). Note that this runs for a long time, more than 20 minutes on a 4GB-RAM Ubuntu box. From Mastering Bitcoin End of explanation """
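To make the target arithmetic concrete, a tiny illustrative sketch: every additional difficulty bit halves the target and therefore roughly doubles the expected number of hash attempts.

```python
# Sketch: the target used in proof_of_work() above as a function of difficulty_bits.
for difficulty_bits in (4, 8, 16, 24):
    target = 2 ** (256 - difficulty_bits)
    print "bits=%2d  expected hashes ~ %8d  target=%064x" % (difficulty_bits, 2 ** difficulty_bits, target)
```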
ceos-seo/data_cube_notebooks
notebooks/general/Shapefile_Masking.ipynb
apache-2.0
import sys import os sys.path.append(os.environ.get('NOTEBOOK_ROOT')) import matplotlib.pyplot as plt %matplotlib inline from datacube.utils.aws import configure_s3_access configure_s3_access(requester_pays=True) # Import Data Cube API import utils.data_cube_utilities.data_access_api as dc_api api = dc_api.DataAccessApi() dc = api.dc """ Explanation: <a name="composites_and_shapefiles_top"></a> Cloud-Filtered Mosaics and Shapefile Region Selection <hr> This notebook can be used to create Landsat cloud-filtered mosaics for any time period and location. Selecting regions with shapefiles is also demonstrated. The mosaics can be output as GeoTIFF products for analysis in external GIS tools. The following mosaics are possible: * Median = midpoint of spectral data * Geomedian = Australian median product with improved spectral consistency * Most-Recent = most recent clear pixel * Least-Recent = least recent clear pixel * Max-NDVI = maximum vegetation response * Min-NDVI = minimum vegetation response Users should review the "Cloud_Statistics" notebook for more information about the cloud statistics for any given temporal and spatial combination. An understanding of the underlying data is important for creating a valid mosaic for further analyses. <hr> Index Import Dependencies and Connect to the Data Cube Choose Platform and Product Define the Extents of the Analysis Get the Regions Bounded by the Shapefiles Specify the Shapefile Region and Time Range to Load Load the Data and Create the Composite Visualize the Composite Export to GeoTIFF <a id="composites_and_shapefiles_import"></a>Import Dependencies and Connect to the Data Cube &#9652; End of explanation """ # CHANGE HERE >>>>>>>>>>>>>>>>> # Select a Product and Platform # Examples: ghana, kenya, tanzania, sierra_leone, senegal product = 'ls8_usgs_sr_scene' platform = 'LANDSAT_8' collection = 'c1' level = 'l2' # Specify Mosaic Parameters # The mosaic method to use when creating the final composite. # One of ['median', 'geomedian', 'most_recent', 'least_recent', 'max_ndvi', 'min_ndvi'] # The options 'max' and 'min' require a spectral index to be calculated. mosaic_method = 'median' """ Explanation: <span id="composites_and_shapefiles_plat_prod">Choose Platform and Product &#9652;</span> End of explanation """ # Get product extents prod_extents = api.get_query_metadata(platform=platform, product=product, measurements=[]) full_lat = prod_extents['lat_extents'] print("Lat bounds:", full_lat) full_lon = prod_extents['lon_extents'] print("Lon bounds:", full_lon) time_extents = list(map(lambda time: time.strftime('%Y-%m-%d'), prod_extents['time_extents'])) print("Time bounds:", time_extents) # The code below renders a map that can be used to view the region. 
from utils.data_cube_utilities.dc_display_map import display_map display_map(full_lat, full_lon) """ Explanation: <span id="composites_and_shapefiles_define_extents">Define the Extents of the Analysis &#9652;</span> End of explanation """ import fiona import rasterio import rasterio.mask from shapely.geometry import shape input_shp_root_pth = '../data/Ghana/smallest_biggest_district/' big_distr_shp_pth = input_shp_root_pth + 'biggest_district.shp' sml_distr_shp_pth = input_shp_root_pth + 'smallest_district.shp' """ Explanation: <span id="composites_and_shapefiles_shapefiles_region_bounded">Get the Regions Bounded by the Shapefiles &#9652;</span> End of explanation """ from utils.data_cube_utilities.dc_display_map import display_map with fiona.open(big_distr_shp_pth, 'r') as src: # create a shapely geometry geometry = shape(src[0]['geometry']) # get the bounding box of the shapefile geometry latitude, longitude = [[None]*2, [None]*2] longitude[0], latitude[0] = geometry.bounds[0:2] longitude[1], latitude[1] = geometry.bounds[2:4] display_map(latitude,longitude) """ Explanation: Showing First Shapefile Region ('biggest_district.shp') End of explanation """ with fiona.open(sml_distr_shp_pth, 'r') as src: # create a shapely geometry geometry = shape(src[0]['geometry']) # get the bounding box of the shapefile geometry latitude, longitude = [[None]*2, [None]*2] longitude[0], latitude[0] = geometry.bounds[0:2] longitude[1], latitude[1] = geometry.bounds[2:4] display_map(latitude,longitude) """ Explanation: Showing Second Shapefile Region ('smallest_district.shp') End of explanation """ # Path to shapefile to load data for - one of [big_distr_shp_pth, sml_distr_shp_pth]. # Note that `big_distr_shp_pth` is very resource intensive to process because it is a large area. 
shp_file_pth_to_load = sml_distr_shp_pth # Time period time_extents = ('2017-01-01', '2017-12-31') """ Explanation: <span id="composites_and_shapefiles_shapefiles_region_select">Specify the Shapefile Region and Time Range to Load &#9652;</span> End of explanation """ import numpy as np import warnings with fiona.open(shp_file_pth_to_load, 'r') as src: # create a shapely geometry # this is done for the convenience for the .bounds property only shp_geom = shape(src[0]['geometry']) # get the bounding box of the shapefile geometry latitude, longitude = [[None]*2, [None]*2] longitude[0], latitude[0] = shp_geom.bounds[0:2] longitude[1], latitude[1] = shp_geom.bounds[2:4] # load data for the bounding box of the shapefile geometry measurements = ['red', 'green', 'blue', 'pixel_qa'] if mosaic_method in ['max_ndvi', 'min_ndvi']: measurements += ['nir'] dataset = dc.load(latitude = latitude, longitude = longitude, platform = platform, time = time_extents, product = product, measurements = measurements, dask_chunks={'time':1, 'latitude':1000, 'longitude':1000}) # mask out clouds and invalid data from utils.data_cube_utilities.clean_mask import landsat_clean_mask_full clean_mask = landsat_clean_mask_full(dc, dataset, product=product, platform=platform, collection=collection, level=level) cleaned_dataset = dataset.where(clean_mask) # rasterize the shapefile geometry to a boolean mask from datacube.utils import geometry src_crs = src.crs_wkt if src.crs_wkt != '' else "EPSG:4326" crs = geometry.CRS(src_crs) first_geometry = src[0]['geometry'] geom = geometry.Geometry(first_geometry, crs=crs) geobox = dataset.geobox shp_mask = rasterio.features.geometry_mask( [geom.to_crs(geobox.crs)], out_shape=geobox.shape, transform=geobox.affine, all_touched=True, invert=True) # Create the final composite. 
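# (Added note) `final_dataset` below is simply the cleaned data restricted to the shapefile
# footprint. The mosaic itself is built from `cleaned_dataset` with the selected
# `mosaic_method`, and the shapefile mask is applied again at the end (`final_composite`),
# so pixels outside the district come out as NaN.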
final_dataset = cleaned_dataset.where(shp_mask) with warnings.catch_warnings(): warnings.simplefilter("ignore") if mosaic_method == 'median': from utils.data_cube_utilities.dc_mosaic import create_median_mosaic composite = create_median_mosaic(cleaned_dataset, clean_mask) elif mosaic_method == 'geomedian': from utils.data_cube_utilities.dc_mosaic import create_hdmedians_multiple_band_mosaic composite = create_hdmedians_multiple_band_mosaic(cleaned_dataset, clean_mask) elif mosaic_method == 'most_recent': from utils.data_cube_utilities.dc_mosaic import create_mosaic composite = create_mosaic(cleaned_dataset, clean_mask) elif mosaic_method == 'least_recent': from utils.data_cube_utilities.dc_mosaic import create_mosaic composite = create_mosaic(cleaned_dataset, clean_mask, reverse_time=True) elif mosaic_method == 'max_ndvi': from utils.data_cube_utilities.dc_mosaic import create_max_ndvi_mosaic composite = create_max_ndvi_mosaic(cleaned_dataset, clean_mask) elif mosaic_method == 'min_ndvi': from utils.data_cube_utilities.dc_mosaic import create_min_ndvi_mosaic composite = create_min_ndvi_mosaic(cleaned_dataset, clean_mask) final_composite = composite.where(shp_mask) """ Explanation: <span id="composites_and_shapefiles_retrieve_data">Load the Data and Create the Composite &#9652;</span> End of explanation """ from utils.data_cube_utilities.dc_rgb import rgb fig = plt.figure(figsize=(10,10)) rgb(final_composite, fig=fig) plt.show() """ Explanation: <span id="composites_and_shapefiles_visualize_composite">Visualize the Composite &#9652;</span> End of explanation """ from utils.data_cube_utilities.import_export import export_slice_to_geotiff size_str = 'small_dist' if shp_file_pth_to_load == sml_distr_shp_pth else 'big_dist' output_dir = 'output/geotiffs/DCAL_Custom_Mosaics_and_Shapefiles' if not os.path.exists(output_dir): os.makedirs(output_dir) export_slice_to_geotiff(composite, output_dir + '/{}_{}_composite.tif'.format(size_str, mosaic_method)) !ls -lah output/geotiffs/DCAL_Custom_Mosaics_and_Shapefiles """ Explanation: <span id="composites_and_shapefiles_export_to_geotiff">Export to GeoTIFF &#9652;</span> End of explanation """
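As an optional sanity check (a sketch, not part of the original workflow), the exported file can be re-opened with rasterio to confirm that the band count and georeferencing were written as expected:

```python
import rasterio

# Re-open the GeoTIFF written above and inspect its metadata.
out_file = output_dir + '/{}_{}_composite.tif'.format(size_str, mosaic_method)
with rasterio.open(out_file) as src:
    print(src.count, 'bands,', src.width, 'x', src.height, 'pixels, CRS:', src.crs)
```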
zzsza/TIL
python/image processing.ipynb
mit
from PIL import Image
import numpy as np

def average_hash(fname, size = 16):
    img = Image.open(fname)
    img = img.convert('L') # 'L' is grayscale; mode '1' would binarize, and RGB, RGBA, CMYK etc. are also supported
    img = img.resize((size, size), Image.ANTIALIAS)
    pixel_data = img.getdata()
    pixels = np.array(pixel_data)
    pixels = pixels.reshape((size, size))
    avg = pixels.mean()
    diff = 1 * (pixels > avg)
    return diff

def np2hash(ahash):
    bhash = []
    for nl in ahash.tolist():
        s1 = [str(i) for i in nl]
        s2 = ''.join(s1)
        i = int(s2, 2)
        bhash.append('%04x' % i)
    return ''.join(bhash)

ahash = average_hash('eiffel_tower.jpeg')
print(ahash)
print(np2hash(ahash))
"""
Explanation: image processing
Average Hash
A representation of an image as a hash value that can be compared with other images.
Hash functions
Hash functions such as MD5 or SHA256 can convert any data value into a short hash value.
However, such hash functions should not be used to detect whether images are similar: resizing, color correction or a change of compression format all change the hash value.
End of explanation
"""
import os, re

search_dir = "./image/101_ObjectCategories/"
cache_dir = "./image/cache_avhash"
if not os.path.exists(cache_dir):
    os.mkdir(cache_dir)

def average_hash(fname, size = 16):
    fname2 = fname[len(search_dir):]
    # image cache
    cache_file = cache_dir + "/" + fname2.replace('/','_') + '.csv'
    if not os.path.exists(cache_file):
        img = Image.open(fname)
        img = img.convert('L').resize((size, size), Image.ANTIALIAS)
        pixels = np.array(img.getdata()).reshape((size,size))
        avg = pixels.mean()
        px = 1 * (pixels > avg)
        np.savetxt(cache_file, px, fmt="%.0f", delimiter=",")
    else:
        px = np.loadtxt(cache_file, delimiter=",")
    return px

def hamming_dist(a, b):
    aa = a.reshape(1, -1)
    ab = b.reshape(1, -1)
    dist = (aa != ab).sum()
    return dist

def enum_all_files(path):
    for root, dirs, files in os.walk(path):
        for f in files:
            fname = os.path.join(root, f)
            if re.search(r'\.(jpg|jpeg|png)$', fname):
                yield fname

def find_image(fname, rate):
    src = average_hash(fname)
    for fname in enum_all_files(search_dir):
        dst = average_hash(fname)
        diff_r = hamming_dist(src, dst) / 256
        if diff_r < rate:
            yield (diff_r, fname)

srcfile = search_dir + "/chair/image_0016.jpg"
html = ""
sim = list(find_image(srcfile, 0.25))
sim = sorted(sim, key = lambda x: x[0])
for r, f in sim:
    print(r, ">", f)
    s = '<div style="float:left;"><h3>[ difference : ' + str(r) + '-' + os.path.basename(f) + ']</h3>' + \
        '<p><a href="' + f + '"><img src="' + f + '" width=400>' + '</a></p></div>'
    html += s
html = """<html><head><meta charset="utf8"></head>
    <body><h3> original image </h3><p>
    <img src = '{0}' width=400></p>{1}</body></html>""".format(srcfile, html)
with open("./avgash-search-output.html", "w", encoding="utf-8") as f:
    f.write(html)
print("ok")
"""
Explanation: Caltech 101 data
Hamming distance: for two strings with the same number of characters, the number of positions at which the corresponding characters differ.
We count how many of the 256 values of the hash differ and use this to measure how different two images are.
End of explanation
"""
from PIL import Image
import os, glob
import numpy as np
from sklearn.model_selection import train_test_split

# select the target categories for classification
caltech_dir = "./image/101_ObjectCategories/"   # assumed: the same Caltech-101 directory used above
"""
Explanation: CNN
Resize the images to a fixed size, convert them to 24-bit RGB format -> store them as a NumPy array.
End of explanation
"""
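The last cell above stops short of the actual data preparation. A minimal sketch of what the next step could look like; the category names and image size below are illustrative assumptions, not part of the original notebook:

```python
# Hypothetical continuation: read a few Caltech-101 categories, resize each image,
# convert it to 24-bit RGB and stack everything into NumPy arrays.
categories = ["chair", "camera", "butterfly"]   # illustrative subset
image_size = 64                                 # assumed target size in pixels
X, y = [], []
for idx, cat in enumerate(categories):
    for fpath in glob.glob(caltech_dir + cat + "/*.jpg"):
        img = Image.open(fpath).convert("RGB").resize((image_size, image_size))
        X.append(np.asarray(img))
        y.append(idx)
X = np.array(X)
y = np.array(y)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25)
print(X_train.shape, X_test.shape)
```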
gVallverdu/cookbook
matplotlibrc.ipynb
gpl-2.0
import matplotlib
import matplotlib.style as mpl_style
"""
Explanation: Matplotlib style sheets
This notebook presents how to change the style or appearance of matplotlib plots. In addition to the rcParams dictionary, the matplotlib.style module provides facilities for using style sheets with matplotlib. Look at this page of the matplotlib documentation to know how it works in detail. Hereafter, I present how to load a style and I give you a style sheet I use for my plots.
How it works
End of explanation
"""
mpl_style.available
"""
Explanation: The mpl_style.available attribute outputs the available styles. Since matplotlib version 2 and higher, you can load, for example, seaborn styles directly in matplotlib.
End of explanation
"""
mpl_style.use("seaborn-dark")
"""
Explanation: If you want to use one specific style, you simply have to load it, for example for the seaborn-dark style, using:
End of explanation
"""
mpl_style.use("seaborn-dark")
mpl_style.use(["seaborn-ticks", "seaborn-dark"])
"""
Explanation: Or you can also combine several styles:
End of explanation
"""
matplotlib.get_configdir()
"""
Explanation: Custom styles
There are several ways to change the default plot parameters, either globally or temporarily.
Use the rcParams dictionary
You can directly modify parameters of the rcParams dictionary, for example at the beginning of a notebook, in order to apply a style to all the plots. For example, the following changes the size of the plots and the font size, and asks for a grid to be drawn:
```python
plt.rcParams["figure.figsize"] = (10, 6)
plt.rcParams["font.size"] = 20
plt.rcParams["axes.grid"] = True
```
Use matplotlibrc
You can download the matplotlibrc file from here. This file contains all the style options of the plots. Modify the file as you wish in order to change the default parameters. Look at this page in order to know where you have to save the file. If you want the parameters to be limited to a specific location, copy the matplotlibrc file into your working directory. Then, only the plots you create in this directory will be affected by these parameters.
Create your own style sheet
A matplotlib style sheet has the same format and syntax as the matplotlibrc file. You simply have to put in this file the specific parameters you need. Suppose that at the beginning of all your jupyter notebooks, you always modify the same keys of the rcParams dictionary. You can then write these parameters in a style sheet and load that style sheet at the beginning of the notebook. In order to know where you have to save your style sheet, you can run the following command.
End of explanation
"""
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
"""
Explanation: The style sheets have to be in a subdirectory called stylelib of the above location. You have to save the style sheets with names such as my_style.mplstyle. Hereafter is the style sheet I wrote to produce figures for scientific publications. This is not a perfect style sheet but a not too bad working example. Just save it and put your own touch on it.
my style sheet
```matplotlibrc
# MATPLOTLIBRC FORMAT
# matplotlib configuration for plots for publications.
# FIGURE
figure.figsize : 11.67, 8.27
savefig.dpi : 300
savefig.bbox : tight

# FONT
font.size : 24
font.family : serif

# LaTeX
mathtext.default : regular

# AXES
axes.linewidth : 2
axes.grid : True

# TICKS
xtick.direction : in
xtick.top : True
xtick.major.width : 2
xtick.major.size : 10
xtick.minor.visible : True
xtick.minor.width : 2
xtick.minor.size : 5
ytick.direction : in
ytick.right : True
ytick.major.width : 2
ytick.major.size : 10
ytick.minor.visible : True
ytick.minor.width : 2
ytick.minor.size : 5
```
Example
End of explanation
"""
x = np.random.uniform(0, 10, 30)
x.sort()
yt = 2 * x + 1
y = yt + np.random.normal(loc=0, scale=2, size=yt.size)  # noise with one sample per point (y is not defined yet at this line)
plt.plot(x, yt, label="model")
plt.plot(x, y, "o")
plt.xlabel("x values (unit)")
plt.ylabel("y values (unit)")
plt.title("A plot")
"""
Explanation: Plot a simple linear function as if it were a model of some experimental data.
End of explanation
"""
with plt.style.context(('publi')):
    plt.plot(x, yt, label="model")
    plt.plot(x, y, "o")
    plt.xlabel("x values (unit)")
    plt.ylabel("y values (unit)")
    plt.title("A plot")
"""
Explanation: Now, suppose you have saved your style sheet as publi.mplstyle in the right directory. You can load the style and draw the plot:
End of explanation
"""
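If you prefer the style to apply to every subsequent figure rather than only inside the with block, it can also be loaded globally (assuming the sheet really is saved as publi.mplstyle in the stylelib directory mentioned above):

```python
# Apply the custom style globally instead of through a context manager.
plt.style.use('publi')

plt.plot(x, yt, label="model")
plt.plot(x, y, "o")
plt.xlabel("x values (unit)")
plt.ylabel("y values (unit)")
plt.title("A plot with the style applied globally")
```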
FluVigilanciaBR/fludashboard
Notebooks/Brazilian_epiweek.ipynb
gpl-3.0
from episem import episem
"""
Explanation: Table of Contents
1 Using Brazilian epidemiological week definition
1.1 Import module episem
1.2 Example 1: 2016-10-31
1.2.1 Passing string
1.2.2 Passing datetime.datetime
1.3 Example 2: 2016-01-01
1.4 Example 3: 2017-01-01
1.5 Comparing with isocalendar
# Using Brazilian epidemiological week definition
The Brazilian epidemiological week (epiweek) is defined from Sunday to Saturday. Wednesday is defined as the turning point for changing from one year to another and deciding whether Jan 1st is included in the first epiweek of the new year or still in the last epiweek of the previous one. That is, if the weekday of Jan 1st is between Sunday and Wednesday (included), then it is epiweek 01 of the new year. If the weekday is between Thursday and Saturday, then it falls in the last epiweek of the previous year (typically epiweek 52).
The function episem takes both pieces of information into account to return the epiweek relative to a given date. Input can be a string in the format YYYY-MM-DD or type datetime.datetime
See episem.py for details
## Import module episem
End of explanation
"""
d = '2010-10-01'
episem(d)
episem(d,out='W')
"""
Explanation: Example 1: 2016-10-31
Function episem can take 1 to 3 inputs:
episem(x, sep='W', out='YW')
'''
@param x: date in format string YYYY-MM-DD or as datetime.datetime
@param sep: separator character
@param out: returned info.
Y returns epiyear alone, W returns only epiweek and YW returns epiyear and epiweek separated by sep
returns str
'''
Passing string
End of explanation
"""
import datetime
datetime.datetime.strptime(d, '%Y-%m-%d')
dt = datetime.datetime.strptime(d, '%Y-%m-%d')
episem(dt)
"""
Explanation: Passing datetime.datetime
End of explanation
"""
dt2 = datetime.datetime.strptime('2016-01-01', '%Y-%m-%d')
dt2.isoweekday()
"""
Explanation: Example 2: 2016-01-01
2016-01-01 was a Friday, as can be seen with the isoweekday function, which returns 1 for Monday and 7 for Sunday:
End of explanation
"""
episem(dt2)
"""
Explanation: Therefore, according to the Brazilian epiweek system, it should fall in the last epiweek of year 2015:
End of explanation
"""
dt3 = datetime.datetime.strptime('2017-01-01', '%Y-%m-%d')
dt3.isoweekday()
episem(dt3)
"""
Explanation: Example 3: 2017-01-01
2017-01-01 is a Sunday. Therefore, it should fall in the first epiweek of 2017:
End of explanation
"""
print('Date: %s\nISO-calendar: %s\nBR-epiweek: %s\n' % (dt.date(), dt.isocalendar(), episem(dt)))
print('Date: %s\nISO-calendar: %s\nBR-epiweek: %s\n' % (dt2.date(), dt2.isocalendar(), episem(dt2)))
print('Date: %s\nISO-calendar: %s\nBR-epiweek: %s\n' % (dt3.date(), dt3.isocalendar(), episem(dt3)))
"""
Explanation: Comparing with isocalendar
The week of the year, according to the ISO calendar, is calculated slightly differently from the Brazilian epiweek system. Therefore, although for some days of the year the two systems give the same number, for others the results can differ. In particular, the change of base year and whether or not a week 53 is introduced are very sensitive points. We show here the comparison between the two systems for the dates already used to highlight this issue. The function isocalendar returns the base year, the week and the weekday for a given date.
End of explanation
"""
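For readers who want to see the rule in code, here is an illustrative sketch (this is not the episem.py implementation, just the Sunday-to-Wednesday rule written out):

```python
# Illustrative sketch of the Brazilian rule: weeks run Sunday-Saturday, and Jan 1st
# belongs to epiweek 01 of the new year only when it falls on Sunday through Wednesday.
def first_epiweek_sunday(year):
    jan1 = datetime.date(year, 1, 1)
    weekday = int(jan1.strftime('%w'))        # 0 = Sunday ... 6 = Saturday
    if weekday <= 3:                          # Sun..Wed -> epiweek 01 contains Jan 1st
        return jan1 - datetime.timedelta(days=weekday)
    return jan1 + datetime.timedelta(days=7 - weekday)

print(first_epiweek_sunday(2016))   # 2016-01-03: Jan 1st 2016 (a Friday) stays in 2015
print(first_epiweek_sunday(2017))   # 2017-01-01: Jan 1st 2017 (a Sunday) opens epiweek 01
```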
xunilrj/sandbox
courses/IMTx-Queue-Theory/Week2_Lab_MM1.ipynb
apache-2.0
%matplotlib inline from pylab import * lambda_ = 4 mu = 5 ################### # Write a function that computes the probability Pa that the next event # is an arrival (when the system is not empty) def Pa(lambda_,mu): return lambda_/(mu+lambda_) ################### V1 = Pa(lambda_,mu) """ Explanation: <p><font size="5"> MOOC: Understanding queues</font></p> <p><font size="5"> Python Lab </p> </br> <p><font size="5"> Week II: M/M/1 queue simulation </p> </br> In this lab, we are going to simulate the evolution of the number of customers in a M/M/1 queue. Let $\lambda$ and $\mu$ represent the arrival and departure rates. We simulate the following events: arrival of a new client in the system, or departure of a client from the system. Additionally, we record the value of the number of customers in the system at these instants. 1) We assume that the system is not empty. For $\lambda=4$ and $\mu=5$, what is the probability $Pa$ that the next event is an arrival? End of explanation """ ################### # Supply the rate of the exponential distribution # that represents the time until the next event (departure or arrival) # if the system is not empty def Rate(lambda_,mu): return lambda_+mu ################### V2 = Rate(lambda_,mu) """ Explanation: 2) Assume that the system is not empty. The time before the next event (departure or arrival) follows an exponential distribution. What is the rate of this exponential distribution? End of explanation """ def generate_MM1(lambda_=4,mu=5,N0=5,Tmax=200): """ function generate_MM1(N0 = 5,Tmax=200) generates an MM1 file INPUTS ------ lambda, mu: arrival and departure rates N0: initial state of the system (default = 5) Tmax: duration of the observation (default = 200) OUTPUTS ------- T: list of time of events (arrivals or departures) over [0,T] N: list of system states (at T(t): N->N+1 or N->N-1) """ seed(20) tau = 0 # initial instant T = [0] # list of instants of events N = [N0] # initial state of the system, list of state evolutions while T[-1]<Tmax: if N[-1]==0: tau = -1./lambda_*log(rand()) # inter-event time when N(t)=0 event = 1 # arrival else: tau = -1./Rate(lambda_,mu)*log(rand()) # inter-event time when N(t)>0 event = 2*(rand()<Pa(lambda_,mu))-1 # +1 for an arrival (with probability Pa), -1 for a departure N = N + [N[-1]+event] T = T + [T[-1]+tau] T = T[:-1] # event after Tmax is discarded N = N[:-1] return T,N # Plotting the number of clients in the system T,N = generate_MM1() rcParams['figure.figsize'] = [15,3] plot(T,N,'.b') xlabel('Time') ylabel('Number of customers') """ Explanation: 3) The implementation of the function generate_MM1(lambda_=4, mu=5, N0 = 5, Tmax=200) with entries lambda, mu: arrival and departure rates N0: initial number of customers in the system Tmax: time interval over which the evolution of the queue is simulated and outputs T: vector of instants of events (arrivals or departures) over [0,Tmax] N: vector of the number of customers in the system at instants in T is given below. Execute this code to plot the evolution the number of clients in the system against time. End of explanation """ T,N = generate_MM1(lambda_=4,mu=3) rcParams['figure.figsize'] = [15,3] plot(T,N,'.b') xlabel('Time') ylabel('Number of customers') ##################### # Supply the number of customers at Tmax n = N[-1] print('At Tmax, N={}'.format(n)) ##################### V3 = n """ Explanation: 4) Letting now $\lambda=4$ and $\mu=3$, what do you notice when running the function generate_MM1? What is the value of the number of customers at $Tmax=200$? 
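A quick numerical sanity check of the two answers above (purely illustrative, not required by the lab): the minimum of two independent exponential variables with rates $\lambda$ and $\mu$ is exponential with rate $\lambda+\mu$, and the arrival wins the race with probability $\lambda/(\lambda+\mu)$.

```python
# Illustrative check: simulate many competing arrival/departure exponential clocks.
n = 100000
arrivals = -1./lambda_ * log(rand(n))       # Exp(lambda_) samples
departures = -1./mu * log(rand(n))          # Exp(mu) samples
print((arrivals < departures).mean(), Pa(lambda_, mu))             # both ~ 4/9
print(minimum(arrivals, departures).mean(), 1./Rate(lambda_, mu))  # both ~ 1/9
```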
End of explanation """ print("---------------------------\n" +"RESULTS SUPPLIED FOR LAB 2:\n" +"---------------------------") results = ("V"+str(k) for k in range(1,4)) for x in results: try: print(x+" = {0:.2f}".format(eval(x))) except: print(x+": variable is undefined") """ Explanation: Your answers for the exercise End of explanation """
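A closing note on question 4: with $\lambda=4 > \mu=3$ the M/M/1 queue is unstable, so the number of customers drifts upward at average rate $\lambda-\mu = 1$ customer per unit time. A small illustrative overlay (not part of the graded answers) makes that drift visible:

```python
# Illustrative: overlay the mean drift line N0 + (lambda_ - mu) * t on an unstable path.
T, N = generate_MM1(lambda_=4, mu=3)
rcParams['figure.figsize'] = [15, 3]
plot(T, N, '.b')
plot(T, N[0] + (4 - 3) * array(T), 'r-', label=r'$N_0 + (\lambda-\mu)\,t$')
legend()
xlabel('Time')
ylabel('Number of customers')
```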
AllenDowney/ThinkBayes2
soln/chap02.ipynb
mit
import pandas as pd table = pd.DataFrame(index=['Bowl 1', 'Bowl 2']) """ Explanation: Bayes's Theorem Think Bayes, Second Edition Copyright 2020 Allen B. Downey License: Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0) In the previous chapter, we derived Bayes's Theorem: $$P(A|B) = \frac{P(A) P(B|A)}{P(B)}$$ As an example, we used data from the General Social Survey and Bayes's Theorem to compute conditional probabilities. But since we had the complete dataset, we didn't really need Bayes's Theorem. It was easy enough to compute the left side of the equation directly, and no easier to compute the right side. But often we don't have a complete dataset, and in that case Bayes's Theorem is more useful. In this chapter, we'll use it to solve several more challenging problems related to conditional probability. The Cookie Problem We'll start with a thinly disguised version of an urn problem: Suppose there are two bowls of cookies. Bowl 1 contains 30 vanilla cookies and 10 chocolate cookies. Bowl 2 contains 20 vanilla cookies and 20 chocolate cookies. Now suppose you choose one of the bowls at random and, without looking, choose a cookie at random. If the cookie is vanilla, what is the probability that it came from Bowl 1? What we want is the conditional probability that we chose from Bowl 1 given that we got a vanilla cookie, $P(B_1 | V)$. But what we get from the statement of the problem is: The conditional probability of getting a vanilla cookie, given that we chose from Bowl 1, $P(V | B_1)$ and The conditional probability of getting a vanilla cookie, given that we chose from Bowl 2, $P(V | B_2)$. Bayes's Theorem tells us how they are related: $$P(B_1|V) = \frac{P(B_1)~P(V|B_1)}{P(V)}$$ The term on the left is what we want. The terms on the right are: $P(B_1)$, the probability that we chose Bowl 1, unconditioned by what kind of cookie we got. Since the problem says we chose a bowl at random, we assume $P(B_1) = 1/2$. $P(V|B_1)$, the probability of getting a vanilla cookie from Bowl 1, which is 3/4. $P(V)$, the probability of drawing a vanilla cookie from either bowl. To compute $P(V)$, we can use the law of total probability: $$P(V) = P(B_1)~P(V|B_1) ~+~ P(B_2)~P(V|B_2)$$ Plugging in the numbers from the statement of the problem, we have $$P(V) = (1/2)~(3/4) ~+~ (1/2)~(1/2) = 5/8$$ We can also compute this result directly, like this: Since we had an equal chance of choosing either bowl and the bowls contain the same number of cookies, we had the same chance of choosing any cookie. Between the two bowls there are 50 vanilla and 30 chocolate cookies, so $P(V) = 5/8$. Finally, we can apply Bayes's Theorem to compute the posterior probability of Bowl 1: $$P(B_1|V) = (1/2)~(3/4)~/~(5/8) = 3/5$$ This example demonstrates one use of Bayes's theorem: it provides a way to get from $P(B|A)$ to $P(A|B)$. This strategy is useful in cases like this where it is easier to compute the terms on the right side than the term on the left. Diachronic Bayes There is another way to think of Bayes's theorem: it gives us a way to update the probability of a hypothesis, $H$, given some body of data, $D$. This interpretation is "diachronic", which means "related to change over time"; in this case, the probability of the hypotheses changes as we see new data. 
Rewriting Bayes's theorem with $H$ and $D$ yields: $$P(H|D) = \frac{P(H)~P(D|H)}{P(D)}$$ In this interpretation, each term has a name: $P(H)$ is the probability of the hypothesis before we see the data, called the prior probability, or just prior. $P(H|D)$ is the probability of the hypothesis after we see the data, called the posterior. $P(D|H)$ is the probability of the data under the hypothesis, called the likelihood. $P(D)$ is the total probability of the data, under any hypothesis. Sometimes we can compute the prior based on background information. For example, the cookie problem specifies that we choose a bowl at random with equal probability. In other cases the prior is subjective; that is, reasonable people might disagree, either because they use different background information or because they interpret the same information differently. The likelihood is usually the easiest part to compute. In the cookie problem, we are given the number of cookies in each bowl, so we can compute the probability of the data under each hypothesis. Computing the total probability of the data can be tricky. It is supposed to be the probability of seeing the data under any hypothesis at all, but it can be hard to nail down what that means. Most often we simplify things by specifying a set of hypotheses that are: Mutually exclusive, which means that only one of them can be true, and Collectively exhaustive, which means one of them must be true. When these conditions apply, we can compute $P(D)$ using the law of total probability. For example, with two hypotheses, $H_1$ and $H_2$: $$P(D) = P(H_1)~P(D|H_1) + P(H_2)~P(D|H_2)$$ And more generally, with any number of hypotheses: $$P(D) = \sum_i P(H_i)~P(D|H_i)$$ The process in this section, using data and a prior probability to compute a posterior probability, is called a Bayesian update. Bayes Tables A convenient tool for doing a Bayesian update is a Bayes table. You can write a Bayes table on paper or use a spreadsheet, but in this section I'll use a Pandas DataFrame. First I'll make empty DataFrame with one row for each hypothesis: End of explanation """ table['prior'] = 1/2, 1/2 table """ Explanation: Now I'll add a column to represent the priors: End of explanation """ table['likelihood'] = 3/4, 1/2 table """ Explanation: And a column for the likelihoods: End of explanation """ table['unnorm'] = table['prior'] * table['likelihood'] table """ Explanation: Here we see a difference from the previous method: we compute likelihoods for both hypotheses, not just Bowl 1: The chance of getting a vanilla cookie from Bowl 1 is 3/4. The chance of getting a vanilla cookie from Bowl 2 is 1/2. You might notice that the likelihoods don't add up to 1. That's OK; each of them is a probability conditioned on a different hypothesis. There's no reason they should add up to 1 and no problem if they don't. The next step is similar to what we did with Bayes's Theorem; we multiply the priors by the likelihoods: End of explanation """ prob_data = table['unnorm'].sum() prob_data """ Explanation: I call the result unnorm because these values are the "unnormalized posteriors". Each of them is the product of a prior and a likelihood: $$P(B_i)~P(D|B_i)$$ which is the numerator of Bayes's Theorem. If we add them up, we have $$P(B_1)~P(D|B_1) + P(B_2)~P(D|B_2)$$ which is the denominator of Bayes's Theorem, $P(D)$. 
So we can compute the total probability of the data like this: End of explanation """ table['posterior'] = table['unnorm'] / prob_data table """ Explanation: Notice that we get 5/8, which is what we got by computing $P(D)$ directly. And we can compute the posterior probabilities like this: End of explanation """ table2 = pd.DataFrame(index=[6, 8, 12]) """ Explanation: The posterior probability for Bowl 1 is 0.6, which is what we got using Bayes's Theorem explicitly. As a bonus, we also get the posterior probability of Bowl 2, which is 0.4. When we add up the unnormalized posteriors and divide through, we force the posteriors to add up to 1. This process is called "normalization", which is why the total probability of the data is also called the "normalizing constant". The Dice Problem A Bayes table can also solve problems with more than two hypotheses. For example: Suppose I have a box with a 6-sided die, an 8-sided die, and a 12-sided die. I choose one of the dice at random, roll it, and report that the outcome is a 1. What is the probability that I chose the 6-sided die? In this example, there are three hypotheses with equal prior probabilities. The data is my report that the outcome is a 1. If I chose the 6-sided die, the probability of the data is 1/6. If I chose the 8-sided die, the probability is 1/8, and if I chose the 12-sided die, it's 1/12. Here's a Bayes table that uses integers to represent the hypotheses: End of explanation """ from fractions import Fraction table2['prior'] = Fraction(1, 3) table2['likelihood'] = Fraction(1, 6), Fraction(1, 8), Fraction(1, 12) table2 """ Explanation: I'll use fractions to represent the prior probabilities and the likelihoods. That way they don't get rounded off to floating-point numbers. End of explanation """ def update(table): """Compute the posterior probabilities.""" table['unnorm'] = table['prior'] * table['likelihood'] prob_data = table['unnorm'].sum() table['posterior'] = table['unnorm'] / prob_data return prob_data """ Explanation: Once you have priors and likelhoods, the remaining steps are always the same, so I'll put them in a function: End of explanation """ prob_data = update(table2) """ Explanation: And call it like this. End of explanation """ table2 """ Explanation: Here is the final Bayes table: End of explanation """ table3 = pd.DataFrame(index=['Door 1', 'Door 2', 'Door 3']) table3['prior'] = Fraction(1, 3) table3 """ Explanation: The posterior probability of the 6-sided die is 4/9, which is a little more than the probabilities for the other dice, 3/9 and 2/9. Intuitively, the 6-sided die is the most likely because it had the highest likelihood of producing the outcome we saw. The Monty Hall Problem Next we'll use a Bayes table to solve one of the most contentious problems in probability. The Monty Hall problem is based on a game show called Let's Make a Deal. If you are a contestant on the show, here's how the game works: The host, Monty Hall, shows you three closed doors -- numbered 1, 2, and 3 -- and tells you that there is a prize behind each door. One prize is valuable (traditionally a car), the other two are less valuable (traditionally goats). The object of the game is to guess which door has the car. If you guess right, you get to keep the car. Suppose you pick Door 1. Before opening the door you chose, Monty opens Door 3 and reveals a goat. Then Monty offers you the option to stick with your original choice or switch to the remaining unopened door. 
To maximize your chance of winning the car, should you stick with Door 1 or switch to Door 2? To answer this question, we have to make some assumptions about the behavior of the host: Monty always opens a door and offers you the option to switch. He never opens the door you picked or the door with the car. If you choose the door with the car, he chooses one of the other doors at random. Under these assumptions, you are better off switching. If you stick, you win $1/3$ of the time. If you switch, you win $2/3$ of the time. If you have not encountered this problem before, you might find that answer surprising. You would not be alone; many people have the strong intuition that it doesn't matter if you stick or switch. There are two doors left, they reason, so the chance that the car is behind Door A is 50%. But that is wrong. To see why, it can help to use a Bayes table. We start with three hypotheses: the car might be behind Door 1, 2, or 3. According to the statement of the problem, the prior probability for each door is 1/3. End of explanation """ table3['likelihood'] = Fraction(1, 2), 1, 0 table3 """ Explanation: The data is that Monty opened Door 3 and revealed a goat. So let's consider the probability of the data under each hypothesis: If the car is behind Door 1, Monty chooses Door 2 or 3 at random, so the probability he opens Door 3 is $1/2$. If the car is behind Door 2, Monty has to open Door 3, so the probability of the data under this hypothesis is 1. If the car is behind Door 3, Monty does not open it, so the probability of the data under this hypothesis is 0. Here are the likelihoods. End of explanation """ update(table3) table3 """ Explanation: Now that we have priors and likelihoods, we can use update to compute the posterior probabilities. End of explanation """ # Solution table4 = pd.DataFrame(index=['Normal', 'Trick']) table4['prior'] = 1/2 table4['likelihood'] = 1/2, 1 update(table4) table4 """ Explanation: After Monty opens Door 3, the posterior probability of Door 1 is $1/3$; the posterior probability of Door 2 is $2/3$. So you are better off switching from Door 1 to Door 2. As this example shows, our intuition for probability is not always reliable. Bayes's Theorem can help by providing a divide-and-conquer strategy: First, write down the hypotheses and the data. Next, figure out the prior probabilities. Finally, compute the likelihood of the data under each hypothesis. The Bayes table does the rest. Summary In this chapter we solved the Cookie Problem using Bayes's theorem explicitly and using a Bayes table. There's no real difference between these methods, but the Bayes table can make it easier to compute the total probability of the data, especially for problems with more than two hypotheses. Then we solved the Dice Problem, which we will see again in the next chapter, and the Monty Hall problem, which you might hope you never see again. If the Monty Hall problem makes your head hurt, you are not alone. But I think it demonstrates the power of Bayes's Theorem as a divide-and-conquer strategy for solving tricky problems. And I hope it provides some insight into why the answer is what it is. When Monty opens a door, he provides information we can use to update our belief about the location of the car. Part of the information is obvious. If he opens Door 3, we know the car is not behind Door 3. But part of the information is more subtle. Opening Door 3 is more likely if the car is behind Door 2, and less likely if it is behind Door 1. 
So the data is evidence in favor of Door 2. We will come back to this notion of evidence in future chapters. In the next chapter we'll extend the Cookie Problem and the Dice Problem, and take the next step from basic probability to Bayesian statistics. But first, you might want to work on the exercises. Exercises Exercise: Suppose you have two coins in a box. One is a normal coin with heads on one side and tails on the other, and one is a trick coin with heads on both sides. You choose a coin at random and see that one of the sides is heads. What is the probability that you chose the trick coin? End of explanation """ # Solution table5 = pd.DataFrame(index=['GG', 'GB', 'BG', 'BB']) table5['prior'] = 1/4 table5['likelihood'] = 1, 1, 1, 0 update(table5) table5 """ Explanation: Exercise: Suppose you meet someone and learn that they have two children. You ask if either child is a girl and they say yes. What is the probability that both children are girls? Hint: Start with four equally likely hypotheses. End of explanation """ # Solution # If the car is behind Door 1, Monty would always open Door 2 # If the car was behind Door 2, Monty would have opened Door 3 # If the car is behind Door 3, Monty would always open Door 2 table6 = pd.DataFrame(index=['Door 1', 'Door 2', 'Door 3']) table6['prior'] = 1/3 table6['likelihood'] = 1, 0, 1 update(table6) table6 # Solution # If the car is behind Door 1, Monty would have opened Door 2 # If the car is behind Door 2, Monty would always open Door 3 # If the car is behind Door 3, Monty would have opened Door 2 table7 = pd.DataFrame(index=['Door 1', 'Door 2', 'Door 3']) table7['prior'] = 1/3 table7['likelihood'] = 0, 1, 0 update(table7) table7 """ Explanation: Exercise: There are many variations of the Monty Hall problem. For example, suppose Monty always chooses Door 2 if he can, and only chooses Door 3 if he has to (because the car is behind Door 2). If you choose Door 1 and Monty opens Door 2, what is the probability the car is behind Door 3? If you choose Door 1 and Monty opens Door 3, what is the probability the car is behind Door 2? End of explanation """ # Solution # Hypotheses: # A: yellow from 94, green from 96 # B: yellow from 96, green from 94 table8 = pd.DataFrame(index=['A', 'B']) table8['prior'] = 1/2 table8['likelihood'] = 0.2*0.2, 0.14*0.1 update(table8) table8 """ Explanation: Exercise: M&M's are small candy-coated chocolates that come in a variety of colors. Mars, Inc., which makes M&M's, changes the mixture of colors from time to time. In 1995, they introduced blue M&M's. In 1994, the color mix in a bag of plain M&M's was 30\% Brown, 20\% Yellow, 20\% Red, 10\% Green, 10\% Orange, 10\% Tan. In 1996, it was 24\% Blue , 20\% Green, 16\% Orange, 14\% Yellow, 13\% Red, 13\% Brown. Suppose a friend of mine has two bags of M&M's, and he tells me that one is from 1994 and one from 1996. He won't tell me which is which, but he gives me one M&M from each bag. One is yellow and one is green. What is the probability that the yellow one came from the 1994 bag? Hint: The trick to this question is to define the hypotheses and the data carefully. End of explanation """
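As a recap of the pattern used throughout this chapter, here is the whole update packaged as a single helper. This is a sketch added for reference, not part of the original text, and the name bayes_table is made up here; applied to the cookie problem's priors and likelihoods it reproduces the 0.6 and 0.4 posteriors computed above.
# A compact helper capturing the Bayes-table steps used above (illustrative sketch).
import pandas as pd

def bayes_table(hypotheses, priors, likelihoods):
    # priors times likelihoods, normalized into posteriors
    table = pd.DataFrame(index=hypotheses)
    table['prior'] = priors
    table['likelihood'] = likelihoods
    table['unnorm'] = table['prior'] * table['likelihood']
    table['posterior'] = table['unnorm'] / table['unnorm'].sum()
    return table

# Cookie problem again: the posterior for Bowl 1 comes out to 0.6
print(bayes_table(['Bowl 1', 'Bowl 2'], [1/2, 1/2], [3/4, 1/2]))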
ProfessorKazarinoff/staticsite
content/code/matplotlib_plots/plot_bond_energy.ipynb
gpl-3.0
import numpy as np import matplotlib.pyplot as plt # if using a Jupyter notebook, include: %matplotlib inline """ Explanation: Atoms in solid materials like steel and aluminum are held together with chemical bonds. Atoms of solid materials are more stable when they are chemically bonded together, and it takes energy to separate atoms which are joined together with a chemical bond. The bonding energy associated with a chemical bond describes the amount of energy needed to separate two atoms from their equilibrium positions to a substantial distance apart. In this post, we'll review how to build a line plot with Python and Matplotlib that describes the bond energy compared to the separation distance of a Na<sup>+</sup> Cl<sup>-</sup> ion pair. Describe bond energy in terms of attractive and repulsive terms For an Na<sup>+</sup> Cl<sup>-</sup> pair, attractive and repulsive energies E<sub>a</sub> and E<sub>r</sub> depend on the distance $r$ between the Na<sup>+</sup>and Cl<sup>-</sup> ions. The attractive energy E<sub>a</sub> and the repulsive energy energy E<sub>r</sub> of an Na<sup>+</sup> Cl<sup>-</sup> pair depends on the inter-atomic distance, $r$ according to the following equations: $$E_a = \frac{- 1.436}{r} $$ $$ E_r = \frac{7.32 \times 10^{-6}}{r^8} $$ The total bond energy, E<sub>n</sub> is the sum of the attractive energy term E<sub>a</sub> and the repulsive energy term E<sub>r</sub>: $$ E_n = E_a + E_r $$ On a single plot, we can graph E<sub>a</sub>, E<sub>r</sub> and E<sub>n</sub> vs. $r$ using Python and Matplotlib. Our plot will conform to the following parameters: The range of $r$ values should be between 0.01 and 1.00 in increments of 0.01 The x-axis limits should be between $r$ = 0 and $r$ = 1.00 The y-axis limits should be between E = -10 and E = 10 We will also include a plot title and axis labels with units. Each of the three lines on the plot will also be incorporated in a legend. All of this can be accomplished with Python and Matplotlib. I use the Anaconda distribution of Python. See this post to see how to install Anaconda on Windows. If you use the Anaconda distribution of Python, Matplotlib is already installed. If you use Miniconda, a virtual environment or installed Python from Python.org, Matplotlib can be installed using the Anaconda Prompt or installed using a terminal and pip. The install command is below. ```text conda install matplotlib numpy ``` or text $ pip install matplotlib numpy Import NumPy and Matplotlib The first step to build our plot is to import NumPy and Matplotlib. We'll use NumPy to build an array that contains the distance values $r$ and the energies E<sub>a</sub>, E<sub>r</sub> and E<sub>n</sub>. We'll use Matplotlib to build the plot. The necessary import commands are below. I am using a Jupyter notebook to build the plot. To see how open a Jupyter notebook, see this post. If you are using a Jupyter notebook, include the line %matplotlib inline. If you build your code in a .py-file, make sure to leave this line out as %matplotlib inline is not valid Python code. End of explanation """ r = np.arange(0.01,1.01,0.01) # start, stop(exclusive), step """ Explanation: Create a NumPy array of r-values Next, we will create a NumPy array of $r$ values that starts at 0.01, ends at 1.00 and increments by 0.01. We'll create the array using NumPy's np.arange() function. The np.arange() function accepts three input arguments shown below. text np.arange(start,stop,step) Note that Python counting starts at zero and ends at n-1. 
So if we want the last value in the array to be 1.00, we need to set our stop value as 1.01. End of explanation """ Ea = -1.436/r # attractive energy term Er = (7.32e-6)/(r**(8)) # repulsive energy term En = Ea + Er # total energy """ Explanation: Create arrays for the attractive energy term, the repulsive energy term and the total energy Next we'll create arrays for the attractive energy term E<sub>a</sub>, the repulsive energy term E<sub>r</sub> and the total energy E<sub>n</sub>. We'll use the equations shown above and our array r to build these three arrays. Note that exponents are denoted in Python with the double asterisk symbol **, not the carrot symbol ^. End of explanation """ plt.plot(r,Ea,r,Er,r,En) plt.show() """ Explanation: Build a simple line plot Now that we have the four arrays, we can build a simple line plot with Matplotlib. Each line on the plot is created using a pair of arrays. The first line uses the pair r, Ea, the second line on the plot uses the pair r, Er, and the third line on the plot is described by the pair r, En. Matplotlib's plt.plot() method accepts pairs of arrays (or pairs of lists) as shown below. text plt.plot(x1,y1,x2,y2,x3,y3....) After the plt.plot() line, the command plt.show() shows the plot. These two lines have to be in the proper order. End of explanation """ plt.plot(r,Ea,r,Er,r,En) plt.xlim([0.00, 1.00]) plt.ylim([-10, 10]) plt.show() """ Explanation: We see a plot, but the plot looks like it only has one line on it. What's going on? We can only see one line because our y-axis has a very large range, between 0 and 1e10. If we limit the y-axis range, we'll be able to see all three line on the plot. Apply x-axis and y-axis limits Next, we'll use Matplotlib's plt.xlim() and plt.ylim() commands to apply axis limits to our plot. A list or an array needs to be passed to these two commands. The format is below. ```text plt.xlim([lower_limit, upper_limit]) plt.ylim([lower_limit, upper_limit]) ``` We'll limit our x-axis to start at 0.00 and end at 1.00. Our y-axis will be limited to between -10 and 10. The plt.plot() line needs to be before we customize the axis limits. The plt.show() line is included after we set the axis limits. End of explanation """ plt.plot(r,Ea,r,Er,r,En) plt.xlim([0.01, 1.00]) plt.ylim([-10, 10]) plt.title('Attractive, Replusive and Total energy of an Na+ Cl- ion pair') plt.xlabel('r (nm)') plt.ylabel('Energy (eV)') plt.show() """ Explanation: Great! We see a plot with three lines. We can also see the y-axis starts at -10 and ends at 10. Next we'll add axis labels and a title to our plot. Axis labels and title We can add axis labels to our plot with plt.xlabel() and plt.ylabel(). We need to pass a string enclosed in quotation marks ' ' as input arguments to these two methods. plt.title() adds a title to our plot. End of explanation """ plt.plot(r,Ea,r,Er,r,En) plt.xlim([0.01, 1.00]) plt.ylim([-10, 10]) plt.title('Attractive, Replusive and Total energy of an Na+ Cl- ion pair') plt.xlabel('r (nm)') plt.ylabel('Energy (eV)') plt.legend(['Ea','Er','En']) plt.show() """ Explanation: We see a plot with three lines that includes axis labels and a title. The final detail we'll to add to our plot is a legend. Add a legend We can add a legend to our plot with the plt.legend() command. A list of strings needs to be passed to plt.legend() in the form below. Note the parenthesis, square brackets, and quotation marks. 
text plt.legend(['Entry 1', 'Entry 2', 'Entry 3']) The plt.legend() line needs to be between the plt.plot() line and the plt.show() line just like our other plot customizations. End of explanation """ plt.plot(r,Ea,r,Er,r,En) plt.xlim([ 0.01, 1.00]) plt.ylim([-10, 10]) plt.title('Attractive, Replusive and Total energy of an Na+ Cl- ion pair') plt.xlabel('r (nm)') plt.ylabel('Energy (eV)') plt.legend(['Ea','Er','En']) plt.savefig('energy_vs_distance_curve.png', dpi=72) plt.show() """ Explanation: We see a plot with three lines, axis labels, a title, and a legend with three entries. There is one last step. In case we want to include the plot in a report or presentation, it would be nice to save the plot as an image file. Save the plot as an image file If we want to include the plot in a Word document or a PowerPoint presentation, we can just right-click on the plot and select copy image or save image as. We can also save our plot programmatically with the plt.savefig() method. The plt.savefig() method takes a couple arguments including the image file name and the image resolution. Matplotlib infers the filetype based on the filename. The general format is below. text plt.savefig('filename.ext', dpi=resolution) We'll save our plot as energy_vs_distance_curve.png with a resolution of 72 dpi (dots per inch). You could use a higher resolution in a printed document, such as200 dpi, but for a webpage 72 dpi is fine. The plt.savefig() line needs to be after the plt.plot() line and all the lines of customization, but before the plt.show() line. The code below builds the plot and saves the plot as a separate image file. End of explanation """
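If you prefer Matplotlib's object-oriented interface, the same figure can be built from an explicit Figure and Axes pair. This is only an equivalent sketch for comparison, not something the post above covers; the arrays are recomputed so the block stands alone, and the output filename is arbitrary.
# Same bond-energy plot using the object-oriented interface (illustrative sketch).
import numpy as np
import matplotlib.pyplot as plt

r = np.arange(0.01, 1.01, 0.01)
Ea = -1.436 / r            # attractive term
Er = 7.32e-6 / r**8        # repulsive term
En = Ea + Er               # total energy

fig, ax = plt.subplots()
ax.plot(r, Ea, label='Ea')
ax.plot(r, Er, label='Er')
ax.plot(r, En, label='En')
ax.set_xlim(0.01, 1.00)
ax.set_ylim(-10, 10)
ax.set_xlabel('r (nm)')
ax.set_ylabel('Energy (eV)')
ax.set_title('Attractive, Repulsive and Total energy of an Na+ Cl- ion pair')
ax.legend()
fig.savefig('energy_vs_distance_curve_oo.png', dpi=72)  # arbitrary filename
plt.show()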
tpin3694/tpin3694.github.io
python/function_basics.ipynb
mit
def print_max(x, y): # if a is larger than b if x > y: # then print this print(x, 'is maximum') # if a is equal to b elif x == y: # print this print(x, 'is equal to', y) # otherwise else: # print this print(y, 'is maximum') """ Explanation: Title: Function Basics Slug: function_basics Summary: Function Basics Date: 2016-05-01 12:00 Category: Python Tags: Basics Authors: Chris Albon Create Function Called print_max End of explanation """ print_max(3,4) """ Explanation: Run Function With Two Arguments End of explanation """ x = 50 """ Explanation: Note: By default, variables created within functions are local to the function. But you can create a global function that IS defined outside the function. Create Variable End of explanation """ # Create function def func(): # Create a global variable called x global x # Print this print('x is', x) # Set x to 2. x = 2 # Print this print('Changed global x to', x) """ Explanation: Create Function Called Func End of explanation """ func() """ Explanation: Run func() End of explanation """ x """ Explanation: Print x End of explanation """ # Create function def say(x, times = 1, times2 = 3): print(x * times, x * times2) # Run the function say() with the default values say('!') # Run the function say() with the non-default values of 5 and 10 say('!', 5, 10) """ Explanation: Create Function Say() Displaying x with default value of 1 End of explanation """ # Create a function called total() with three parameters def total(initial=5, *numbers, **keywords): # Create a variable called count that takes it's value from initial count = initial # for each item in numbers for number in numbers: # add count to that number count += number # for each item in keywords for key in keywords: # add count to keyword's value count += keywords[key] # return counts return count # Run function total(10, 1, 2, 3, vegetables=50, fruits=100) """ Explanation: VarArgs Parameters (i.e. unlimited number of parameters) * denotes that all positonal arguments from that point to next arg are used ** dnotes that all keyword arguments from that point to the next arg are used End of explanation """
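The * and ** syntax also works in the other direction, unpacking a tuple and a dict at the call site. The short sketch below is an addition for illustration, not part of the original notes; it re-defines total() from above so it runs on its own.
# Unpacking at the call site mirrors the * and ** syntax in the definition (sketch).
def total(initial=5, *numbers, **keywords):
    count = initial
    for number in numbers:
        count += number
    for key in keywords:
        count += keywords[key]
    return count

args = (10, 1, 2, 3)                         # positional arguments packed in a tuple
kwargs = {'vegetables': 50, 'fruits': 100}   # keyword arguments packed in a dict

# Equivalent to total(10, 1, 2, 3, vegetables=50, fruits=100) -> 166
print(total(*args, **kwargs))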
mathnathan/notebooks
.ipynb_checkpoints/semantic_similarity_with_tf_hub_universal_encoder-checkpoint.ipynb
mit
# Copyright 2018 The TensorFlow Hub Authors. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # ============================================================================== """ Explanation: Copyright 2018 The TensorFlow Hub Authors. Licensed under the Apache License, Version 2.0 (the "License"); End of explanation """ %%capture !pip3 install seaborn """ Explanation: Universal Sentence Encoder <table class="tfo-notebook-buttons" align="left"> <td> <a target="_blank" href="https://www.tensorflow.org/hub/tutorials/semantic_similarity_with_tf_hub_universal_encoder"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a> </td> <td> <a target="_blank" href="https://colab.research.google.com/github/tensorflow/hub/blob/master/examples/colab/semantic_similarity_with_tf_hub_universal_encoder.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a> </td> <td> <a target="_blank" href="https://github.com/tensorflow/hub/blob/master/examples/colab/semantic_similarity_with_tf_hub_universal_encoder.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View on GitHub</a> </td> <td> <a href="https://storage.googleapis.com/tensorflow_docs/hub/examples/colab/semantic_similarity_with_tf_hub_universal_encoder.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a> </td> <td> <a href="https://tfhub.dev/s?q=google%2Funiversal-sentence-encoder%2F4%20OR%20google%2Funiversal-sentence-encoder-large%2F5"><img src="https://www.tensorflow.org/images/hub_logo_32px.png" />See TF Hub models</a> </td> </table> This notebook illustrates how to access the Universal Sentence Encoder and use it for sentence similarity and sentence classification tasks. The Universal Sentence Encoder makes getting sentence level embeddings as easy as it has historically been to lookup the embeddings for individual words. The sentence embeddings can then be trivially used to compute sentence level meaning similarity as well as to enable better performance on downstream classification tasks using less supervised training data. Setup This section sets up the environment for access to the Universal Sentence Encoder on TF Hub and provides examples of applying the encoder to words, sentences, and paragraphs. End of explanation """ #@title Load the Universal Sentence Encoder's TF Hub module from absl import logging import tensorflow as tf import tensorflow_hub as hub import matplotlib.pyplot as plt import numpy as np import os import pandas as pd import re import seaborn as sns module_url = "https://tfhub.dev/google/universal-sentence-encoder/4" #@param ["https://tfhub.dev/google/universal-sentence-encoder/4", "https://tfhub.dev/google/universal-sentence-encoder-large/5"] model = hub.load(module_url) print ("module %s loaded" % module_url) def embed(input): return model(input) model(["I love things"])[0] hub.load? #@title Compute a representation for each message, showing various lengths supported. 
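# (Added note: the three inputs below, a single word, a sentence and a short
# paragraph, are each mapped to a fixed-length embedding vector, so the
# "Embedding size" printed for every message is the same.)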
word = "Elephant" sentence = "I am a sentence for which I would like to get its embedding." paragraph = ( "Universal Sentence Encoder embeddings also support short paragraphs. " "There is no hard limit on how long the paragraph is. Roughly, the longer " "the more 'diluted' the embedding will be.") messages = [word, sentence, paragraph] # Reduce logging output. logging.set_verbosity(logging.ERROR) message_embeddings = embed(messages) for i, message_embedding in enumerate(np.array(message_embeddings).tolist()): print("Message: {}".format(messages[i])) print("Embedding size: {}".format(len(message_embedding))) message_embedding_snippet = ", ".join( (str(x) for x in message_embedding[:3])) print("Embedding: [{}, ...]\n".format(message_embedding_snippet)) """ Explanation: More detailed information about installing Tensorflow can be found at https://www.tensorflow.org/install/. End of explanation """ def plot_similarity(labels, features, rotation): corr = np.inner(features, features) sns.set(font_scale=1.2) g = sns.heatmap( corr, xticklabels=labels, yticklabels=labels, vmin=0, vmax=1, cmap="YlOrRd") g.set_xticklabels(labels, rotation=rotation) g.set_title("Semantic Textual Similarity") def run_and_plot(messages_): message_embeddings_ = embed(messages_) plot_similarity(messages_, message_embeddings_, 90) """ Explanation: Semantic Textual Similarity Task Example The embeddings produced by the Universal Sentence Encoder are approximately normalized. The semantic similarity of two sentences can be trivially computed as the inner product of the encodings. End of explanation """ messages = [ # Smartphones "I like my phone", "My phone is not good.", "Your cellphone looks great.", # Weather "Will it snow tomorrow?", "Recently a lot of hurricanes have hit the US", "Global warming is real", # Food and health "An apple a day, keeps the doctors away", "Eating strawberries is healthy", "Is paleo better than keto?", # Asking about age "How old are you?", "How many years have you been alive?", ] run_and_plot(messages) """ Explanation: Similarity Visualized Here we show the similarity in a heat map. The final graph is a 9x9 matrix where each entry [i, j] is colored based on the inner product of the encodings for sentence i and j. End of explanation """ import pandas import scipy import math import csv sts_dataset = tf.keras.utils.get_file( fname="Stsbenchmark.tar.gz", origin="http://ixa2.si.ehu.es/stswiki/images/4/48/Stsbenchmark.tar.gz", extract=True) sts_dev = pandas.read_table( os.path.join(os.path.dirname(sts_dataset), "stsbenchmark", "sts-dev.csv"), error_bad_lines=False, skip_blank_lines=True, usecols=[4, 5, 6], names=["sim", "sent_1", "sent_2"]) sts_test = pandas.read_table( os.path.join( os.path.dirname(sts_dataset), "stsbenchmark", "sts-test.csv"), error_bad_lines=False, quoting=csv.QUOTE_NONE, skip_blank_lines=True, usecols=[4, 5, 6], names=["sim", "sent_1", "sent_2"]) # cleanup some NaN values in sts_dev sts_dev = sts_dev[[isinstance(s, str) for s in sts_dev['sent_2']]] """ Explanation: Evaluation: STS (Semantic Textual Similarity) Benchmark The STS Benchmark provides an intristic evaluation of the degree to which similarity scores computed using sentence embeddings align with human judgements. The benchmark requires systems to return similarity scores for a diverse selection of sentence pairs. Pearson correlation is then used to evaluate the quality of the machine similarity scores against human judgements. 
Download data End of explanation """ sts_data = sts_dev #@param ["sts_dev", "sts_test"] {type:"raw"} def run_sts_benchmark(batch): sts_encode1 = tf.nn.l2_normalize(embed(tf.constant(batch['sent_1'].tolist())), axis=1) sts_encode2 = tf.nn.l2_normalize(embed(tf.constant(batch['sent_2'].tolist())), axis=1) cosine_similarities = tf.reduce_sum(tf.multiply(sts_encode1, sts_encode2), axis=1) clip_cosine_similarities = tf.clip_by_value(cosine_similarities, -1.0, 1.0) scores = 1.0 - tf.acos(clip_cosine_similarities) / math.pi """Returns the similarity scores""" return scores dev_scores = sts_data['sim'].tolist() scores = [] for batch in np.array_split(sts_data, 10): scores.extend(run_sts_benchmark(batch)) pearson_correlation = scipy.stats.pearsonr(scores, dev_scores) print('Pearson correlation coefficient = {0}\np-value = {1}'.format( pearson_correlation[0], pearson_correlation[1])) try: a /= 0 except Exception as e: err = e str(err) """ Explanation: Evaluate Sentence Embeddings End of explanation """
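For quick spot checks outside the benchmark, the same angular-similarity formula used in run_sts_benchmark can be applied to a single sentence pair. This is an illustrative sketch rather than part of the original notebook; it assumes the embed function loaded earlier, and the helper name pair_similarity is made up here.
# Score one sentence pair with the angular-similarity formula used above (sketch).
import math
import numpy as np

def pair_similarity(sentence_a, sentence_b):
    emb = np.array(embed([sentence_a, sentence_b]))  # assumes embed() from earlier cells
    a, b = emb[0], emb[1]
    cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    cos = np.clip(cos, -1.0, 1.0)
    return 1.0 - math.acos(cos) / math.pi

print(pair_similarity("How old are you?", "What is your age?"))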
GoogleCloudPlatform/healthcare
datathon/nusdatathon18/tutorials/image_preprocessing.ipynb
apache-2.0
from google.colab import files from io import BytesIO # Display images. from IPython.display import display from PIL import Image, ImageEnhance """ Explanation: Copyright 2018 Google Inc. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at https://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. Image Preprocessing In this tutorial, we are going to use the Pillow python lirbrary to show how to apply basic transformations on images. You can safely skip this tutorial if you are already familiar with Pillow. First of all, let's import all the libraries we need. End of explanation """ # Please assign the real file name of the image to image_name. image_name = '' uploaded_files = files.upload() size = (500, 500) # (width, height) image = Image.open(BytesIO(uploaded_files[image_name])).resize(size) display(image) """ Explanation: Next, let's upload a PNG image which we will apply all kinds of transformations on, and resize it to 500x500. End of explanation """ image = image.transpose(Image.ROTATE_90) display(image) """ Explanation: Now that we have the image uploaded, let's try rotate the image by 90 degrees cunter-clockwise. End of explanation """ image = image.transpose(Image.FLIP_LEFT_RIGHT) display(image) """ Explanation: Now let's flip the image horizontally. End of explanation """ contrast = ImageEnhance.Contrast(image) image = contrast.enhance(1.2) display(image) """ Explanation: As a next step, let's adjust the contrast of the image. The base value is 1 and here we are increasing it by 20%. End of explanation """ brightness = ImageEnhance.Brightness(image) image = brightness.enhance(1.1) display(image) sharpness = ImageEnhance.Sharpness(image) image = sharpness.enhance(1.2) display(image) """ Explanation: And brightness and sharpness. End of explanation """
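The individual transformations above can also be chained into one helper so several images can be processed the same way. This is a sketch added for illustration, not part of the original tutorial; the function name preprocess, the enhancement factors and the output filename are arbitrary choices, and it assumes the image object loaded earlier.
# Chain the resize, flip and enhancement steps into one reusable helper (sketch).
from PIL import Image, ImageEnhance

def preprocess(img, contrast=1.2, brightness=1.1, sharpness=1.2, size=(500, 500)):
    # resize, rotate, flip, then apply the three enhancement factors used above
    img = img.resize(size)
    img = img.transpose(Image.ROTATE_90)
    img = img.transpose(Image.FLIP_LEFT_RIGHT)
    img = ImageEnhance.Contrast(img).enhance(contrast)
    img = ImageEnhance.Brightness(img).enhance(brightness)
    img = ImageEnhance.Sharpness(img).enhance(sharpness)
    return img

processed = preprocess(image)        # assumes `image` from the cells above
processed.save('preprocessed.png')   # arbitrary output filename
display(processed)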
MLIME/12aMostra
src/Keras Tutorial.ipynb
gpl-3.0
import util import numpy as np import keras from keras.utils import np_utils X_train, y_train, X_test, y_test = util.load_mnist_dataset() y_train_labels = np.array(util.get_label_names(y_train)) # Converte em one-hot para treino y_train = np_utils.to_categorical(y_train, 10) y_test = np_utils.to_categorical(y_test, 10) #Mostra algumas imagens examples = np.random.randint(0, X_train.shape[0] - 9, 9) image_shape = (X_train.shape[2], X_train.shape[3]) util.plot9images(X_train[examples], y_train_labels[examples], image_shape) """ Explanation: Keras Tutorial http://keras.io Esse tutorial é uma versão simplificada do tutorial disponível em: https://github.com/MLIME/Frameworks/tree/master/Keras O que é Keras? Keras is a high-level neural networks API, written in Python and capable of running on top of either TensorFlow or Theano. It was developed with a focus on enabling fast experimentation. Being able to go from idea to result with the least possible delay is key to doing good research. Esse tutorial é dividido em três partes Funcionamento Básico do Keras Exemplo de Deep Feedforward Network Exemplo de Convolutional Neural Network 1. Funcionamento básico do Keras Backends Theano ou TensorFlow (CPU ou GPU) Tipos de Layers Core layers: Dense, Activation, Dropout, Flatten Convolutional layers: ConvXD, CroppingXD, UpSamplingXD Pooling Layers: MaxPoolingXD, AveragePoolingXD Custom layers can be created Funções de perda categorical_crossentropy sparse_categorical_crossentropy binary_crossentropy mean_squared_error mean_absolute_error Otimizadores SGD RMSprop Adagrad Adadelta Adam Adamax Ativações softmax elu relu tanh sigmoid hard_sigmoid linear Inicializadores Zeros RandomNormal RandomUniform TruncatedNormal VarianceScaling Orthogonal Identity lecun_uniform glorot_normal glorot_uniform he_normal he_uniform Inicialização Importamos bibliotecas e carregamos os dados End of explanation """ #Achatamos imagem em um vetor X_train = X_train.reshape(X_train.shape[0], np.prod(X_train.shape[1:])) X_test = X_test.reshape(X_test.shape[0], np.prod(X_test.shape[1:])) #Sequential é a API que permite construirmos um modelo ao adicionar incrementalmente layers from keras.models import Sequential from keras.layers import Dense, Flatten from keras.optimizers import SGD DFN = Sequential() DFN.add(Dense(128, input_shape=(28*28,), activation='relu')) DFN.add(Dense(128, activation='relu')) DFN.add(Dense(128, activation='relu')) DFN.add(Dense(10, activation='softmax')) #optim = SGD(lr=0.01 ) - pode construir o otimizador por fora para definir parametros DFN.compile(loss='categorical_crossentropy', optimizer='sgd', #ou usar os parâmetros padrão metrics=['accuracy']) DFN.fit(X_train, y_train, batch_size=32, epochs=2, validation_split=0.2, verbose=1) print('\nAccuracy: %.2f' % DFN.evaluate(X_test, y_test, verbose=1)[1]) """ Explanation: 2. Construindo DFNs com Keras Reshaping MNIST data End of explanation """ X_train = X_train.reshape(X_train.shape[0], 28, 28, 1) X_test = X_test.reshape(X_test.shape[0], 28, 28, 1) """ Explanation: 3. 
Construindo CNNs com Keras Reshaping MNIST data End of explanation """ from keras.models import Sequential from keras.layers import Dense, Dropout, Flatten from keras.layers import MaxPooling2D from keras.layers.convolutional import Conv2D CNN = Sequential() CNN.add(Conv2D(32, (3, 3), padding='same', activation='relu', input_shape=(28, 28, 1),)) CNN.add(MaxPooling2D(pool_size=(2, 2))) CNN.add(Conv2D(32, (3, 3), padding='same', activation='relu')) CNN.add(MaxPooling2D(pool_size=(2, 2))) CNN.add(Dropout(0.25)) CNN.add(Flatten()) CNN.add(Dense(256, activation='relu')) CNN.add(Dropout(0.5)) CNN.add(Dense(10, activation='softmax')) CNN.compile(loss='categorical_crossentropy', optimizer='sgd', metrics=['accuracy']) CNN.fit(X_train, y_train, batch_size=32, epochs=2, validation_split=0.2, verbose=1) print('\nAccuracy: %.2f' % CNN.evaluate(X_test, y_test, verbose=1)[1]) """ Explanation: Compilando e ajustando CNN End of explanation """ cnn_pred = CNN.predict(X_test, verbose=1) dfn_pred = DFN.predict(X_test.reshape((X_test.shape[0], np.prod(X_test.shape[1:]))), verbose=1) cnn_pred = np.array(list(map(np.argmax, cnn_pred))) dfn_pred = np.array(list(map(np.argmax, dfn_pred))) y_pred = np.array(list(map(np.argmax, y_test))) util.plotconfusion(util.get_label_names(y_pred), util.get_label_names(dfn_pred)) util.plotconfusion(util.get_label_names(y_pred), util.get_label_names(cnn_pred)) """ Explanation: Comparamos resultados: End of explanation """ cnn_missed = cnn_pred != y_pred dfn_missed = dfn_pred != y_pred cnn_and_dfn_missed = np.logical_and(dfn_missed, cnn_missed) util.plot_missed_examples(X_test, y_pred, dfn_missed, dfn_pred) util.plot_missed_examples(X_test, y_pred, cnn_missed, cnn_pred) util.plot_missed_examples(X_test, y_pred, cnn_and_dfn_missed) """ Explanation: Vamos observar alguns exemplos mal classificados: End of explanation """
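Beyond the confusion matrices, a quick per-class accuracy breakdown makes the comparison between the two models concrete. The snippet below is an added sketch, not part of the original tutorial; it reuses the y_pred, dfn_pred and cnn_pred arrays computed above.
# Per-class test accuracy for the DFN and the CNN (illustrative sketch).
import numpy as np

for name, pred in [('DFN', dfn_pred), ('CNN', cnn_pred)]:
    print(name)
    for digit in range(10):
        mask = (y_pred == digit)                 # test examples of this class
        acc = np.mean(pred[mask] == digit)       # fraction predicted correctly
        print('  class %d: %.3f' % (digit, acc))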
AstroHackWeek/AstroHackWeek2016
notebook-tutorial/notebooks/07-Some_basics.ipynb
mit
# Create a [list] days = ['Monday', # multiple lines 'Tuesday', # acceptable 'Wednesday', 'Thursday', 'Friday', 'Saturday', 'Sunday', ] # trailing comma is fine! days # Simple for-loop for day in days: print(day) # Double for-loop for day in days: for letter in day: print(letter) print(days) print(*days) # Double for-loop for day in days: for letter in day: print(letter) print() for day in days: for letter in day: print(letter.lower()) """ Explanation: Github https://github.com/jbwhit/OSCON-2015/commit/6750b962606db27f69162b802b5de4f84ac916d5 A few Python Basics End of explanation """ length_of_days = [len(day) for day in days] length_of_days letters = [letter for day in days for letter in day] print(letters) letters = [letter for day in days for letter in day] print(letters) [num for num in xrange(10) if num % 2] [num for num in xrange(10) if num % 2 else "doesn't work"] [num if num % 2 else "works" for num in xrange(10)] [num for num in xrange(10)] sorted_letters = sorted([x.lower() for x in letters]) print(sorted_letters) unique_sorted_letters = sorted(set(sorted_letters)) print("There are", len(unique_sorted_letters), "unique letters in the days of the week.") print("They are:", ''.join(unique_sorted_letters)) print("They are:", '; '.join(unique_sorted_letters)) def first_three(input_string): """Takes an input string and returns the first 3 characters.""" return input_string[:3] import numpy as np # tab np.linspace() [first_three(day) for day in days] def last_N(input_string, number=2): """Takes an input string and returns the last N characters.""" return input_string[-number:] [last_N(day, 4) for day in days if len(day) > 6] from math import pi print([str(round(pi, i)) for i in xrange(2, 9)]) list_of_lists = [[i, round(pi, i)] for i in xrange(2, 9)] print(list_of_lists) for sublist in list_of_lists: print(sublist) # Let this be a warning to you! # If you see python code like the following in your work: for x in range(len(list_of_lists)): print("Decimals:", list_of_lists[x][0], "expression:", list_of_lists[x][1]) print(list_of_lists) # Change it to look more like this: for decimal, rounded_pi in list_of_lists: print("Decimals:", decimal, "expression:", rounded_pi) # enumerate if you really need the index for index, day in enumerate(days): print(index, day) """ Explanation: List Comprehensions End of explanation """ from IPython.display import IFrame, HTML HTML('<iframe src=https://en.wikipedia.org/wiki/Hash_table width=100% height=550></iframe>') fellows = ["Jonathan", "Alice", "Bob"] universities = ["UCSD", "UCSD", "Vanderbilt"] for x, y in zip(fellows, universities): print(x, y) # Don't do this {x: y for x, y in zip(fellows, universities)} # Doesn't work like you might expect {zip(fellows, universities)} dict(zip(fellows, universities)) fellows fellow_dict = {fellow.lower(): university for fellow, university in zip(fellows, universities)} fellow_dict fellow_dict['bob'] rounded_pi = {i:round(pi, i) for i in xrange(2, 9)} rounded_pi[5] sum([i ** 2 for i in range(10)]) sum(i ** 2 for i in range(10)) huh = (i ** 2 for i in range(10)) huh.next() """ Explanation: Dictionaries Python dictionaries are awesome. They are hash tables and have a lot of neat CS properties. Learn and use them well. End of explanation """
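A few of the cells above use Python 2 idioms (xrange and the generator .next() method), so they will raise errors under Python 3. The block below is an added reference sketch showing the Python 3 equivalents; it is not part of the original notebook.
# Python 3 equivalents of the Python 2 idioms used above (reference sketch).
odds = [num for num in range(10) if num % 2]   # range() replaces xrange()
print(odds)

squares = (i ** 2 for i in range(10))          # generator expression
print(next(squares))                           # next(gen) replaces gen.next()

print(sum(i ** 2 for i in range(10)))          # still 285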
mne-tools/mne-tools.github.io
0.13/_downloads/plot_decoding_unsupervised_spatial_filter.ipynb
bsd-3-clause
# Authors: Jean-Remi King <jeanremi.king@gmail.com> # Asish Panda <asishrocks95@gmail.com> # # License: BSD (3-clause) import numpy as np import matplotlib.pyplot as plt import mne from mne.datasets import sample from mne.decoding import UnsupervisedSpatialFilter from sklearn.decomposition import PCA, FastICA print(__doc__) # Preprocess data data_path = sample.data_path() # Load and filter data, set up epochs raw_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw.fif' event_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw-eve.fif' tmin, tmax = -0.1, 0.3 event_id = dict(aud_l=1, aud_r=2, vis_l=3, vis_r=4) raw = mne.io.read_raw_fif(raw_fname, preload=True) raw.filter(1, 20) events = mne.read_events(event_fname) picks = mne.pick_types(raw.info, meg=False, eeg=True, stim=False, eog=False, exclude='bads') epochs = mne.Epochs(raw, events, event_id, tmin, tmax, proj=False, picks=picks, baseline=None, preload=True, add_eeg_ref=False, verbose=False) X = epochs.get_data() """ Explanation: Analysis of evoked response using ICA and PCA reduction techniques This example computes PCA and ICA of evoked or epochs data. Then the PCA / ICA components, a.k.a. spatial filters, are used to transform the channel data to new sources / virtual channels. The output is visualized on the average of all the epochs. End of explanation """ pca = UnsupervisedSpatialFilter(PCA(30), average=False) pca_data = pca.fit_transform(X) ev = mne.EvokedArray(np.mean(pca_data, axis=0), mne.create_info(30, epochs.info['sfreq'], ch_types='eeg'), tmin=tmin) ev.plot(show=False, window_title="PCA") """ Explanation: Transform data with PCA computed on the average ie evoked response End of explanation """ ica = UnsupervisedSpatialFilter(FastICA(30), average=False) ica_data = ica.fit_transform(X) ev1 = mne.EvokedArray(np.mean(ica_data, axis=0), mne.create_info(30, epochs.info['sfreq'], ch_types='eeg'), tmin=tmin) ev1.plot(show=False, window_title='ICA') plt.show() """ Explanation: Transform data with ICA computed on the raw epochs (no averaging) End of explanation """
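UnsupervisedSpatialFilter accepts any scikit-learn style transformer with fit and transform methods, so other decompositions can be dropped in the same way. The block below is an added sketch using FactorAnalysis as an example; it is not part of the original script and assumes the X, epochs and tmin variables defined above.
# Same wrapper with a different scikit-learn decomposition (illustrative sketch).
import numpy as np
import matplotlib.pyplot as plt
import mne
from mne.decoding import UnsupervisedSpatialFilter
from sklearn.decomposition import FactorAnalysis

fa = UnsupervisedSpatialFilter(FactorAnalysis(30), average=False)
fa_data = fa.fit_transform(X)
ev_fa = mne.EvokedArray(np.mean(fa_data, axis=0),
                        mne.create_info(30, epochs.info['sfreq'], ch_types='eeg'),
                        tmin=tmin)
ev_fa.plot(show=False, window_title='Factor Analysis')
plt.show()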
oscarmore2/deep-learning-study
intro-to-rnns/Anna_KaRNNa_Exercises.ipynb
mit
import time from collections import namedtuple import numpy as np import tensorflow as tf """ Explanation: Anna KaRNNa In this notebook, we'll build a character-wise RNN trained on Anna Karenina, one of my all-time favorite books. It'll be able to generate new text based on the text from the book. This network is based off of Andrej Karpathy's post on RNNs and implementation in Torch. Also, some information here at r2rt and from Sherjil Ozair on GitHub. Below is the general architecture of the character-wise RNN. <img src="assets/charseq.jpeg" width="500"> End of explanation """ with open('jinpingmei.txt', 'r') as f: text=f.read() vocab = sorted(set(text)) print(vocab) vocab_to_int = {c: i for i, c in enumerate(vocab)} int_to_vocab = dict(enumerate(vocab)) encoded = np.array([vocab_to_int[c] for c in text], dtype=np.int32) """ Explanation: First we'll load the text file and convert it into integers for our network to use. Here I'm creating a couple dictionaries to convert the characters to and from integers. Encoding the characters as integers makes it easier to use as input in the network. End of explanation """ text[:100] """ Explanation: Let's check out the first 100 characters, make sure everything is peachy. According to the American Book Review, this is the 6th best first line of a book ever. End of explanation """ encoded[:100] """ Explanation: And we can see the characters encoded as integers. End of explanation """ len(vocab) """ Explanation: Since the network is working with individual characters, it's similar to a classification problem in which we are trying to predict the next character from the previous text. Here's how many 'classes' our network has to pick from. End of explanation """ def get_batches(arr, n_seqs, n_steps): '''Create a generator that returns batches of size n_seqs x n_steps from arr. Arguments --------- arr: Array you want to make batches from n_seqs: Batch size, the number of sequences per batch n_steps: Number of sequence steps per batch ''' # Get the number of characters per batch and number of batches we can make characters_per_batch = n_seqs * n_steps n_batches = len(arr)//characters_per_batch # Keep only enough characters to make full batches arr = arr[:n_batches * characters_per_batch] # Reshape into n_seqs rows arr = arr.reshape((n_seqs, -1)) for n in range(0, arr.shape[1], n_steps): # The features x = arr[:, n:n+n_steps] # The targets, shifted by one y = np.zeros_like(x) y[:, :-1], y[:, -1] = x[:, 1:], x[:, 0] yield x, y """ Explanation: Making training mini-batches Here is where we'll make our mini-batches for training. Remember that we want our batches to be multiple sequences of some desired number of sequence steps. Considering a simple example, our batches would look like this: <img src="assets/sequence_batching@1x.png" width=500px> <br> We have our text encoded as integers as one long array in encoded. Let's create a function that will give us an iterator for our batches. I like using generator functions to do this. Then we can pass encoded into this function and get our batch generator. The first thing we need to do is discard some of the text so we only have completely full batches. Each batch contains $N \times M$ characters, where $N$ is the batch size (the number of sequences) and $M$ is the number of steps. Then, to get the number of batches we can make from some array arr, you divide the length of arr by the batch size. Once you know the number of batches and the batch size, you can get the total number of characters to keep. 
After that, we need to split arr into $N$ sequences. You can do this using arr.reshape(size) where size is a tuple containing the dimensions sizes of the reshaped array. We know we want $N$ sequences (n_seqs below), let's make that the size of the first dimension. For the second dimension, you can use -1 as a placeholder in the size, it'll fill up the array with the appropriate data for you. After this, you should have an array that is $N \times (M * K)$ where $K$ is the number of batches. Now that we have this array, we can iterate through it to get our batches. The idea is each batch is a $N \times M$ window on the array. For each subsequent batch, the window moves over by n_steps. We also want to create both the input and target arrays. Remember that the targets are the inputs shifted over one character. You'll usually see the first input character used as the last target character, so something like this: python y[:, :-1], y[:, -1] = x[:, 1:], x[:, 0] where x is the input batch and y is the target batch. The way I like to do this window is use range to take steps of size n_steps from $0$ to arr.shape[1], the total number of steps in each sequence. That way, the integers you get from range always point to the start of a batch, and each window is n_steps wide. Exercise: Write the code for creating batches in the function below. The exercises in this notebook will not be easy. I've provided a notebook with solutions alongside this notebook. If you get stuck, checkout the solutions. The most important thing is that you don't copy and paste the code into here, type out the solution code yourself. End of explanation """ batches = get_batches(encoded, 10, 50) x, y = next(batches) print('x\n', x[:10, :10]) print('\ny\n', y[:10, :10]) """ Explanation: Now I'll make my data sets and we can check out what's going on here. Here I'm going to use a batch size of 10 and 50 sequence steps. End of explanation """ def build_inputs(batch_size, num_steps): ''' Define placeholders for inputs, targets, and dropout Arguments --------- batch_size: Batch size, number of sequences per batch num_steps: Number of sequence steps in a batch ''' # Declare placeholders we'll feed into the graph inputs = tf.placeholder(tf.int32, [batch_size * num_steps], name='inputs') targets = tf.placeholder(tf.int32, [batch_size * num_steps], name='targets') # Keep probability placeholder for drop out layers keep_prob = tf.placeholder(tf.float32, name='keep_prob') return inputs, targets, keep_prob """ Explanation: If you implemented get_batches correctly, the above output should look something like ``` x [[55 63 69 22 6 76 45 5 16 35] [ 5 69 1 5 12 52 6 5 56 52] [48 29 12 61 35 35 8 64 76 78] [12 5 24 39 45 29 12 56 5 63] [ 5 29 6 5 29 78 28 5 78 29] [ 5 13 6 5 36 69 78 35 52 12] [63 76 12 5 18 52 1 76 5 58] [34 5 73 39 6 5 12 52 36 5] [ 6 5 29 78 12 79 6 61 5 59] [ 5 78 69 29 24 5 6 52 5 63]] y [[63 69 22 6 76 45 5 16 35 35] [69 1 5 12 52 6 5 56 52 29] [29 12 61 35 35 8 64 76 78 28] [ 5 24 39 45 29 12 56 5 63 29] [29 6 5 29 78 28 5 78 29 45] [13 6 5 36 69 78 35 52 12 43] [76 12 5 18 52 1 76 5 58 52] [ 5 73 39 6 5 12 52 36 5 78] [ 5 29 78 12 79 6 61 5 59 63] [78 69 29 24 5 6 52 5 63 76]] `` although the exact numbers will be different. Check to make sure the data is shifted over one step fory`. Building the model Below is where you'll build the network. We'll break it up into parts so it's easier to reason about each bit. Then we can connect them up into the whole network. 
<img src="assets/charRNN.png" width=500px> Inputs First off we'll create our input placeholders. As usual we need placeholders for the training data and the targets. We'll also create a placeholder for dropout layers called keep_prob. This will be a scalar, that is a 0-D tensor. To make a scalar, you create a placeholder without giving it a size. Exercise: Create the input placeholders in the function below. End of explanation """ def build_lstm(lstm_size, num_layers, batch_size, keep_prob): ''' Build LSTM cell. Arguments --------- keep_prob: Scalar tensor (tf.placeholder) for the dropout keep probability lstm_size: Size of the hidden layers in the LSTM cells num_layers: Number of LSTM layers batch_size: Batch size ''' ### Build the LSTM Cell def build_cell(lstm_size, keep_prob): # Use a basic LSTM cell lstm = tf.contrib.rnn.BasicLSTMCell(lstm_size) # Add dropout to the cell drop = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob) return drop # Stack up multiple LSTM layers, for deep learning cell = tf.contrib.rnn.MultiRNNCell([build_cell(lstm_size, keep_prob) for _ in range(num_layers)]) initial_state = cell.zero_state(batch_size, tf.float32) return cell, initial_state """ Explanation: LSTM Cell Here we will create the LSTM cell we'll use in the hidden layer. We'll use this cell as a building block for the RNN. So we aren't actually defining the RNN here, just the type of cell we'll use in the hidden layer. We first create a basic LSTM cell with python lstm = tf.contrib.rnn.BasicLSTMCell(num_units) where num_units is the number of units in the hidden layers in the cell. Then we can add dropout by wrapping it with python tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob) You pass in a cell and it will automatically add dropout to the inputs or outputs. Finally, we can stack up the LSTM cells into layers with tf.contrib.rnn.MultiRNNCell. With this, you pass in a list of cells and it will send the output of one cell into the next cell. Previously with TensorFlow 1.0, you could do this python tf.contrib.rnn.MultiRNNCell([cell]*num_layers) This might look a little weird if you know Python well because this will create a list of the same cell object. However, TensorFlow 1.0 will create different weight matrices for all cell objects. But, starting with TensorFlow 1.1 you actually need to create new cell objects in the list. To get it to work in TensorFlow 1.1, it should look like ```python def build_cell(num_units, keep_prob): lstm = tf.contrib.rnn.BasicLSTMCell(num_units) drop = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob) return drop tf.contrib.rnn.MultiRNNCell([build_cell(num_units, keep_prob) for _ in range(num_layers)]) ``` Even though this is actually multiple LSTM cells stacked on each other, you can treat the multiple layers as one cell. We also need to create an initial cell state of all zeros. This can be done like so python initial_state = cell.zero_state(batch_size, tf.float32) Below, we implement the build_lstm function to create these LSTM cells and the initial state. End of explanation """ def build_output(lstm_output, in_size, out_size): ''' Build a softmax layer, return the softmax output and logits. Arguments --------- lstm_output: List of output tensors from the LSTM layer in_size: Size of the input tensor, for example, size of the LSTM cells out_size: Size of this softmax layer ''' # Reshape output so it's a bunch of rows, one row for each step for each sequence. 
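    # (Added note: lstm_output comes out of tf.nn.dynamic_rnn as an N x M x L tensor;
    # flattening it to (N*M) x L below lets a single softmax weight matrix serve
    # every step of every sequence.)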
# Concatenate lstm_output over axis 1 (the columns) seq_output = tf.concat(lstm_output, axis=1) # Reshape seq_output to a 2D tensor with lstm_size columns x = tf.reshape(seq_output, [-1, in_size]) # Connect the RNN outputs to a softmax layer with tf.variable_scope('softmax'): # Create the weight and bias variables here softmax_w = tf.Variable(tf.truncated_normal((in_size, out_size), stddev=0.1)) softmax_b = tf.Variable(tf.zeros(out_size)) # Since output is a bunch of rows of RNN cell outputs, logits will be a bunch # of rows of logit outputs, one for each step and sequence logits = tf.matmul(x, softmax_w) + softmax_b # Use softmax to get the probabilities for predicted characters out = tf.nn.softmax(logits, name='predictions') return out, logits """ Explanation: RNN Output Here we'll create the output layer. We need to connect the output of the RNN cells to a full connected layer with a softmax output. The softmax output gives us a probability distribution we can use to predict the next character, so we want this layer to have size $C$, the number of classes/characters we have in our text. If our input has batch size $N$, number of steps $M$, and the hidden layer has $L$ hidden units, then the output is a 3D tensor with size $N \times M \times L$. The output of each LSTM cell has size $L$, we have $M$ of them, one for each sequence step, and we have $N$ sequences. So the total size is $N \times M \times L$. We are using the same fully connected layer, the same weights, for each of the outputs. Then, to make things easier, we should reshape the outputs into a 2D tensor with shape $(M * N) \times L$. That is, one row for each sequence and step, where the values of each row are the output from the LSTM cells. We get the LSTM output as a list, lstm_output. First we need to concatenate this whole list into one array with tf.concat. Then, reshape it (with tf.reshape) to size $(M * N) \times L$. One we have the outputs reshaped, we can do the matrix multiplication with the weights. We need to wrap the weight and bias variables in a variable scope with tf.variable_scope(scope_name) because there are weights being created in the LSTM cells. TensorFlow will throw an error if the weights created here have the same names as the weights created in the LSTM cells, which they will be default. To avoid this, we wrap the variables in a variable scope so we can give them unique names. Exercise: Implement the output layer in the function below. End of explanation """ def build_loss(logits, targets, lstm_size, num_classes): ''' Calculate the loss from the logits and the targets. Arguments --------- logits: Logits from final fully connected layer targets: Targets for supervised learning lstm_size: Number of LSTM hidden units num_classes: Number of classes in targets ''' # One-hot encode targets and reshape to match logits, one row per sequence per step y_one_hot = tf.one_hot(targets, num_classes) y_reshaped = tf.reshape(y_one_hot, logits.getshape()) # Softmax cross entropy loss loss = tf.nn.softmax_cross_entropy_with_logits(logits=logits, label=y_reshaped) loss = tf.reduce_mean(loss) return loss """ Explanation: Training loss Next up is the training loss. We get the logits and targets and calculate the softmax cross-entropy loss. First we need to one-hot encode the targets, we're getting them as encoded characters. Then, reshape the one-hot targets so it's a 2D tensor with size $(MN) \times C$ where $C$ is the number of classes/characters we have. 
Remember that we reshaped the LSTM outputs and ran them through a fully connected layer with $C$ units. So our logits will also have size $(MN) \times C$. Then we run the logits and targets through tf.nn.softmax_cross_entropy_with_logits and find the mean to get the loss. Exercise: Implement the loss calculation in the function below. End of explanation """ def build_optimizer(loss, learning_rate, grad_clip): ''' Build optmizer for training, using gradient clipping. Arguments: loss: Network loss learning_rate: Learning rate for optimizer ''' # Optimizer for training, using gradient clipping to control exploding gradients tvars = tf.trainable_variables() grads, _ = tf.clip_by_global_norm(tf.gradients(loss, tvars), grad_clip) train_op = tf.train.AdamOptimizer(learning_rate) optimizer = train_op.apply_gradients(zip(grads, tvars)) return optimizer """ Explanation: Optimizer Here we build the optimizer. Normal RNNs have have issues gradients exploding and disappearing. LSTMs fix the disappearance problem, but the gradients can still grow without bound. To fix this, we can clip the gradients above some threshold. That is, if a gradient is larger than that threshold, we set it to the threshold. This will ensure the gradients never grow overly large. Then we use an AdamOptimizer for the learning step. End of explanation """ class CharRNN: def __init__(self, num_classes, batch_size=64, num_steps=50, lstm_size=128, num_layers=2, learning_rate=0.001, grad_clip=5, sampling=False): # When we're using this network for sampling later, we'll be passing in # one character at a time, so providing an option for that if sampling == True: batch_size, num_steps = 1, 1 else: batch_size, num_steps = batch_size, num_steps tf.reset_default_graph() # Build the input placeholder tensors self.inputs, self.targets, self.keep_prob = build_inputs(batch_size, num_steps) # Build the LSTM cell cell, self.initial_state = build_lstm(lstm_size, num_layers, batch_size, self.keep_prob) ### Run the data through the RNN layers # First, one-hot encode the input tokens x_one_hot = tf.one_hot(self.inputs, num_classes) # Run each sequence step through the RNN with tf.nn.dynamic_rnn outputs, state = tf.nn.dynamic_rnn(cell, x_one_hot, initial_state=self.initial_state) self.final_state = state # Get softmax predictions and logits self.prediction, self.logits = build_output(outputs, lstm_size, num_classes) # Loss and optimizer (with gradient clipping) self.loss = build_loss(self.logits, self.targets, lstm_size, num_classes) self.optimizer = build_optimizer(self.loss, learning_rate, grad_clip) """ Explanation: Build the network Now we can put all the pieces together and build a class for the network. To actually run data through the LSTM cells, we will use tf.nn.dynamic_rnn. This function will pass the hidden and cell states across LSTM cells appropriately for us. It returns the outputs for each LSTM cell at each step for each sequence in the mini-batch. It also gives us the final LSTM state. We want to save this state as final_state so we can pass it to the first LSTM cell in the the next mini-batch run. For tf.nn.dynamic_rnn, we pass in the cell and initial state we get from build_lstm, as well as our input sequences. Also, we need to one-hot encode the inputs before going into the RNN. Exercise: Use the functions you've implemented previously and tf.nn.dynamic_rnn to build the network. 
End of explanation """ batch_size = 100 # Sequences per batch num_steps = 100 # Number of sequence steps per batch lstm_size = 512 # Size of hidden layers in LSTMs num_layers = 2 # Number of LSTM layers learning_rate = 0.001 # Learning rate keep_prob = 0.5 # Dropout keep probability """ Explanation: Hyperparameters Here are the hyperparameters for the network. batch_size - Number of sequences running through the network in one pass. num_steps - Number of characters in the sequence the network is trained on. Larger is better typically, the network will learn more long range dependencies. But it takes longer to train. 100 is typically a good number here. lstm_size - The number of units in the hidden layers. num_layers - Number of hidden LSTM layers to use learning_rate - Learning rate for training keep_prob - The dropout keep probability when training. If you're network is overfitting, try decreasing this. Here's some good advice from Andrej Karpathy on training the network. I'm going to copy it in here for your benefit, but also link to where it originally came from. Tips and Tricks Monitoring Validation Loss vs. Training Loss If you're somewhat new to Machine Learning or Neural Networks it can take a bit of expertise to get good models. The most important quantity to keep track of is the difference between your training loss (printed during training) and the validation loss (printed once in a while when the RNN is run on the validation data (by default every 1000 iterations)). In particular: If your training loss is much lower than validation loss then this means the network might be overfitting. Solutions to this are to decrease your network size, or to increase dropout. For example you could try dropout of 0.5 and so on. If your training/validation loss are about equal then your model is underfitting. Increase the size of your model (either number of layers or the raw number of neurons per layer) Approximate number of parameters The two most important parameters that control the model are lstm_size and num_layers. I would advise that you always use num_layers of either 2/3. The lstm_size can be adjusted based on how much data you have. The two important quantities to keep track of here are: The number of parameters in your model. This is printed when you start training. The size of your dataset. 1MB file is approximately 1 million characters. These two should be about the same order of magnitude. It's a little tricky to tell. Here are some examples: I have a 100MB dataset and I'm using the default parameter settings (which currently print 150K parameters). My data size is significantly larger (100 mil >> 0.15 mil), so I expect to heavily underfit. I am thinking I can comfortably afford to make lstm_size larger. I have a 10MB dataset and running a 10 million parameter model. I'm slightly nervous and I'm carefully monitoring my validation loss. If it's larger than my training loss then I may want to try to increase dropout a bit and see if that helps the validation loss. Best models strategy The winning strategy to obtaining very good models (if you have the compute time) is to always err on making the network larger (as large as you're willing to wait for it to compute) and then try different dropout values (between 0,1). Whatever model has the best validation performance (the loss, written in the checkpoint filename, low is good) is the one you should use in the end. 
It is very common in deep learning to run many different models with many different hyperparameter settings, and in the end take whatever checkpoint gave the best validation performance. By the way, the size of your training and validation splits are also parameters. Make sure you have a decent amount of data in your validation set or otherwise the validation performance will be noisy and not very informative. End of explanation """ epochs = 20 # Save every N iterations save_every_n = 200 model = CharRNN(len(vocab), batch_size=batch_size, num_steps=num_steps, lstm_size=lstm_size, num_layers=num_layers, learning_rate=learning_rate) saver = tf.train.Saver(max_to_keep=100) with tf.Session() as sess: sess.run(tf.global_variables_initializer()) # Use the line below to load a checkpoint and resume training #saver.restore(sess, 'checkpoints/______.ckpt') counter = 0 for e in range(epochs): # Train network new_state = sess.run(model.initial_state) loss = 0 for x, y in get_batches(encoded, batch_size, num_steps): counter += 1 start = time.time() feed = {model.inputs: x, model.targets: y, model.keep_prob: keep_prob, model.initial_state: new_state} batch_loss, new_state, _ = sess.run([model.loss, model.final_state, model.optimizer], feed_dict=feed) end = time.time() print('Epoch: {}/{}... '.format(e+1, epochs), 'Training Step: {}... '.format(counter), 'Training loss: {:.4f}... '.format(batch_loss), '{:.4f} sec/batch'.format((end-start))) if (counter % save_every_n == 0): saver.save(sess, "checkpoints/i{}_l{}.ckpt".format(counter, lstm_size)) saver.save(sess, "checkpoints/i{}_l{}.ckpt".format(counter, lstm_size)) """ Explanation: Tensor Borard Graph Time for training This is typical training code, passing inputs and targets into the network, then running the optimizer. Here we also get back the final LSTM state for the mini-batch. Then, we pass that state back into the network so the next batch can continue the state from the previous batch. And every so often (set by save_every_n) I save a checkpoint. Here I'm saving checkpoints with the format i{iteration number}_l{# hidden layer units}.ckpt Exercise: Set the hyperparameters above to train the network. Watch the training loss, it should be consistently dropping. Also, I highly advise running this on a GPU. 
End of explanation """ tf.train.get_checkpoint_state('checkpoints') """ Explanation: Saved checkpoints Read up on saving and loading checkpoints here: https://www.tensorflow.org/programmers_guide/variables End of explanation """ def pick_top_n(preds, vocab_size, top_n=5): p = np.squeeze(preds) p[np.argsort(p)[:-top_n]] = 0 p = p / np.sum(p) c = np.random.choice(vocab_size, 1, p=p)[0] return c def sample(checkpoint, n_samples, lstm_size, vocab_size, prime="The "): samples = [c for c in prime] model = CharRNN(len(vocab), lstm_size=lstm_size, sampling=True) saver = tf.train.Saver() with tf.Session() as sess: saver.restore(sess, checkpoint) new_state = sess.run(model.initial_state) for c in prime: x = np.zeros((1, 1)) x[0,0] = vocab_to_int[c] feed = {model.inputs: x, model.keep_prob: 1., model.initial_state: new_state} preds, new_state = sess.run([model.prediction, model.final_state], feed_dict=feed) c = pick_top_n(preds, len(vocab)) samples.append(int_to_vocab[c]) for i in range(n_samples): x[0,0] = c feed = {model.inputs: x, model.keep_prob: 1., model.initial_state: new_state} preds, new_state = sess.run([model.prediction, model.final_state], feed_dict=feed) c = pick_top_n(preds, len(vocab)) samples.append(int_to_vocab[c]) return ''.join(samples) """ Explanation: Sampling Now that the network is trained, we'll can use it to generate new text. The idea is that we pass in a character, then the network will predict the next character. We can use the new one, to predict the next one. And we keep doing this to generate all new text. I also included some functionality to prime the network with some text by passing in a string and building up a state from that. The network gives us predictions for each character. To reduce noise and make things a little less random, I'm going to only choose a new character from the top N most likely characters. End of explanation """ tf.train.latest_checkpoint('checkpoints') checkpoint = tf.train.latest_checkpoint('checkpoints') samp = sample(checkpoint, 2000, lstm_size, len(vocab), prime="Far") print(samp) checkpoint = 'checkpoints/i200_l512.ckpt' samp = sample(checkpoint, 1000, lstm_size, len(vocab), prime="Far") print(samp) checkpoint = 'checkpoints/i600_l512.ckpt' samp = sample(checkpoint, 1000, lstm_size, len(vocab), prime="Far") print(samp) checkpoint = 'checkpoints/i1200_l512.ckpt' samp = sample(checkpoint, 1000, lstm_size, len(vocab), prime="Far") print(samp) """ Explanation: Here, pass in the path to a checkpoint and sample from the network. End of explanation """
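# A minimal sketch of one way the build_loss helper called by CharRNN above
# could be written, following the earlier description: one-hot encode the
# integer targets, reshape them to the (M*N) x C shape of the logits, apply
# softmax cross-entropy, and take the mean. The lstm_size argument is kept
# only to match the call site; the course's reference solution may differ.
import tensorflow as tf

def build_loss(logits, targets, lstm_size, num_classes):
    ''' Calculate the softmax cross-entropy loss from the logits and targets. '''
    # One-hot encode the targets and reshape them to match the flattened logits
    y_one_hot = tf.one_hot(targets, num_classes)
    y_reshaped = tf.reshape(y_one_hot, logits.get_shape())
    # Softmax cross-entropy between logits and one-hot targets, averaged over the batch
    loss = tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y_reshaped)
    loss = tf.reduce_mean(loss)
    return loss
"""
Explanation: A hedged sketch of the loss calculation described earlier: with logits of size $(MN) \times C$, the targets are one-hot encoded, reshaped to the same shape, and run through tf.nn.softmax_cross_entropy_with_logits before taking the mean.
End of explanation
"""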
mne-tools/mne-tools.github.io
0.14/_downloads/plot_covariance_whitening_dspm.ipynb
bsd-3-clause
# Author: Denis A. Engemann <denis.engemann@gmail.com> # # License: BSD (3-clause) import os import os.path as op import numpy as np from scipy.misc import imread import matplotlib.pyplot as plt import mne from mne import io from mne.datasets import spm_face from mne.minimum_norm import apply_inverse, make_inverse_operator from mne.cov import compute_covariance print(__doc__) """ Explanation: Demonstrate impact of whitening on source estimates This example demonstrates the relationship between the noise covariance estimate and the MNE / dSPM source amplitudes. It computes source estimates for the SPM faces data and compares proper regularization with insufficient regularization based on the methods described in [1]. The example demonstrates that improper regularization can lead to overestimation of source amplitudes. This example makes use of the previous, non-optimized code path that was used before implementing the suggestions presented in [1]. Please do not copy the patterns presented here for your own analysis, this is example is purely illustrative. <div class="alert alert-info"><h4>Note</h4><p>This example does quite a bit of processing, so even on a fast machine it can take a couple of minutes to complete.</p></div> References .. [1] Engemann D. and Gramfort A. (2015) Automated model selection in covariance estimation and spatial whitening of MEG and EEG signals, vol. 108, 328-342, NeuroImage. End of explanation """ data_path = spm_face.data_path() subjects_dir = data_path + '/subjects' raw_fname = data_path + '/MEG/spm/SPM_CTF_MEG_example_faces%d_3D.ds' raw = io.read_raw_ctf(raw_fname % 1) # Take first run # To save time and memory for this demo, we'll just use the first # 2.5 minutes (all we need to get 30 total events) and heavily # resample 480->60 Hz (usually you wouldn't do either of these!) raw = raw.crop(0, 150.).load_data().resample(60, npad='auto') picks = mne.pick_types(raw.info, meg=True, exclude='bads') raw.filter(1, None, n_jobs=1) events = mne.find_events(raw, stim_channel='UPPT001') event_ids = {"faces": 1, "scrambled": 2} tmin, tmax = -0.2, 0.5 baseline = None # no baseline as high-pass is applied reject = dict(mag=3e-12) # Make source space trans = data_path + '/MEG/spm/SPM_CTF_MEG_example_faces1_3D_raw-trans.fif' src = mne.setup_source_space('spm', fname=None, spacing='oct6', subjects_dir=subjects_dir, add_dist=False) bem = data_path + '/subjects/spm/bem/spm-5120-5120-5120-bem-sol.fif' forward = mne.make_forward_solution(raw.info, trans, src, bem) forward = mne.convert_forward_solution(forward, surf_ori=True) del src # inverse parameters conditions = 'faces', 'scrambled' snr = 3.0 lambda2 = 1.0 / snr ** 2 method = 'dSPM' clim = dict(kind='value', lims=[0, 2.5, 5]) """ Explanation: Get data End of explanation """ samples_epochs = 5, 15, method = 'empirical', 'shrunk' colors = 'steelblue', 'red' evokeds = list() stcs = list() methods_ordered = list() for n_train in samples_epochs: # estimate covs based on a subset of samples # make sure we have the same number of conditions. 
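# Using the same number of events per condition keeps the faces/scrambled contrast
# balanced, so differences between runs mainly reflect how many baseline samples
# feed the covariance estimate.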
events_ = np.concatenate([events[events[:, 2] == id_][:n_train] for id_ in [event_ids[k] for k in conditions]]) epochs_train = mne.Epochs(raw, events_, event_ids, tmin, tmax, picks=picks, baseline=baseline, preload=True, reject=reject) epochs_train.equalize_event_counts(event_ids) assert len(epochs_train) == 2 * n_train noise_covs = compute_covariance( epochs_train, method=method, tmin=None, tmax=0, # baseline only return_estimators=True) # returns list # prepare contrast evokeds = [epochs_train[k].average() for k in conditions] del epochs_train, events_ # do contrast # We skip empirical rank estimation that we introduced in response to # the findings in reference [1] to use the naive code path that # triggered the behavior described in [1]. The expected true rank is # 274 for this dataset. Please do not do this with your data but # rely on the default rank estimator that helps regularizing the # covariance. stcs.append(list()) methods_ordered.append(list()) for cov in noise_covs: inverse_operator = make_inverse_operator(evokeds[0].info, forward, cov, loose=0.2, depth=0.8, rank=274) stc_a, stc_b = (apply_inverse(e, inverse_operator, lambda2, "dSPM", pick_ori=None) for e in evokeds) stc = stc_a - stc_b methods_ordered[-1].append(cov['method']) stcs[-1].append(stc) del inverse_operator, evokeds, cov, noise_covs, stc, stc_a, stc_b del raw, forward # save some memory """ Explanation: Estimate covariances End of explanation """ fig, (axes1, axes2) = plt.subplots(2, 3, figsize=(9.5, 6)) def brain_to_mpl(brain): """convert image to be usable with matplotlib""" tmp_path = op.abspath(op.join(op.curdir, 'my_tmp')) brain.save_imageset(tmp_path, views=['ven']) im = imread(tmp_path + '_ven.png') os.remove(tmp_path + '_ven.png') return im for ni, (n_train, axes) in enumerate(zip(samples_epochs, (axes1, axes2))): # compute stc based on worst and best ax_dynamics = axes[1] for stc, ax, method, kind, color in zip(stcs[ni], axes[::2], methods_ordered[ni], ['best', 'worst'], colors): brain = stc.plot(subjects_dir=subjects_dir, hemi='both', clim=clim, initial_time=0.175) im = brain_to_mpl(brain) brain.close() del brain ax.axis('off') ax.get_xaxis().set_visible(False) ax.get_yaxis().set_visible(False) ax.imshow(im) ax.set_title('{0} ({1} epochs)'.format(kind, n_train * 2)) # plot spatial mean stc_mean = stc.data.mean(0) ax_dynamics.plot(stc.times * 1e3, stc_mean, label='{0} ({1})'.format(method, kind), color=color) # plot spatial std stc_var = stc.data.std(0) ax_dynamics.fill_between(stc.times * 1e3, stc_mean - stc_var, stc_mean + stc_var, alpha=0.2, color=color) # signal dynamics worst and best ax_dynamics.set_title('{0} epochs'.format(n_train * 2)) ax_dynamics.set_xlabel('Time (ms)') ax_dynamics.set_ylabel('Source Activation (dSPM)') ax_dynamics.set_xlim(tmin * 1e3, tmax * 1e3) ax_dynamics.set_ylim(-3, 3) ax_dynamics.legend(loc='upper left', fontsize=10) fig.subplots_adjust(hspace=0.4, left=0.03, right=0.98, wspace=0.07) fig.canvas.draw() fig.show() """ Explanation: Show the resulting source estimates End of explanation """
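# A minimal sketch of how the covariance estimates could be sanity-checked by
# whitening. This assumes it is run before the clean-up `del` statements above,
# so that the evoked responses and the list of noise covariances are still in
# memory; mne.Evoked.plot_white displays the whitened data against the expected
# unit-variance level, which makes under- or over-regularization visible.
for noise_cov in noise_covs:
    print('covariance estimated with method: %s' % noise_cov['method'])
    evokeds[0].plot_white(noise_cov)
"""
Explanation: A hedged whitening check: if the regularization is appropriate, the whitened evoked data should look roughly like unit-variance noise outside the evoked interval, while an inflated baseline points to an under-regularized covariance, which is exactly what drives the overestimated dSPM amplitudes illustrated above. Placement before the `del` calls is an assumption of this sketch.
End of explanation
"""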
NlGG/MachineLearning
.ipynb_checkpoints/nn-checkpoint.ipynb
mit
def example1(x_1, x_2): z = x_1**0.5*x_2*0.5 return z fig = pl.figure() ax = Axes3D(fig) X = np.arange(0, 1, 0.1) Y = np.arange(0, 1, 0.1) X, Y = np.meshgrid(X, Y) Z = example1(X, Y) ax.plot_surface(X, Y, Z, rstride=1, cstride=1) pl.show() """ Explanation: コブ・ダクラス型生産関数と課題文で例に出された関数を用いる。 いずれも定義域は0≤x≤1である。 <P>コブ・ダグラス型生産関数は以下の通りである。</P> <P>z = x_1**0.5*x_2*0.5</P> End of explanation """ def example2(x_1, x_2): z = (1+np.sin(4*math.pi*x_1))*x_2*1/2 return z fig = pl.figure() ax = Axes3D(fig) X = np.arange(0, 1, 0.1) Y = np.arange(0, 1, 0.1) X, Y = np.meshgrid(X, Y) Z = example2(X, Y) ax.plot_surface(X, Y, Z, rstride=1, cstride=1) ax.set_zlim(-1, 1) pl.show() """ Explanation: <P>課題の例で使われた関数は以下の通りである。</P> <P>z = (1+np.sin(4*math.pi*x_1))*x_2*1/2</P> End of explanation """ nn = NN() """ Explanation: NNのクラスはすでにNN.pyからimportしてある。 End of explanation """ x_1 = Symbol('x_1') x_2 = Symbol('x_2') f = x_1**0.5*x_2*0.5 """ Explanation: 以下に使い方を説明する。 初めに、このコブ・ダグラス型生産関数を用いる。 End of explanation """ nn.set_input_layer(2) nn.set_hidden_layer(2) nn.set_output_layer(2) """ Explanation: 入力層、中間層、出力層を作る関数を実行する。引数には層の数を用いる。 End of explanation """ nn.setup() nn.initialize() """ Explanation: <p>nn.set_hidden_layer()は同時にシグモイド関数で変換する前の中間層も作る。</p> <p>set_output_layer()は同時にシグモイド関数で変換する前の出力層、さらに教師データを入れる配列も作る。</p> nn.setup()で入力層ー中間層、中間層ー出力層間の重みを入れる配列を作成する。 nn.initialize()で重みを初期化する。重みは-1/√d ≤ w ≤ 1/√d (dは入力層及び中間層の数)の範囲で一様分布から決定される。 End of explanation """ idata = [1, 2] nn.supervised_function(f, idata) """ Explanation: nn.supervised_function(f, idata)は教師データを作成する。引数は関数とサンプルデータをとる。 End of explanation """ nn.simulate(1, 0.1) """ Explanation: nn.simulate(N, eta)は引数に更新回数と学習率をとる。普通はN=1で行うべきかもしれないが、工夫として作成してみた。N回学習した後に出力層を返す。 End of explanation """ X = np.arange(0, 1, 0.2) Y = np.arange(0, 1, 0.2) print X, Y """ Explanation: nn.calculation()は学習せずに入力層から出力層の計算を行う。nn.simulate()内にも用いられている。 次に実際に学習を行う。サンプルデータは、 End of explanation """ X = np.arange(0, 1, 0.2) Y = np.arange(0, 1, 0.2) a = np.array([]) b = np.array([]) c = np.array([]) nn = NN() nn.set_network() for x in X: for y in Y: a = np.append(a, x) b = np.append(b, y) for i in range(100): l = np.random.choice([i for i in range(len(a))]) m = nn.main(1, f, [a[l], b[l]], 0.5) for x in X: for y in Y: idata = [x, y] c = np.append(c, nn.realize(f, idata)) a b c """ Explanation: の組み合わせである。 End of explanation """ fig = pl.figure() ax = Axes3D(fig) ax.scatter(a, b, c) pl.show() """ Explanation: 例えば(0, 0)を入力すると0.52328635を返している(つまりa[0]とb[0]を入力して、c[0]の値を返している)。 ここでは交差検定は用いていない。 End of explanation """ X = np.arange(0, 1, 0.2) Y = np.arange(0, 1, 0.2) a = np.array([]) b = np.array([]) c = np.array([]) nn = NN() nn.set_network() for x in X: for y in Y: a = np.append(a, x) b = np.append(b, y) for i in range(10000): l = np.random.choice([i for i in range(len(a))]) m = nn.main(1, f, [a[l], b[l]], 0.5) for x in X: for y in Y: idata = [x, y] c = np.append(c, nn.realize(f, idata)) fig = pl.figure() ax = Axes3D(fig) ax.scatter(a, b, c) pl.show() """ Explanation: 確率的勾配降下法を100回繰り返したが見た感じから近づいている。回数を10000回に増やしてみる。 End of explanation """ x_1 = Symbol('x_1') x_2 = Symbol('x_2') f = (1+sin(4*math.pi*x_1))*x_2*1/2 X = np.arange(0, 1, 0.2) Y = np.arange(0, 1, 0.2) a = np.array([]) b = np.array([]) c = np.array([]) nn = NN() nn.set_network() for x in X: for y in Y: a = np.append(a, x) b = np.append(b, y) for i in range(1000): l = np.random.choice([i for i in range(len(a))]) m = nn.main(1, f, [a[l], b[l]], 0.5) for x in X: for y in Y: idata = [x, y] c = np.append(c, nn.realize(f, idata)) fig = pl.figure() ax = 
Axes3D(fig) ax.scatter(a, b, c) pl.show() """ Explanation: 見た感じ随分近づいているように見える。 同様のことを課題の例で使われた関数でも試してみる。 End of explanation """ X = np.arange(0, 1, 0.2) Y = np.arange(0, 1, 0.2) a = np.array([]) b = np.array([]) c = np.array([]) nn = NN() nn.set_network(h=5) for x in X: for y in Y: a = np.append(a, x) b = np.append(b, y) for i in range(1000): l = np.random.choice([i for i in range(len(a))]) m = nn.main(1, f, [a[l], b[l]], 0.5) for x in X: for y in Y: idata = [x, y] c = np.append(c, nn.realize(f, idata)) fig = pl.figure() ax = Axes3D(fig) ax.scatter(a, b, c) pl.show() """ Explanation: 上手く近似できないので中間層の数を変えてみる。5層にしてみる。 End of explanation """ X = np.arange(-1, 1, 0.1) Y = np.arange(-1, 1, 0.1) print X, Y """ Explanation: 目標と比べると大きく異なる。 サンプル数を、 End of explanation """ fig = pl.figure() ax = Axes3D(fig) f = (1+sin(4*math.pi*x_1))*x_2*1/2 X = np.arange(-1, 1, 0.1) Y = np.arange(-1, 1, 0.1) a = np.array([]) b = np.array([]) c = np.array([]) fig = plt.figure() nn = NN() nn.set_network() for x in X: for y in Y: a = np.append(a, x) b = np.append(b, y) c = np.append(c, nn.main2(50, f, [x, y], 0.8)) for i in range(50): l = np.random.choice([i for i in range(len(a))]) m = nn.main2(20, f, [a[l], b[l]], 0.5) c[l] = m a = np.array([]) b = np.array([]) c = np.array([]) for x in X: for y in Y: a = np.append(a, x) b = np.append(b, y) c = np.append(c, nn.realize(f, [x, y])) ax.scatter(a, b, c) ax.set_zlim(0, 1) pl.show() """ Explanation: で取り、学習の際にランダムに一個選ばれたサンプルを何十回も学習させてみた。 End of explanation """ def example2(x_1, x_2): z = (1+np.sin(4*math.pi*x_1))*x_2*1/2 return z fig = pl.figure() ax = Axes3D(fig) X = np.arange(-1, 1, 0.1) Y = np.arange(-1, 1, 0.1) X, Y = np.meshgrid(X, Y) Z = example2(X, Y) ax.plot_surface(X, Y, Z, rstride=1, cstride=1) ax.set_zlim(-1, 1) pl.show() """ Explanation: 本来ならば下のような形になるべきであるので上手くいっているとは言い難い。 End of explanation """ X = np.arange(0, 1, 0.2) Y = np.arange(0, 1, 0.2) a = np.array([]) b = np.array([]) c = np.array([]) for x in X: for y in Y: a = np.append(a, x) b = np.append(b, y) evl = np.array([]) for i in range(len(a)): nn = NN() nn.set_network() for j in range(1): l = np.random.choice([i for i in range(len(a))]) if l != i: m = nn.main(1, f, [a[l], b[l]], 0.5) idata = [a[i], b[i]] est = nn.realize(f, idata) evl = np.append(evl, math.fabs(est - nn.supervised_data)) np.average(evl) """ Explanation: 同じ方法でコブ・ダグラス型生産関数を学習させた様子をアニメーションにしてみた。この方法が何の意味を持つかは分からないが学習はまあまよくできていた。 mp4ファイルで添付した。 [考察]ここにはコードを書けなかったが、サンプル数が多くなった時は、ミニバッチ式の学習を試すべきだと思った。 最後に交差検定を行う。 初めに学習回数が極めて少ないNNである。 End of explanation """ X = np.arange(0, 1, 0.2) Y = np.arange(0, 1, 0.2) a = np.array([]) b = np.array([]) c = np.array([]) nn = NN() nn.set_network(h=7) for x in X: for y in Y: a = np.append(a, x) b = np.append(b, y) evl = np.array([]) for i in range(len(a)): for j in range(10000): nn = NN() nn.set_network() l = np.random.choice([i for i in range(len(a))]) if l != i: m = nn.main(1, f, [a[l], b[l]], 0.5) idata = [a[i], b[i]] evl = np.append(evl, math.fabs(nn.realize(f, idata) - nn.supervised_data)) evl np.average(evl) """ Explanation: 次に十分大きく(1000回に)してみる。 End of explanation """ X = np.arange(0, 1, 0.2) Y = np.arange(0, 1, 0.2) a = np.array([]) b = np.array([]) c = np.array([]) for x in X: for y in Y: a = np.append(a, x) b = np.append(b, y) evl = np.array([]) for i in range(len(a)): for j in range(100): nn = NN() nn.set_network() l = np.random.choice([i for i in range(len(a))]) if l != i: m = nn.main(1, f, [a[l], b[l]], 0.5) idata = [a[i], b[i]] est = nn.realize(f, idata) evl = np.append(evl, math.fabs(est - 
nn.supervised_data)) np.average(evl) X = np.arange(0, 1, 0.2) Y = np.arange(0, 1, 0.2) a = np.array([]) b = np.array([]) c = np.array([]) for x in X: for y in Y: a = np.append(a, x) b = np.append(b, y) evl = np.array([]) for i in range(len(a)): for j in range(100): nn = NN() nn.set_network(h=5) l = np.random.choice([i for i in range(len(a))]) if l != i: m = nn.main(1, f, [a[l], b[l]], 0.5) idata = [a[i], b[i]] est = nn.realize(f, idata) evl = np.append(evl, math.fabs(est - nn.supervised_data)) np.average(evl) """ Explanation: 誤差の平均であるので小さい方よい。 学習回数を大幅に増やした結果、精度が上がった。 次に回数を100回にして、中間層の数を2と5で精度を比較する。 End of explanation """
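# A compact NumPy sketch of the kind of network the NN class above wraps:
# one sigmoid hidden layer, weights drawn uniformly from [-1/sqrt(d), 1/sqrt(d)],
# and a single stochastic gradient step. Squared-error loss, the variable names
# and the 2-2-2 layer sizes are illustrative assumptions and do not come from NN.py.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

d_in, d_hidden, d_out = 2, 2, 2
rng = np.random.RandomState(0)
W1 = rng.uniform(-1/np.sqrt(d_in), 1/np.sqrt(d_in), size=(d_hidden, d_in))
W2 = rng.uniform(-1/np.sqrt(d_hidden), 1/np.sqrt(d_hidden), size=(d_out, d_hidden))

def sgd_step(x, t, eta=0.5):
    """One forward/backward pass for a single sample x with target t."""
    global W1, W2
    h = sigmoid(W1.dot(x))                          # hidden layer activations
    y = sigmoid(W2.dot(h))                          # output layer activations
    delta_out = (y - t) * y * (1 - y)               # output error term
    delta_hid = W2.T.dot(delta_out) * h * (1 - h)   # back-propagated hidden error
    W2 -= eta * np.outer(delta_out, h)              # gradient step on output weights
    W1 -= eta * np.outer(delta_hid, x)              # gradient step on input weights
    return y
"""
Explanation: A hedged, self-contained sketch making the update rule behind the experiments above explicit, under the assumptions stated in the comments (sigmoid activations, squared-error loss, the uniform initialisation in ±1/√d described earlier). The actual NN class lives in NN.py and may differ in detail.
End of explanation
"""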
lin99/NLPTM-2016
4.Docs/word2vec.ipynb
mit
## Loading the model with `gensim`
# import word2vec model from gensim
from gensim.models.word2vec import Word2Vec
# load Google News pre-trained network
model = Word2Vec.load_word2vec_format('GNvectors.bin', binary=True)
"""
Explanation: Playing with word2vec
Fabio A. González, Universidad Nacional de Colombia
Google News dataset
Internal Google data set with one billion words
692k vocabulary. Words with frequency below 5 were discarded.
300-dimensional representation was obtained by training a skip-gram model.
Model available at https://drive.google.com/file/d/0B7XkCwpI5KDYNlNUTTlSS21pQmM/edit?usp=sharing
End of explanation
"""
pp(model['table'])
"""
Explanation: Continuous representation of words
End of explanation
"""
plt.plot(model['car'][:50], label = 'car')
plt.plot(model['vehicle'][:50], label = 'vehicle')
plt.legend()
"""
Explanation: Semantically related words have similar representations
End of explanation
"""
pp(model.most_similar(positive=['car']))
"""
Explanation: Vector representation similarity = semantic similarity
End of explanation
"""
result = model.most_similar(negative=['man'], positive=['woman', 'king'])
pp(result)
"""
Explanation: Word vector space encodes linguistic regularities
<img width= 600 src="linguistic regularities.jpg">
Solving analogies
man is to woman as king is to ??
Relationships are encoded by word vector differences:
$$ f(\textrm{"woman"}) - f(\textrm{"man"}) = f(\textrm{"??"}) - f(\textrm{"king"})$$
We can add the relationship-encoding vector to the vector of king:
$$ f(\textrm{"king"}) + (f(\textrm{"woman"}) - f(\textrm{"man"})) = f(\textrm{"??"}) $$
End of explanation
"""
def plot_data(orig_data, labels):
    pca = PCA(n_components=2)
    data = pca.fit_transform(orig_data)
    plt.plot(data[:,0], data[:,1], '.')
    for i in range(len(data)):
        plt.annotate(labels[i], xy = data[i])
    for i in range(len(data)/2):
        plt.annotate("", xy=data[i], xytext=data[i+len(data)/2],
                     arrowprops=dict(arrowstyle="->", connectionstyle="arc3"))

def analogy(worda, wordb, wordc):
    result = model.most_similar(negative=[worda], positive=[wordb, wordc])
    return result[0][0]

adjectives = ['big', 'small', 'large', 'wide', 'strong']
comparatives = [analogy('good', 'better', adjective) for adjective in adjectives]
pp(zip(adjectives, comparatives))
"""
Explanation: Calculating comparatives from adjectives
End of explanation
"""
labels = comparatives + adjectives
data = [model[w] for w in labels]
plot_data(data, labels)
"""
Explanation: Comparative vector
End of explanation
"""
pp(model.most_similar(positive=['Colombia', 'currency']))
"""
Explanation: Compositionality
End of explanation
"""
model_es = Word2Vec.load_word2vec_format('eswikinews.bin', binary=True)
pp(model_es.most_similar(positive=['yo_soy_betty']))
"""
Explanation: Spanish WikiNews dataset
Public data set with 94 million words
213k vocabulary (converted to lowercase, accents removed)
300-dimensional representation was obtained by training a cbow model
End of explanation
"""
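# A small illustration of what most_similar is doing under the hood: cosine
# similarity between the raw vectors. Only numpy and the already-loaded model
# are assumed; the word pairs are arbitrary examples.
import numpy as np

def cosine(u, v):
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

print(cosine(model['car'], model['vehicle']))   # related pair: relatively high
print(cosine(model['car'], model['banana']))    # unrelated pair: lower
"""
Explanation: A hedged sketch making the "vector similarity = semantic similarity" point explicit: most_similar ranks words by cosine similarity of their embeddings, so computing it by hand on a related and an unrelated pair should show a clear gap. The exact values depend on the pre-trained model.
End of explanation
"""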
jshudzina/keras-tutorial
notebooks/02-Yellowstone-visitors-part1.ipynb
apache-2.0
# load and plot dataset
from pandas import read_csv
from pandas import datetime
from matplotlib import pyplot
# load dataset
def parser(x): return datetime.strptime(x, '%Y-%m-%d')
series = read_csv('../data/yellowstone-visitors.csv', header=0, parse_dates=[0], index_col=0, squeeze=True, date_parser=parser)
# summarize first few rows
print(series.head())
# line plot
series.plot()
pyplot.show()
"""
Explanation: LSTM Time Series Example
This tutorial is based on Time Series Forecasting with the Long Short-Term Memory Network in Python by Jason Brownlee.
Part 1 - Data Prep
Before we get into the example, let's look at some visitor data from Yellowstone National Park.
End of explanation
"""
prev_4_years = series[-60:-12]
last_year = series[-12:]  # the most recent 12 months
pred = prev_4_years.groupby(by=prev_4_years.index.month).mean()
pred.plot()
act = last_year.groupby(by=last_year.index.month).mean()
act.plot()
pyplot.show()
"""
Explanation: The park's recreational visits are highly seasonal, with the peak season in July. The park tracks monthly averages from the last four years on its web site. A simple approach to predicting the next year's visitors is to use these averages.
End of explanation
"""
from math import sqrt
from sklearn.metrics import mean_squared_error
rmse = sqrt(mean_squared_error(act, pred))
print('Test RMSE: %.3f' % rmse)
"""
Explanation: ## Monthly Average Accuracy
Before this example uses Keras to predict visitors, we'll measure the monthly average method's root mean squared error. While the monthly averages aren't completely accurate, this method is very simple and explainable.
End of explanation
"""
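# Sketch of a seasonal-persistence baseline for comparison: predict each month
# with the same month from one year earlier, scored with the same RMSE metric.
# This is a generic sanity check, not the tutorial's method, and it assumes the
# series ends on a complete year so the slices below line up month by month.
naive_pred = series[-24:-12].values   # the year before the held-out year
naive_act = series[-12:].values       # the held-out final year
naive_rmse = sqrt(mean_squared_error(naive_act, naive_pred))
print('Naive seasonal RMSE: %.3f' % naive_rmse)
"""
Explanation: A common sanity check for seasonal series is the persistence baseline above: reuse last year's value for each month. Comparing its RMSE with the four-year monthly average gives a second simple yardstick for the LSTM model built in the next part.
End of explanation
"""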
ES-DOC/esdoc-jupyterhub
notebooks/ncc/cmip6/models/noresm2-lm/landice.ipynb
gpl-3.0
# DO NOT EDIT ! from pyesdoc.ipython.model_topic import NotebookOutput # DO NOT EDIT ! DOC = NotebookOutput('cmip6', 'ncc', 'noresm2-lm', 'landice') """ Explanation: ES-DOC CMIP6 Model Properties - Landice MIP Era: CMIP6 Institute: NCC Source ID: NORESM2-LM Topic: Landice Sub-Topics: Glaciers, Ice. Properties: 30 (21 required) Model descriptions: Model description details Initialized From: -- Notebook Help: Goto notebook help page Notebook Initialised: 2018-02-15 16:54:24 Document Setup IMPORTANT: to be executed each time you run the notebook End of explanation """ # Set as follows: DOC.set_author("name", "email") # TODO - please enter value(s) """ Explanation: Document Authors Set document authors End of explanation """ # Set as follows: DOC.set_contributor("name", "email") # TODO - please enter value(s) """ Explanation: Document Contributors Specify document contributors End of explanation """ # Set publication status: # 0=do not publish, 1=publish. DOC.set_publication_status(0) """ Explanation: Document Publication Specify document publication status End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.landice.key_properties.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: Document Table of Contents 1. Key Properties 2. Key Properties --&gt; Software Properties 3. Grid 4. Glaciers 5. Ice 6. Ice --&gt; Mass Balance 7. Ice --&gt; Mass Balance --&gt; Basal 8. Ice --&gt; Mass Balance --&gt; Frontal 9. Ice --&gt; Dynamics 1. Key Properties Land ice key properties 1.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of land surface model. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.landice.key_properties.model_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 1.2. Model Name Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Name of land surface model code End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.landice.key_properties.ice_albedo') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "prescribed" # "function of ice age" # "function of ice density" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 1.3. Ice Albedo Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Specify how ice albedo is modelled End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.landice.key_properties.atmospheric_coupling_variables') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 1.4. Atmospheric Coupling Variables Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Which variables are passed between the atmosphere and ice (e.g. orography, ice mass) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.landice.key_properties.oceanic_coupling_variables') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 1.5. Oceanic Coupling Variables Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Which variables are passed between the ocean and ice End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.landice.key_properties.prognostic_variables') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "ice velocity" # "ice thickness" # "ice temperature" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 1.6. Prognostic Variables Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Which variables are prognostically calculated in the ice model End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.landice.key_properties.software_properties.repository') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 2. Key Properties --&gt; Software Properties Software properties of land ice code 2.1. Repository Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Location of code for this component. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.landice.key_properties.software_properties.code_version') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 2.2. Code Version Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Code version identifier. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.landice.key_properties.software_properties.code_languages') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 2.3. Code Languages Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Code language(s). End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.landice.grid.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 3. Grid Land ice grid 3.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of the grid in the land ice scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.landice.grid.adaptive_grid') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 3.2. Adaptive Grid Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is an adative grid being used? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.landice.grid.base_resolution') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 3.3. Base Resolution Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 The base resolution (in metres), before any adaption End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.landice.grid.resolution_limit') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 3.4. Resolution Limit Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If an adaptive grid is being used, what is the limit of the resolution (in metres) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.landice.grid.projection') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 3.5. 
Projection Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 The projection of the land ice grid (e.g. albers_equal_area) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.landice.glaciers.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 4. Glaciers Land ice glaciers 4.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of glaciers in the land ice scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.landice.glaciers.description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 4.2. Description Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the treatment of glaciers, if any End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.landice.glaciers.dynamic_areal_extent') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 4.3. Dynamic Areal Extent Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Does the model include a dynamic glacial extent? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.landice.ice.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 5. Ice Ice sheet and ice shelf 5.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of the ice sheet and ice shelf in the land ice scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.landice.ice.grounding_line_method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "grounding line prescribed" # "flux prescribed (Schoof)" # "fixed grid size" # "moving grid" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 5.2. Grounding Line Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Specify the technique used for modelling the grounding line in the ice sheet-ice shelf coupling End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.landice.ice.ice_sheet') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 5.3. Ice Sheet Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Are ice sheets simulated? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.landice.ice.ice_shelf') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 5.4. Ice Shelf Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Are ice shelves simulated? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.landice.ice.mass_balance.surface_mass_balance') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 6. Ice --&gt; Mass Balance Description of the surface mass balance treatment 6.1. Surface Mass Balance Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe how and where the surface mass balance (SMB) is calulated. 
Include the temporal coupling frequeny from the atmosphere, whether or not a seperate SMB model is used, and if so details of this model, such as its resolution End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.landice.ice.mass_balance.basal.bedrock') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 7. Ice --&gt; Mass Balance --&gt; Basal Description of basal melting 7.1. Bedrock Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the implementation of basal melting over bedrock End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.landice.ice.mass_balance.basal.ocean') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 7.2. Ocean Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the implementation of basal melting over the ocean End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.landice.ice.mass_balance.frontal.calving') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 8. Ice --&gt; Mass Balance --&gt; Frontal Description of claving/melting from the ice shelf front 8.1. Calving Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the implementation of calving from the front of the ice shelf End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.landice.ice.mass_balance.frontal.melting') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 8.2. Melting Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the implementation of melting from the front of the ice shelf End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.landice.ice.dynamics.description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 9. Ice --&gt; Dynamics ** 9.1. Description Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 General description if ice sheet and ice shelf dynamics End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.landice.ice.dynamics.approximation') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "SIA" # "SAA" # "full stokes" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 9.2. Approximation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Approximation type used in modelling ice dynamics End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.landice.ice.dynamics.adaptive_timestep') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 9.3. Adaptive Timestep Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is there an adaptive time scheme for the ice scheme? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.landice.ice.dynamics.timestep') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 9.4. Timestep Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Timestep (in seconds) of the ice scheme. 
If the timestep is adaptive, then state a representative timestep. End of explanation """
girving/tensorflow
tensorflow/tools/docker/notebooks/3_mnist_from_scratch.ipynb
apache-2.0
from __future__ import print_function from IPython.display import Image import base64 Image(data=base64.decodestring("iVBORw0KGgoAAAANSUhEUgAAAMYAAABFCAYAAAARv5krAAAYl0lEQVR4Ae3dV4wc1bYG4D3YYJucc8455yCSSIYrBAi4EjriAZHECyAk3rAID1gCIXGRgIvASIQr8UTmgDA5imByPpicTcYGY+yrbx+tOUWpu2e6u7qnZ7qXVFPVVbv2Xutfce+q7hlasmTJktSAXrnn8vR/3/xXmnnadg1aTfxL3/7rwfSPmT+kf/7vf098YRtK+FnaZaf/SS++OjNNathufF9caiT2v/xxqbTGki/SXyM1nODXv/r8+7Tb+r+lnxZNcEFHEG/e3LnpoINXSh/PWzxCy/F9eWjOnDlLrr/++jR16tQakgylqdOWTZOGFqX5C/5IjXNLjdt7/NTvv/+eTjnllLT//vunr776Kl100UVpueWWq8n10lOmpSmTU5o/f0Fa3DDH1ry9p0/++eefaZ999slYYPS0005LK664Yk2eJ02ekqZNnZx+XzA/LfprYgGxePHitOqqq6YZM2akyfPmzUvXXXddHceoic2EOckxDj300CzPggUL0g033NC3OKy00krDer3pppv6FgcBIjvGUkv9u5paZZVVhoHpl4Mvv/wyhfxDQ0NZ7H7EQbacPHny39Tejzj88ccfacqUKRmHEecYf0Nr8GGAQJ8gMHCMPlH0QMzmEBg4RnN4DVr3CQIDx+gTRQ/EbA6BgWM0h9egdZ8g8PeliD4RutfF/Ouvfz9OtZy8aNGiNH/+/GGWl1122XzseYuVNKtqsaI23Ghw0DYCA8doG8JqO+AUG2+8cVq4cGHaY4890vLLL5/WXXfdfI6jvPDCC3lJ8amnnkoezP3000/pl19+GThHtWpIPekYomTxFS7HnkqKjMsss0yGgFE4r62tSBFVJ02aNPyconi9V4/JwzHwT9ZNNtkkeZ6w5ZZbph133DH99ttv6ccff8zXX3nllcRRnHNfv2cNGMQWGRaOrWbUrjsGBRLAA6U4Lhoqw9h2223ztRBq6aWXzsbgvueffz4Lu9NOO2UnYTgrr7xy7tO9nOH111/Pbb744ov0ww8/jAvngAdFMvQDDjggG/0GG2yQX1GZNm1aziCCwzrrrJPl3muvvXKwePnll9M333wzHDCKWPbLMbuAkfISjnvvvXcW/emnn85lqCBqa4a65hiYR/Gk2RNGRlwm3n7ggQfmdrKD9sqJtdZaKxvCnDlz8n3Tp09PXmPYeuutc0SVNQjvnmuvvTa3efzxx9N33303PGZ5rF75DBvvqq233nrp22+/TWeddVbyikpgxCE4vQDhlQUBRfDw2esbs2fPTquvvnqviNN1PuIdJ4GErVx44YUZowsuuCB9+umn6eeff84BspmsWqljhPFDxjGGYx/lDkN33udajCoVlAjRzl4U8LjefRwnPjsXG8OJqKBd8NB1LTU5IHyCd7LJGOYXNoGjFqaGIKtrERDIDKtukfGMH/zRZa1A101+YBF44KfMYzO8VOYYjDWiukiGqc022yyXOUqdzTffPJ/z1ialeqNVxA9gi0wzlOJ5juJlR8JeddVV+ZrIKTq4ZvJp/8EHH+SU+txzz+W2SqmxVFZRplrH5DTRXmGFFdKuu+6azjjjjOzosl5g6D54CQCI4mGjhNQO5occckh2LvLTA6fqJOEnyhU6kNlkZmUuvrtNcFx77bUzhsZWXgoSsm6t4Dsa/tp2DErCmA04HAI4FLjaaqtlBhmnSKiNY4rDtHZFB6jFMMH0RVDH+nCPYxtDCFJnKkniRbDitWjTK3sykQUuMLPn3DZGX8SFnCG/fVyz5zCCBtIHTLshdzif8fERn8cKXxjCNOwCTu3Qf6yqhV4AQokiP489//zzM0DxnQYKwqAtIkko1kQzFFxvaNcJ6u3Pe+65J/cRRvDee+9lA2BInIyRff/997nNO++8k7t0vl2A6vHWynmyiPJ43WKLLbIijz/++LTddtvlTCdzwIWSg9yjxBJ0GN/DDz+c7zv77LOzbEceeWSekwVGgsOsWbNyNo0+qt7DfPvtt8/dmtvIGnPnzk3PPPPMsJ6rHrNef/BBeJA90RprrJEDcNhctMkXR/mnbccwuCjNGTbaaKMc8TBZprITxOdgOvbuKxqGz6LSJ598kseJ9Gi1CYmSv/76a3YyJZWMZJ6Ceskp8EMusihFEAyUmVaa8G2rxTNHIrd733///eH7YeaLNe5xrEzlWNF/HqQDf0Tm+GIbvYdD43MsKAIo/JDgE0G5aFfN8NaWYxiUshikqGYTTUSt0TCkjXsYNqJQQso+rgGa0vX58ccf56hQTtk+48F92rmvlnE1A0on2uKP0Yrw+Nxzzz0zn+ZhjKwRXq6vueaa2TmUiRQfS7SyNeMks9IV9vrvJOl/q622yo4Mfw5Pvm6TMclLdit6shh+YAMnq1E29tEsteUYBgMSgxa5MOAzJZcVXQs4bUR8XxhCHIwzMALCBuCcx5q0tF3u133l8XrRMchFiRYNyMxBKM/5IjZlWVzjULKwACISytIWFsi56aab5mvOKyEikmdAO/iHY+BDCRUZuoPD1e1akECyLseA7d13352DhdKak8Cmlt3U7TSl9p58FwejYK8ncAwKpDTnGDcARbWiAUjHiNEHsITSPlagpEZChcfrZzwSOfBOiQwXLuR3PjAhtwAD08iAMCO/a+5xPTIm3ALjwERf0V+c69QeT7ZujVdLDhgKBrANXAMreMESRkU7rdVPrXNtZ4xIpSLH1VdfnR3j4IMPzkbw2Wefpa+//jovo5188slZsZjArAcvFP3YY4+lSy+9NEdTdTTy0I5xHHfccfm1CH2LtuORKEqmkwVlVU+sBY+IdJRmE0zeeOONnEXuu+++7AhnnnlmWn/99XMJ5brtzTffzHMJx/o555xzkgdb0U8rRtAKrnTYqtG1Ml6teyxInHDCCdlGYByBmG2Z97ChVvFo2zEwbHCRTbqP7EDxPjN2pUBEe86AXAcsg+f10TYMSTvnRM1ulQe1wG/nHEXZZEJZUIYQ5cgWMsEgMgqclFdkdh+MbFFyuddnWMLNfTYkcuuXHlBkpFYNI3dS+mMMfCHHsZWadfUjmQVn8iLywscG21apMscQwR555JEM3KuvvpoZ5LHOmzgjAvBwzFt2/Oijj3Lm4Ayin/MU/eGHH+b2N998c/5MGSaZ44nw7OEd5Rx77LE5+1EehYXxkpes5li2K6+8Mhv8Lrvsko381ltvzcEBfvHQKh5auk9GPvHEE3NJAx+/eKL/HXbYIQcbK3nwN067xAk4s5VHdbvsx0nxrYQeKxJMZAfBA7GlRx99NC9EtCN7JY4RoPBeAHIAyrB3jpHYwqu1d02d7HpZcfqINo5dL7eJMXtxTzk2sgWFM/gcsnCakI2cFOk+523O+Qw7WaeYHYpYRp9xn4BkbPdWSfgJXYYM+ne+2
xRj2sdx8EDu8rm4Ntp9pY4RSmb0CIPOAVNGoLA47yU4S2xen37ppZdy9CkLE/3lm8bJHzJbbiavt2Q9p7AkK7oyXAZOLk7gs9c4PJC0AOE8DDyrgJkaWgYQkSPYuAdpWySfteU8HhqKouYq+io6ZfGeZo7xpbT1+jt+jGULfprpq922ePHMBibwjWVq523KVrzBsIzTaMeu1DFi0HI0YyyYtAekY5MltbRyihFJiROBKIYTwMCTWJNubwdQFCXFapK9z96mtbjgs3thFKWnUgjBzNZIya5FOyUcPG36q4LwRgZ6Ix8HtBk3tirGGU0feAkslHfk5PzBh2cXSkvtWqWOOEaRGcoSHdXDMoYn1tK8yaON0ahbCWgFS/vxSnjn5F4ItLeiFAGAzCKc7MDA1OlIjc4pLFKE7FEyxb5ZPNTbtuiv2fvrtddfOFsYXcwj8d8qv/XGq3femLvvvnvOvrIYPPEjG+PDseDbDnXcMXiyiGiyyACOPvrovN95552zV3/++ef5zVveznlEo6CICvG5l/d4JSvHP+qoo7JjKDs4PkVSGPm9HSz9W5rlPEoCQYHjVFXyRGnBOcKA28VOP/qTBWX6YnS2IKB8qYL/enyGHPbKziOOOCLj6sGeslGW8L6Y4ANr2MY99fpsdL7jjmFwkSTSr6gDVCk+tmDQedcJ5LgdwaLPbu7xjJRRNlErSsiQhVHJlOEQoh182o1wRTnharwYs3itnWP9Rd/RD5mLW5yveh/YRhYMjItyBh/wjPat8tEVx6B00RKo5513XpIl7rzzzuwEourMmTOz95uIcyBfTSXYiy++mCOrSFS1klsFrNZ9eGPoJtmeyRx00EE5cpGbIi21XnbZZbkMee2117KMHIKMIVcotVb/vXoOz6I0+URoMlVFcBFE7L1+IjNYIo6v/fo+D3tC+FCR+FHuwNUCgfOtUlccI5hnJMoIBhN1sBICqMoNNaLP3pkiFGciIIBC4HaEbRWk0dyHb3Mp/EY0I6+NsytvyKxsKhpQr8ozGpm1IZ8IbV+PyllGuyh1YBXXOQEcy6R8M5eAHzuxxX3GRvbaCKJ4aRfXrjkG5jEbk00Prxi8SZTJKmc5/PDDc5v99tsvC+hBjWtqStmD0F4Ma1foMvDtfqZMUc3/lYjMSFFW3NS7JtyyoKzSiTocHoFJHMc+MlK7Mta7n9NbATJerbEYvQWIWCVitIyaXrV3nsG7H2Y2GVcbxyj6NX+waKEPmOvbfShwtjhQDDz5Ygt/uuoY+OPtnICDEMBTWsAQUu0NBBsDEgFEWOADAiDaVRERWsCq5i34IRN+TbTJgn8KwzOFuR4KDUXW7Kyik53Ep8w/+RkxWeO5S1EM5wVABguXMGp69dk1x87D0ObdL32GHI5tsDQGHtwbm/Hw4TpnKvNY5Ge0x113DEwT3tIsIdSnDIfxcxJAevCHfE9cXcmotHXfAw88kIFUdgFjLMn4HuZRuh9FExmjRCCnZxRqcPxz8ioUVk9eRhJkPAYHV8ZVFRkjjFSfAtw222yTy2OZ0iv15fHcQ4dKaMcwsBdEEL26RzaIh5+yK7LSBGPno8yOZX+vzRhfXzZ8cRrtyzzkzpr803XHwB8wTJYIRol+VY8zqMMBbP0f+cExE1qTdbU7x3jwwQdzVBYdesExKNiEWx2MfwoOAyCbJ9uRHZvUTcPmsENhGNE4HBKOHKNqZzQu3KNfX9H1nRABQZlbNkpt4SNo4DWIIesDj9qYnwki2giWqol3330348kZLPm7xvi1Pffcc7MzhA3gy/0oeIuxWtmPiWNgNCIFYwcCAa2FA1ikJZz1aeUVsBmge9TyoqGoIqKUFdEKCFXcU0/pHJizVMUnXBiBh6IicdTTzsEOnuZkDE/2rcJI4KMf/TF+0TucwDhkZ+DGL4/nGkPGV/AIC+2RvfP6ZPTI4gu5XNM/Um7RPzuIFyn1zW7wpQ9UHj+fbOHPmDlGCOGBGIeQQfwuq0jnISBQfOHft7JEHN94Q5xF6XLFFVfkyKIEGyuiGAo3r6BIx0imcM6k+6GHHspOEQbcDq+UTl4BwRu7PstUiPEJFsa9/PLL83nXg6d2xnUvoxS5L7744uGyh/wyRpRF9YwSHsHjE088kWWADQeRFThZkTgBstensZG5h4m56oEdcAp9CwTOVUlj6hgECcGBpA6XDazeiLKhVABQAhKB3cNxbEAL4KoEppm+gjf3OMafDf+UW7zeTL/ltqIiAxBMOIIxnLOHgbFsMGQ4InhE0nJfrXw2hnIRD3SFBKmYWDfqE49woFvOzZno3NxM0HDciMjBDsjEBgLTsJHYN+qjmWtj7hjBLKFFQgL7qRz14jHHHJPBcC2M3wRPVDT5ohzZRv0Z16O/sdozAKmdopUH5kftTrzJpl+lk29CcgpLw3BgpMbwwqF/S80pGJ6xO0WM+8Ybbxw2TuOEoTYakwyovB/JKdzDMVQOHvCRzXju890fL11aGhcMqqIxdwwCRkYQDZAaE7lWBhyosQEmQM439MgffDHm0Si8EcuBC0ezcQSZVKYktzFEW+3sfQ4natRvu9eMTS9F7IvHo+m/2fb6LNuCc0WsW+mzHq9j6hgE9YCHp5tkez2EAVjlMOmyUlU2Lis8ygVR0rykyoltPZCaOY9fr32Qp50X6xi7pWCGbsHBvwLgGIcddljGxvcsjOU1GseyiKjJQWydpiqNsBlei85BfhNxeJunVCl31x0jBOMAjJ9jRC3OEERDS7QMI0qQohIYgLSq7FJuMZbi9WZA7kRbvFAWx5Dyy449mjEDG/dyDPW4VSiy2iNvBcCSUdxyyy35OYHrqJUx843j8I/qQpA074BVVdR1x+AIHCIiIGewsqIuds41tSSlOxeOFHuOQ/E+2zPEuFYVKM32U3RMvGy44YbZMTg2B2+GOIXXJcjpR9lkUy/QyZ7GUU8zAD9RCiuR0oQYVv1IMAk7qFL+rjkGg7GZQPLufffdN69QKJtkCAKKjNGu1p7gMgWDYEDRpkpAmu0rnMLehie/RavcI49Sr1ZW0w6V91ac/IsxmdHPB0U5pQ+4+TExDudNUhPufnaKIn7N6m2k9h11jKLRqP+UQJb2eHh4uYjK0LW1D0MpCq0NR4g24RTR/0hCdvM6/m14FtljeTL4D/liedFeO7LYcyh7eMGDY8X16IM8Vp9kWjj2GwWG5IZb2FKVOHTMMTCvDKBgD2Z22223bNynnnpqVrZXBFxjQDZUFJiwIqKHN8qHO+64IxvN/fffn9vG/VWC0UpfeC5uZMEbg/ctM/8SzYOxZ599Nhs4ebSx0ECpcDFvMCdRggkesoQ+zaHU0N4EgAEnue2227JTON+LgaEVDFu5h+w2Wdl33GFkEUIQqYIqdYwwbJGO8q2xOydqUiTFWpJVPzsuUwhlzzFETxlGdFSCqaMB4XwvUzgKWU3AyW4uwFns4QMbilUyxbq8p/4cw3UEB8FDGQUDx/acqB8zRS2dw5qthe3VatPKucocg6JiYu3lP2nfawvekKVITzgJQLH24QTBtPZeE2D89957b27jwZ1I
wIm8R2OMWHmJ+3pxTzaK8l+HyMrgTzrppMxqOIEsGoZvz0nsyWiliRMUl2G9aOk6POyLZVUvYtBpniL4wA1m9lVSW46BOQqKpTLK9FnUsxftvW4swssa4dkhCGFCMNfcp08lhM9KKc4h0obgsa8ShHb6Cv5DJnu8IwHB9TB852DkOlzIRV6kXbSVMfQj48BWdhE0TLr1Fe3zQR/+gRMK5yjuq4KjZccQ2SlYjexHmCnSkiLjtsesmlnpQ5naFo1A5GMAHoJxBI709ttv54ygntZWmWEcQMS9VQleRT9kNmfAG0P3HRPGbHnVudg4gEyJOAYiE0wikHAAcxHyxndO4KI/WHEK/Qzo7wjAXfaFNdurikaNtIERRTqmYIYdE2tGEs8hfJ8iFB/3xV67MCjG8NZbb6Unn3wyC+XfDxfnDxFp496qhK6qn5CDA5twK/fIRH5Gb0MMOhxCFgkKjOBoHqKEkmWvueaanG04iTHcP3CKQO0/e3ZhgceP2smqcKyKRuUYlEKhPDL+d5z1c4qVFTDnmBIZMwZ9DiKAzTmvCetPNFR7W7fXXt/KLddqTcyjr17bRybkEF5XiQhPHnMuDlF07MCB3I49l4EDxTrnfsFBJBxQbQSKeGoROqjdurWzIzoGJqRxS2KUf/rpp2flcRDRjRKVCdpFhCwz7rOVKE5z++235/7uuuuuXDq5P5yKEY0np8B3TKb9K1/vLTF0/7MiJtyRPYrq4fx+7R2e7vFDDzDyfx1goPwcUGMEYG/rFI3oGAYW0UUyimQIcRwGzbgpVsZAUTYE065xCtc5GUeSHTyg4kzKs/FKoSBljyhvTz6y2gseZAwlwgI+cNBGtpV9ZRj4BobjFY9O8g0bQcXWaRpxBE5hHuFnJ0XB6dOn56ge2QGDlK2dFSSG4b8kxVzEdSWGVxgYQLzrxJkIGgbTaUE73b9MZ/KNfIMOJpdcckndYZWmFAwv+wgydW/o8wsCK3xnz56dFzx8oxPGtk7QiI5h0FBaeGzRKYIpjDN2ig6lB9OiprmI60qNieIMIXvsQy7yotjH9eI+2hbPDY4bI8D+2JdnWTYY+iwDs78qaUTHEM0sI1pClAVMnqX9ImGQszB6DHoNOLzZNZlGRlEq9JNB9JOsRXvoxDGnsDTudwFUHTNmzMjDqEaU9xYvGgWiZnka0TEo16CeNyCM1SLtwmt5cNEoCOUa5xjQAIFWEGBP5rbKdTRr1qwcfGUMthXVTCt917pnRMdwE6ZiQm0JckADBMYCgWLwtXjTSeq/d5Y7ieag7wmDwMAxJowqB4JUicDAMapEc9DXhEFgcjxcM7vvR4on7bHS1q84WNkpUr/iEL+aOLRw4cIlQCmuIhUBmsjHlpQ9c7EmzjEsN1vd6DeCg8UVT+qRd7b6EQey8wMT+6El8RSu36xhIO8AgQYI9F94bADG4NIAgUDg/wHX+3lgThDIegAAAABJRU5ErkJggg==".encode('utf-8')), embed=True) """ Explanation: MNIST from scratch This notebook walks through an example of training a TensorFlow model to do digit classification using the MNIST data set. MNIST is a labeled set of images of handwritten digits. An example follows. End of explanation """ import os from six.moves.urllib.request import urlretrieve SOURCE_URL = 'https://storage.googleapis.com/cvdf-datasets/mnist/' #SOURCE_URL = 'http://yann.lecun.com/exdb/mnist/' # for those who have no access to google storage, use lecun's repo please WORK_DIRECTORY = "/tmp/mnist-data" def maybe_download(filename): """A helper to download the data files if not present.""" if not os.path.exists(WORK_DIRECTORY): os.mkdir(WORK_DIRECTORY) filepath = os.path.join(WORK_DIRECTORY, filename) if not os.path.exists(filepath): filepath, _ = urlretrieve(SOURCE_URL + filename, filepath) statinfo = os.stat(filepath) print('Successfully downloaded', filename, statinfo.st_size, 'bytes.') else: print('Already downloaded', filename) return filepath train_data_filename = maybe_download('train-images-idx3-ubyte.gz') train_labels_filename = maybe_download('train-labels-idx1-ubyte.gz') test_data_filename = maybe_download('t10k-images-idx3-ubyte.gz') test_labels_filename = maybe_download('t10k-labels-idx1-ubyte.gz') """ Explanation: We're going to be building a model that recognizes these digits as 5, 0, and 4. Imports and input data We'll proceed in steps, beginning with importing and inspecting the MNIST data. This doesn't have anything to do with TensorFlow in particular -- we're just downloading the data archive. End of explanation """ import gzip, binascii, struct, numpy import matplotlib.pyplot as plt with gzip.open(test_data_filename) as f: # Print the header fields. for field in ['magic number', 'image count', 'rows', 'columns']: # struct.unpack reads the binary data provided by f.read. # The format string '>i' decodes a big-endian integer, which # is the encoding of the data. print(field, struct.unpack('>i', f.read(4))[0]) # Read the first 28x28 set of pixel values. 
# Each pixel is one byte, [0, 255], a uint8. buf = f.read(28 * 28) image = numpy.frombuffer(buf, dtype=numpy.uint8) # Print the first few values of image. print('First 10 pixels:', image[:10]) """ Explanation: Working with the images Now we have the files, but the format requires a bit of pre-processing before we can work with it. The data is gzipped, requiring us to decompress it. And, each of the images are grayscale-encoded with values from [0, 255]; we'll normalize these to [-0.5, 0.5]. Let's try to unpack the data using the documented format: [offset] [type] [value] [description] 0000 32 bit integer 0x00000803(2051) magic number 0004 32 bit integer 60000 number of images 0008 32 bit integer 28 number of rows 0012 32 bit integer 28 number of columns 0016 unsigned byte ?? pixel 0017 unsigned byte ?? pixel ........ xxxx unsigned byte ?? pixel Pixels are organized row-wise. Pixel values are 0 to 255. 0 means background (white), 255 means foreground (black). We'll start by reading the first image from the test data as a sanity check. End of explanation """ %matplotlib inline # We'll show the image and its pixel value histogram side-by-side. _, (ax1, ax2) = plt.subplots(1, 2) # To interpret the values as a 28x28 image, we need to reshape # the numpy array, which is one dimensional. ax1.imshow(image.reshape(28, 28), cmap=plt.cm.Greys); ax2.hist(image, bins=20, range=[0,255]); """ Explanation: The first 10 pixels are all 0 values. Not very interesting, but also unsurprising. We'd expect most of the pixel values to be the background color, 0. We could print all 28 * 28 values, but what we really need to do to make sure we're reading our data properly is look at an image. End of explanation """ # Let's convert the uint8 image to 32 bit floats and rescale # the values to be centered around 0, between [-0.5, 0.5]. # # We again plot the image and histogram to check that we # haven't mangled the data. scaled = image.astype(numpy.float32) scaled = (scaled - (255 / 2.0)) / 255 _, (ax1, ax2) = plt.subplots(1, 2) ax1.imshow(scaled.reshape(28, 28), cmap=plt.cm.Greys); ax2.hist(scaled, bins=20, range=[-0.5, 0.5]); """ Explanation: The large number of 0 values correspond to the background of the image, another large mass of value 255 is black, and a mix of grayscale transition values in between. Both the image and histogram look sensible. But, it's good practice when training image models to normalize values to be centered around 0. We'll do that next. The normalization code is fairly short, and it may be tempting to assume we haven't made mistakes, but we'll double-check by looking at the rendered input and histogram again. Malformed inputs are a surprisingly common source of errors when developing new models. End of explanation """ with gzip.open(test_labels_filename) as f: # Print the header fields. for field in ['magic number', 'label count']: print(field, struct.unpack('>i', f.read(4))[0]) print('First label:', struct.unpack('B', f.read(1))[0]) """ Explanation: Great -- we've retained the correct image data while properly rescaling to the range [-0.5, 0.5]. Reading the labels Let's next unpack the test label data. The format here is similar: a magic number followed by a count followed by the labels as uint8 values. In more detail: [offset] [type] [value] [description] 0000 32 bit integer 0x00000801(2049) magic number (MSB first) 0004 32 bit integer 10000 number of items 0008 unsigned byte ?? label 0009 unsigned byte ?? label ........ xxxx unsigned byte ?? 
label As with the image data, let's read the first test set value to sanity check our input path. We'll expect a 7. End of explanation """ IMAGE_SIZE = 28 PIXEL_DEPTH = 255 def extract_data(filename, num_images): """Extract the images into a 4D tensor [image index, y, x, channels]. For MNIST data, the number of channels is always 1. Values are rescaled from [0, 255] down to [-0.5, 0.5]. """ print('Extracting', filename) with gzip.open(filename) as bytestream: # Skip the magic number and dimensions; we know these values. bytestream.read(16) buf = bytestream.read(IMAGE_SIZE * IMAGE_SIZE * num_images) data = numpy.frombuffer(buf, dtype=numpy.uint8).astype(numpy.float32) data = (data - (PIXEL_DEPTH / 2.0)) / PIXEL_DEPTH data = data.reshape(num_images, IMAGE_SIZE, IMAGE_SIZE, 1) return data train_data = extract_data(train_data_filename, 60000) test_data = extract_data(test_data_filename, 10000) """ Explanation: Indeed, the first label of the test set is 7. Forming the training, testing, and validation data sets Now that we understand how to read a single element, we can read a much larger set that we'll use for training, testing, and validation. Image data The code below is a generalization of our prototyping above that reads the entire test and training data set. End of explanation """ print('Training data shape', train_data.shape) _, (ax1, ax2) = plt.subplots(1, 2) ax1.imshow(train_data[0].reshape(28, 28), cmap=plt.cm.Greys); ax2.imshow(train_data[1].reshape(28, 28), cmap=plt.cm.Greys); """ Explanation: A crucial difference here is how we reshape the array of pixel values. Instead of one image that's 28x28, we now have a set of 60,000 images, each one being 28x28. We also include a number of channels, which for grayscale images as we have here is 1. Let's make sure we've got the reshaping parameters right by inspecting the dimensions and the first two images. (Again, mangled input is a very common source of errors.) End of explanation """ NUM_LABELS = 10 def extract_labels(filename, num_images): """Extract the labels into a 1-hot matrix [image index, label index].""" print('Extracting', filename) with gzip.open(filename) as bytestream: # Skip the magic number and count; we know these values. bytestream.read(8) buf = bytestream.read(1 * num_images) labels = numpy.frombuffer(buf, dtype=numpy.uint8) # Convert to dense 1-hot representation. return (numpy.arange(NUM_LABELS) == labels[:, None]).astype(numpy.float32) train_labels = extract_labels(train_labels_filename, 60000) test_labels = extract_labels(test_labels_filename, 10000) """ Explanation: Looks good. Now we know how to index our full set of training and test images. Label data Let's move on to loading the full set of labels. As is typical in classification problems, we'll convert our input labels into a 1-hot encoding over a length 10 vector corresponding to 10 digits. The vector [0, 1, 0, 0, 0, 0, 0, 0, 0, 0], for example, would correspond to the digit 1. End of explanation """ print('Training labels shape', train_labels.shape) print('First label vector', train_labels[0]) print('Second label vector', train_labels[1]) """ Explanation: As with our image data, we'll double-check that our 1-hot encoding of the first few values matches our expectations. 
End of explanation """ VALIDATION_SIZE = 5000 validation_data = train_data[:VALIDATION_SIZE, :, :, :] validation_labels = train_labels[:VALIDATION_SIZE] train_data = train_data[VALIDATION_SIZE:, :, :, :] train_labels = train_labels[VALIDATION_SIZE:] train_size = train_labels.shape[0] print('Validation shape', validation_data.shape) print('Train size', train_size) """ Explanation: The 1-hot encoding looks reasonable. Segmenting data into training, test, and validation The final step in preparing our data is to split it into three sets: training, test, and validation. This isn't the format of the original data set, so we'll take a small slice of the training data and treat that as our validation set. End of explanation """ import tensorflow as tf # We'll bundle groups of examples during training for efficiency. # This defines the size of the batch. BATCH_SIZE = 60 # We have only one channel in our grayscale images. NUM_CHANNELS = 1 # The random seed that defines initialization. SEED = 42 # This is where training samples and labels are fed to the graph. # These placeholder nodes will be fed a batch of training data at each # training step, which we'll write once we define the graph structure. train_data_node = tf.placeholder( tf.float32, shape=(BATCH_SIZE, IMAGE_SIZE, IMAGE_SIZE, NUM_CHANNELS)) train_labels_node = tf.placeholder(tf.float32, shape=(BATCH_SIZE, NUM_LABELS)) # For the validation and test data, we'll just hold the entire dataset in # one constant node. validation_data_node = tf.constant(validation_data) test_data_node = tf.constant(test_data) # The variables below hold all the trainable weights. For each, the # parameter defines how the variables will be initialized. conv1_weights = tf.Variable( tf.truncated_normal([5, 5, NUM_CHANNELS, 32], # 5x5 filter, depth 32. stddev=0.1, seed=SEED)) conv1_biases = tf.Variable(tf.zeros([32])) conv2_weights = tf.Variable( tf.truncated_normal([5, 5, 32, 64], stddev=0.1, seed=SEED)) conv2_biases = tf.Variable(tf.constant(0.1, shape=[64])) fc1_weights = tf.Variable( # fully connected, depth 512. tf.truncated_normal([IMAGE_SIZE // 4 * IMAGE_SIZE // 4 * 64, 512], stddev=0.1, seed=SEED)) fc1_biases = tf.Variable(tf.constant(0.1, shape=[512])) fc2_weights = tf.Variable( tf.truncated_normal([512, NUM_LABELS], stddev=0.1, seed=SEED)) fc2_biases = tf.Variable(tf.constant(0.1, shape=[NUM_LABELS])) print('Done') """ Explanation: Defining the model Now that we've prepared our data, we're ready to define our model. The comments describe the architecture, which fairly typical of models that process image data. The raw input passes through several convolution and max pooling layers with rectified linear activations before several fully connected layers and a softmax loss for predicting the output class. During training, we use dropout. We'll separate our model definition into three steps: Defining the variables that will hold the trainable weights. Defining the basic model graph structure described above. And, Stamping out several copies of the model graph for training, testing, and validation. We'll start with the variables. End of explanation """ def model(data, train=False): """The Model definition.""" # 2D convolution, with 'SAME' padding (i.e. the output feature map has # the same size as the input). Note that {strides} is a 4D array whose # shape matches the data layout: [image index, y, x, depth]. conv = tf.nn.conv2d(data, conv1_weights, strides=[1, 1, 1, 1], padding='SAME') # Bias and rectified linear non-linearity. 
relu = tf.nn.relu(tf.nn.bias_add(conv, conv1_biases)) # Max pooling. The kernel size spec ksize also follows the layout of # the data. Here we have a pooling window of 2, and a stride of 2. pool = tf.nn.max_pool(relu, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME') conv = tf.nn.conv2d(pool, conv2_weights, strides=[1, 1, 1, 1], padding='SAME') relu = tf.nn.relu(tf.nn.bias_add(conv, conv2_biases)) pool = tf.nn.max_pool(relu, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME') # Reshape the feature map cuboid into a 2D matrix to feed it to the # fully connected layers. pool_shape = pool.get_shape().as_list() reshape = tf.reshape( pool, [pool_shape[0], pool_shape[1] * pool_shape[2] * pool_shape[3]]) # Fully connected layer. Note that the '+' operation automatically # broadcasts the biases. hidden = tf.nn.relu(tf.matmul(reshape, fc1_weights) + fc1_biases) # Add a 50% dropout during training only. Dropout also scales # activations such that no rescaling is needed at evaluation time. if train: hidden = tf.nn.dropout(hidden, 0.5, seed=SEED) return tf.matmul(hidden, fc2_weights) + fc2_biases print('Done') """ Explanation: Now that we've defined the variables to be trained, we're ready to wire them together into a TensorFlow graph. We'll define a helper to do this, model, which will return copies of the graph suitable for training and testing. Note the train argument, which controls whether or not dropout is used in the hidden layer. (We want to use dropout only during training.) End of explanation """ # Training computation: logits + cross-entropy loss. logits = model(train_data_node, True) loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits_v2( labels=train_labels_node, logits=logits)) # L2 regularization for the fully connected parameters. regularizers = (tf.nn.l2_loss(fc1_weights) + tf.nn.l2_loss(fc1_biases) + tf.nn.l2_loss(fc2_weights) + tf.nn.l2_loss(fc2_biases)) # Add the regularization term to the loss. loss += 5e-4 * regularizers # Optimizer: set up a variable that's incremented once per batch and # controls the learning rate decay. batch = tf.Variable(0) # Decay once per epoch, using an exponential schedule starting at 0.01. learning_rate = tf.train.exponential_decay( 0.01, # Base learning rate. batch * BATCH_SIZE, # Current index into the dataset. train_size, # Decay step. 0.95, # Decay rate. staircase=True) # Use simple momentum for the optimization. optimizer = tf.train.MomentumOptimizer(learning_rate, 0.9).minimize(loss, global_step=batch) # Predictions for the minibatch, validation set and test set. train_prediction = tf.nn.softmax(logits) # We'll compute them only once in a while by calling their {eval()} method. validation_prediction = tf.nn.softmax(model(validation_data_node)) test_prediction = tf.nn.softmax(model(test_data_node)) print('Done') """ Explanation: Having defined the basic structure of the graph, we're ready to stamp out multiple copies for training, testing, and validation. Here, we'll do some customizations depending on which graph we're constructing. train_prediction holds the training graph, for which we use cross-entropy loss and weight regularization. We'll adjust the learning rate during training -- that's handled by the exponential_decay operation, which is itself an argument to the MomentumOptimizer that performs the actual training. The validation and prediction graphs are much simpler to generate -- we need only create copies of the model with the validation and test inputs and a softmax classifier as the output. 
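For intuition about the learning rate schedule: with `staircase=True`, the `exponential_decay` call in the graph amounts to cutting the rate by 5% once per pass through the training data, roughly (a back-of-the-envelope sketch, not part of the graph itself):

```python
# approximate learning rate after `step` minibatches of size BATCH_SIZE
effective_lr = 0.01 * 0.95 ** ((step * BATCH_SIZE) // train_size)
```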
End of explanation """ # Create a new interactive session that we'll use in # subsequent code cells. s = tf.InteractiveSession() # Use our newly created session as the default for # subsequent operations. s.as_default() # Initialize all the variables we defined above. tf.global_variables_initializer().run() """ Explanation: Training and visualizing results Now that we have the training, test, and validation graphs, we're ready to actually go through the training loop and periodically evaluate loss and error. All of these operations take place in the context of a session. In Python, we'd write something like: with tf.Session() as s: ...training / test / evaluation loop... But, here, we'll want to keep the session open so we can poke at values as we work out the details of training. The TensorFlow API includes a function for this, InteractiveSession. We'll start by creating a session and initializing the variables we defined above. End of explanation """ BATCH_SIZE = 60 # Grab the first BATCH_SIZE examples and labels. batch_data = train_data[:BATCH_SIZE, :, :, :] batch_labels = train_labels[:BATCH_SIZE] # This dictionary maps the batch data (as a numpy array) to the # node in the graph it should be fed to. feed_dict = {train_data_node: batch_data, train_labels_node: batch_labels} # Run the graph and fetch some of the nodes. _, l, lr, predictions = s.run( [optimizer, loss, learning_rate, train_prediction], feed_dict=feed_dict) print('Done') """ Explanation: Now we're ready to perform operations on the graph. Let's start with one round of training. We're going to organize our training steps into batches for efficiency; i.e., training using a small set of examples at each step rather than a single example. End of explanation """ print(predictions[0]) """ Explanation: Let's take a look at the predictions. How did we do? Recall that the output will be probabilities over the possible classes, so let's look at those probabilities. End of explanation """ # The highest probability in the first entry. print('First prediction', numpy.argmax(predictions[0])) # But, predictions is actually a list of BATCH_SIZE probability vectors. print(predictions.shape) # So, we'll take the highest probability for each vector. print('All predictions', numpy.argmax(predictions, 1)) """ Explanation: As expected without training, the predictions are all noise. Let's write a scoring function that picks the class with the maximum probability and compares with the example's label. We'll start by converting the probability vectors returned by the softmax into predictions we can match against the labels. End of explanation """ print('Batch labels', numpy.argmax(batch_labels, 1)) """ Explanation: Next, we can do the same thing for our labels -- using argmax to convert our 1-hot encoding into a digit class. End of explanation """ correct = numpy.sum(numpy.argmax(predictions, 1) == numpy.argmax(batch_labels, 1)) total = predictions.shape[0] print(float(correct) / float(total)) confusions = numpy.zeros([10, 10], numpy.float32) bundled = zip(numpy.argmax(predictions, 1), numpy.argmax(batch_labels, 1)) for predicted, actual in bundled: confusions[predicted, actual] += 1 plt.grid(False) plt.xticks(numpy.arange(NUM_LABELS)) plt.yticks(numpy.arange(NUM_LABELS)) plt.imshow(confusions, cmap=plt.cm.jet, interpolation='nearest'); """ Explanation: Now we can compare the predicted and label classes to compute the error rate and confusion matrix for this batch. 
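A note on reading the matrix we just filled in: it is indexed as `confusions[predicted, actual]`, so rows correspond to the predicted digit and columns to the true digit (a small sketch):

```python
# e.g. a large value at confusions[9, 4] means many images whose true label
# is 4 were classified as 9
print(confusions[9, 4])
```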
End of explanation """ def error_rate(predictions, labels): """Return the error rate and confusions.""" correct = numpy.sum(numpy.argmax(predictions, 1) == numpy.argmax(labels, 1)) total = predictions.shape[0] error = 100.0 - (100 * float(correct) / float(total)) confusions = numpy.zeros([10, 10], numpy.float32) bundled = zip(numpy.argmax(predictions, 1), numpy.argmax(labels, 1)) for predicted, actual in bundled: confusions[predicted, actual] += 1 return error, confusions print('Done') """ Explanation: Now let's wrap this up into our scoring function. End of explanation """ # Train over the first 1/4th of our training set. steps = train_size // BATCH_SIZE for step in range(steps): # Compute the offset of the current minibatch in the data. # Note that we could use better randomization across epochs. offset = (step * BATCH_SIZE) % (train_size - BATCH_SIZE) batch_data = train_data[offset:(offset + BATCH_SIZE), :, :, :] batch_labels = train_labels[offset:(offset + BATCH_SIZE)] # This dictionary maps the batch data (as a numpy array) to the # node in the graph it should be fed to. feed_dict = {train_data_node: batch_data, train_labels_node: batch_labels} # Run the graph and fetch some of the nodes. _, l, lr, predictions = s.run( [optimizer, loss, learning_rate, train_prediction], feed_dict=feed_dict) # Print out the loss periodically. if step % 100 == 0: error, _ = error_rate(predictions, batch_labels) print('Step %d of %d' % (step, steps)) print('Mini-batch loss: %.5f Error: %.5f Learning rate: %.5f' % (l, error, lr)) print('Validation error: %.1f%%' % error_rate( validation_prediction.eval(), validation_labels)[0]) """ Explanation: We'll need to train for some time to actually see useful predicted values. Let's define a loop that will go through our data. We'll print the loss and error periodically. Here, we want to iterate over the entire data set rather than just the first batch, so we'll need to slice the data to that end. (One pass through our training set will take some time on a CPU, so be patient if you are executing this notebook.) End of explanation """ test_error, confusions = error_rate(test_prediction.eval(), test_labels) print('Test error: %.1f%%' % test_error) plt.xlabel('Actual') plt.ylabel('Predicted') plt.grid(False) plt.xticks(numpy.arange(NUM_LABELS)) plt.yticks(numpy.arange(NUM_LABELS)) plt.imshow(confusions, cmap=plt.cm.jet, interpolation='nearest'); for i, cas in enumerate(confusions): for j, count in enumerate(cas): if count > 0: xoff = .07 * len(str(count)) plt.text(j-xoff, i+.2, int(count), fontsize=9, color='white') """ Explanation: The error seems to have gone down. Let's evaluate the results using the test set. To help identify rare mispredictions, we'll include the raw count of each (prediction, label) pair in the confusion matrix. End of explanation """ plt.xticks(numpy.arange(NUM_LABELS)) plt.hist(numpy.argmax(test_labels, 1)); """ Explanation: We can see here that we're mostly accurate, with some errors you might expect, e.g., '9' is often confused as '4'. Let's do another sanity check to make sure this matches roughly the distribution of our test set, e.g., it seems like we have fewer '5' values. End of explanation """
mikekestemont/lot2016
Chapter 9 - Text analysis.ipynb
mit
ls data/arabian_nights """ Explanation: Chapter 9: What we have covered so far (and a bit more) In this chapter, we will work our way through a concise review of the Python functionality we have covered so far. Throughout this chapter, we will work with a interesting, yet not too large dataset, namely the well-known Arabian nights. Alf Laylah Wa Laylah, the Stories of One Thousand and One Nights is a collection of folk tales, collected over many centuries by various authors, translators, and scholars across West, Central and South Asia and North Africa. It forms a huge narrative wheel with an overarching plot, created by the frame story of Shahrazad. The stories begin with the tale of king Shahryar and his brother, who, having both been deceived by their respective Sultanas, leave their kingdom, only to return when they have found someone who — in their view — was wronged even more. On their journey the two brothers encounter a huge jinn who carries a glass box containing a beautiful young woman. The two brothers hide as quickly as they can in a tree. The jinn lays his head on the girl’s lap and as soon as he is asleep, the girl demands the two kings to make love to her or else she will wake her ‘husband’. They reluctantly give in and the brothers soon discover that the girl has already betrayed the jinn ninety-eight times before. This exemplar of lust and treachery strengthens the Sultan’s opinion that all women are wicked and not to be trusted. When king Shahryar returns home, his wrath against women has grown to an unprecedented level. To temper his anger, each night the king sleeps with a virgin only to execute her the next morning. In order to make an end to this cruelty and save womanhood from a "virgin scarcity", Sharazad offers herself as the next king’s bride. On the first night, Sharazad begins to tell the king a story, but she does not end it. The king’s curiosity to know how the story ends, prevents him from executing Shahrazad. The next night Shahrazad finishes her story, and begins a new one. The king, eager to know the ending of this tale as well, postpones her execution once more. Using this strategy for One Thousand and One Nights in a labyrinth of stories-within-stories-within-stories, Shahrazad attempts to gradually move the king’s cynical stance against women towards a politics of love and justice (see Marina Warner’s Stranger Magic (2013) in case you're interested). The first European version of the Nights was translated into French by Antoine Galland. Many translations (in different languages) followed, such as the (heavily criticized) English translation by Sir Richard Francis Burton entitled The Book of the Thousand and a Night (1885). This version is freely available from the Gutenberg project (see here), and will be the one we will explore here. Files and directories In the notebooks we use, there is a convenient way to quickly inspect the contents of a folder using the ls command. Our Arabian nights are contained under the general data folder: End of explanation """ f = open('data/arabian_nights/848.txt', 'r') text = f.read() f.close() print(text[:500]) """ Explanation: As you can see, this folder holds a number of plain text files, ending in the .txt extension. Let us open a random file: End of explanation """ with open('data/arabian_nights/848.txt', 'r') as f: text = f.read() print(text[:500]) """ Explanation: Here, we use the open() function to create a file object f, which we can use to access the actual text content of the file. 
Make sure that you do not pass the 'w' parameter ("write") to open(), instead of 'r' ("read"), since this would overwrite and thus erase the existing file. After assigning the string returned by f.read() to the variable text, we print the 500 first characters of text to get an impression of what it contains, using simple string indexing ([:500]). Don't forget to close the file again after you have opened or strange things could happen to your file! One little trick which is commonly used to avoid having to explicitly open and close your file is a with block (mind the indentation): End of explanation """ import os """ Explanation: This code block does exactly the same thing as the previous one but saves you some typing. In this chapter we would like to work with all the files in the arabian_nights directory. This is where loops come in handy of course, since what we really would like to do, is iterate over the contents of the directory. Accessing these contents in Python is easy, but requires importing some extra functionality. In this case, we need to import the os module, which contains all functionality related to the 'operating system' of your machine, such as directory information: End of explanation """ filenames = os.listdir('data/arabian_nights') print(len(filenames)) print(filenames[:20]) """ Explanation: Using the dot-syntax (os.xxx), we can now access all functions that come with this module, such as listdir(), which returns a list of the items which are included under a given directory End of explanation """ # your code goes here """ Explanation: The function os.listdir() returns a list of strings, representing the filenames contained under a directory. Quiz In Burton's translation some of the 1001 nights are missing. How many? Can you come up with a clever way to find out which nights are missing? Hint: a counting loop and some string casting might be useful here! End of explanation """ os.listdir('data/belgian_nights') """ Explanation: With os.listdir(), you need to make sure that you pass the correct path to an existing directory: End of explanation """ print(os.path.isdir('data/arabian_nights')) print(os.path.isdir('data/belgian_nights')) """ Explanation: It might therefore be convenient to check whether a directory actually exists in a given location: End of explanation """ os.mkdir('belgian_nights') """ Explanation: The second directory, naturally, does not exist and isdir() evaluates to False in this case. Creating a new (and thus empty) directory is also easy using os: End of explanation """ ls """ Explanation: We can see that it lives in the present working directory now, by typing ls again: End of explanation """ print(os.path.isdir('belgian_nights')) """ Explanation: Or we use Python: End of explanation """ import shutil shutil.rmtree('belgian_nights') """ Explanation: Removing directories is also easy, but PLEASE watch out, sometimes it is too easy: if you remove a wrong directory in Python, it will be gone forever... Unlike other applications, Python does not keep a copy of it in your Trash and it does not have a Ctrl-Z button. Please watch out with what you do, since with great power comes great responsiblity! 
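A small habit that softens this risk (a sketch; note that `shutil` itself is only imported in the next code block) is to check that the path really is the directory you expect before deleting anything:

```python
target = 'belgian_nights'
if os.path.isdir(target):   # refuse to touch anything unexpected
    shutil.rmtree(target)
else:
    print('Nothing to remove:', target)
```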
Removing the entire directory which we just created can be done as follows: End of explanation """ print(os.path.isdir('belgian_nights')) """ Explanation: And lo behold: the directory has disappeared again: End of explanation """ os.rmdir('data/arabian_nights') """ Explanation: Here, we use the rmtree() command to remove the entire directory in a recursive way: even if the directory isn't empty and contains files and subfolders, we will remove all of them. The os module also comes with a rmdir() but this will not allow you to remove a directory which is not empty, as becomes clear in the OSError raised below: End of explanation """ os.mkdir('belgian_nights') f = open('belgian_nights/1001.txt', 'w') f.write('Content') f.close() print(os.path.exists('belgian_nights/1001.txt')) os.remove('belgian_nights/1001.txt') print(os.path.exists('belgian_nights/1001.txt')) """ Explanation: The folder contains things and therefore cannot be removed using this function. There are, of course, also ways to remove individual files or check whether they exist: End of explanation """ shutil.copyfile('data/arabian_nights/66.txt', 'new_66.txt') """ Explanation: Here, we created a directory, wrote a new file to it (1001.txt), and removed it again. Using os.path.exists() we monitored at which point the file existed. Finally, the shutil module also ships with a useful copyfile() function which allows you to copy files from one location to another, possibly with another name. To copy night 66 to the present directory, for instance, we could do: End of explanation """ ls """ Explanation: Indeed, we have added an exact copy of night 66 to our present working directory: End of explanation """ os.remove('new_66.txt') """ Explanation: We can safely remove it again: End of explanation """ os.path.abspath('data/arabian_nights/848.txt') """ Explanation: Paths The paths we have used so far are 'relative' paths, in the sense that they are relative to the place on our machine from which we execute our Python code. Absolute paths can also be retrieved and will differ on each computer, because they typically include user names etc: End of explanation """ filenames = os.listdir('data/arabian_nights') random_filename = filenames[9] with open(random_filename, 'r') as f: text = f.read() print(text[:500]) """ Explanation: While absolute paths are longer to type, they have the advantage that they can be used anywhere on your computer (i.e. irrespective of where you run your code from). Paths can be tricky. Suppose that we would like to open one of our filenames: End of explanation """ filenames = os.listdir('data/arabian_nights') random_filename = filenames[9] with open('data/arabian_nights/'+ random_filename, 'r') as f: text = f.read() print(text[:500]) """ Explanation: Python throws a FileNotFoundError, complaining that the file we wish to open does not exist. This situation stems from the fact that os.listdir() only returns the base name of a given file, and not an entire (absolute or relative) path to it. To properly access the file, we must therefore not forget to include the rest of the path again: End of explanation """ import glob filenames = glob.glob('data/arabian_nights/*') print(filenames[:10]) """ Explanation: Apart from os.listdir() there are a number of other common ways to obtain directory listings in Python. 
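One such alternative (a quick sketch, standard library only) is the pathlib module, whose Path objects keep the directory part attached to each result:

```python
from pathlib import Path
filenames = sorted(Path('data/arabian_nights').glob('*.txt'))
print(filenames[:10])
```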
Using the glob module for instance, we can easily access the full relative path leading to our Arabian Nights: End of explanation """ filenames = glob.glob('data/arabian_nights/*.txt') print(filenames[:10]) """ Explanation: The asterisk (*) in the argument passed to glob.glob() is worth noting here. Just like with regular expressions, this asterisk is a sort of wildcard which will match any series of characters (i.e. the filenames under arabian_nights). When we exploit this wildcard syntax, glob.glob() offers another distinct advantage: we can use it to easily filter out filenames which we are not interested in: End of explanation """ filenames = [] for fn in os.listdir('data/arabian_nights'): if fn.endswith('.txt'): filenames.append(fn) print(filenames[:10]) """ Explanation: Interestingly, the command in this code block will only load filenames that end in ".txt". This is interesting when we would like to ignore other sorts of junk files etc. that might be present in a directory. To replicate similar behaviour with os.listdir(), we would have needed a typical for-loop, such as: End of explanation """ filenames = [fn for fn in os.listdir('data/arabian_nights') if fn.endswith('.txt')] """ Explanation: Or for you stylish coders out there, you can show off with a list comprehension: End of explanation """ filenames = glob.glob('data/arabian_nights/*.txt') fn = filenames[10] # simple string splitting: print(fn.split('/')[-1]) # using os.sep: print(fn.split(os.sep)[-1]) # using os.path: print(os.path.basename(fn)) """ Explanation: However, when using glob.glob(), you might sometimes want to be able to extract a file's base name again. There are several solutions to this: End of explanation """ for root, directory, filename in os.walk("data"): print(filename) """ Explanation: Both os.sep and os.path.basename have the advantage that they know what separator is used for paths in the operating system, so you don't need to explicitly code it like in the first solution. Separators differ between Windows (backslash) and Mac/Linux (forward slash). Finally, sometimes, you might be interested in all the subdirectories of a particular directory (and all the subdirectories of these subdirectories etc.). Parsing such deep directory structures can be tricky, especially if you do not know how deep a directory tree might run. You could of course try stacking multiple loops using os.listdir(), but a more convenient way is os.walk(): End of explanation """ help(os.walk) """ Explanation: As you can see, os.walk() allows you to efficiently loop over the entire tree. As always, don't forget that help is right around the corner in your notebooks. Using help(), you can quickly access the documentation of modules and their functions etc. (but only after you have imported the modules first!). End of explanation """ # your quiz code """ Explanation: Quiz In the next part of this chapter, we will need a way to sort our stories from the first, to the very last night. For our own convenience we will use a little hack for this. In this quiz, we would like you to create a new folder under data directory, called '1001'. You should copy all the original files from arabian_nights to this new folder, but give the files a new name, prepending zeros to filename until all nights have four digits in their name. 1001.txt stays 1001.txt, for instance, but 66.txt becomes 0066.txt and 2.txt becomes 0002.txt etc. This will make sorting the nights easier below. 
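One string method that may come in handy here (a hint, not a full solution): zfill() left-pads a string with zeros up to a given width:

```python
print('66'.zfill(4))    # 0066
print('1001'.zfill(4))  # 1001
```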
For this quiz you could for instance use a for loop in combination with a while loop (but don't get stuck in endless loops...) End of explanation """ for fn in sorted(os.listdir('data/1001')): print(fn) """ Explanation: Parsing files Using the code from the previous quiz, it is now trivial to sort our nights sequentially on the basis of their actual name (i.e. a string variable): End of explanation """ for fn in sorted(os.listdir('data/arabian_nights/')): print(fn) """ Explanation: Using the old filenames, this was not possible directly, because of the way Python sorts strings of unequal lengths. Note that the number in the filenames are represented as strings, which are completely different from real numeric integers, and thus will be sorted differently: End of explanation """ for fn in sorted(os.listdir('data/arabian_nights/'), key=lambda nb: int(nb[:-4])): print(fn) """ Explanation: Note: There is a more elegant, but also slightly less trivial way to achieve the correct order in this case: End of explanation """ import re def preprocess(in_str): out_str = '' for c in in_str.lower(): if c.isalpha() or c.isspace(): out_str += c whitespace = re.compile(r'\s+') out_str = whitespace.sub(' ', out_str) return out_str """ Explanation: Should you be interested: here, we pass a key argument to sort, which specifies which operations should be applied to the filenames before actually sorting them. Here, we specify a so-called lambda function to key, which is less intuitive to read, but which allow you to specify a sort of 'mini-function' in a very condensed way: this lambda function chops off the last four characters from each filename and then converts (or 'casts') the results to a new data type using int(), namely an integer (a 'whole' number, as opposed to floating point numbers). Eventually, this leads to the same order. More functions So far, we have been using pre-existing, ready-made functions from Python's standard library, or the standard set of functionality which comes with the programming language. Importantly, there are two additional ways of using functions on your code, which we will cover below: (i) you can write your own functions, and (ii) you can use functions from other, external libraries, which have been developped by so-called 'third parties'. Below, we will for instance use plotting functions from matplotlib, which is a common visualization library for Python. At this point, we have an efficient way of looping over the Arabian Nights sequentially. What we still lack, are functions to load and clean our data. As you could see above, our files still contain a lot of punctuation marks etc., which are perhaps less interesting from the point of view of textual analysis. Let us write a simple function that takes a string as input, and returns a cleaner version of it, where all characters are lowercased, and only alphabetic characters are kept: End of explanation """ old_str = 'This; is -- a very DIRTY string!' new_str = preprocess(old_str) print(new_str) """ Explanation: This code reviews some of the materials from previous chapters, including the use of a regular expression, which converts all consecutive instances of whitespace (including line breaks, for instance) to a single space. 
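To see what that final substitution does on its own (a tiny sketch):

```python
import re
whitespace = re.compile(r'\s+')
print(whitespace.sub(' ', 'two\n\nwords\tapart'))   # prints: two words apart
```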
After executing the previous code block, we can now test our function: End of explanation """ with open('data/1001/0007.txt', 'r') as f: in_str = f.read() print(preprocess(in_str)) """ Explanation: We can now apply this function to the contents from a random night: End of explanation """ def tokenize(in_str): tokens = in_str.split() tokens = [t for t in tokens if t] return tokens """ Explanation: This text looks cleaner already! We can now start to extract individual tokens from the text and count them. This process is called tokenization. Here, we make the naive assumption that words are simply space-free alphabetic strings -- which is of course wrong in the case of English words like "can't". Note that for many languages there exist better tokenizers in Python (such as the ones in the Natural Language Toolkit (nltk). We suffice with a simpler approach for now: End of explanation """ with open('data/1001/0007.txt', 'r') as f: in_str = f.read() tokens = tokenize(preprocess(in_str)) print(tokens[:10]) """ Explanation: Using the list comprehension, we make sure that we do not accidentally return empty strings as a token, for instance, at the beginning of a text which starts with a newline. Remember that anything in Python with a length of 0, will evaluate to False, which explains the if t in the comprehension: empty strings will fail this condition. We can start stacking our functions now: End of explanation """ print(len(tokens)) """ Explanation: We can now start analyzing our nights. A good start would be to check the length of each night in words: End of explanation """ # your quiz code """ Explanation: Quiz Iterate over all the nights in 1001 in a sorted way. Open, preprocess and tokenize each text. Store in a list called word_counts how many words each story has. End of explanation """ import matplotlib.pyplot as plt %matplotlib inline """ Explanation: We now have a list of numbers, which we can plot over time. We will cover plotting more extensively in one of the next chapters. The things below are just a teaser. Start by importing matplotlib, which is imported as follows by convention: End of explanation """ plt.plot(word_counts) """ Explanation: The second line is needed to make sure that the plots will properly show up in our notebook. Let us start with a simple visualization: End of explanation """ plt.plot(range(0, len(word_counts)), word_counts) """ Explanation: As you can see, this simple command can be used to quickly obtain a visualization that shows interesting trends. On the y-axis, we plot absolute word counts for each of our nights. The x-axis is figured out automatically by matplotlib and adds an index on the horizontal x-axis. Implicitly, it interprets our command as follows: End of explanation """ filenames = sorted(os.listdir('data/1001')) idxs = [int(i[:-4]) for i in filenames] print(idxs[:20]) print(min(idxs)) print(max(idxs)) """ Explanation: When plt.plot receives two flat lists as arguments, it plots the first along the x-axis, and the second along the y-axis. If it only receives one list, it plots it along the y-axis and uses the range we now (redundantly) specified here for the x-axis. This is in fact a subtoptimal plot, since the index of the first data point we plot is zero, although the name of the first night is '1.txt'. Additionally, we know that there are some nights missing in our data. 
To set this straight, we could pass in our own x-coordinates as follows: End of explanation """ plt.plot(idxs, word_counts, color='r') plt.xlabel('Word length') plt.ylabel('# words (absolute counts)') plt.title('The Arabian Nights') plt.xlim(1, 1001) """ Explanation: We can now make our plot more truthful, and add some bells and whistles: End of explanation """ plt.plot(idxs, word_counts, color='r') plt.xlabel('Word length') plt.ylabel('# words (absolute counts)') plt.title(r'The Arabian Nights') plt.xlim(1, 1001) plt.axvline(500, color='g') """ Explanation: Quiz Using axvline() you can add vertical lines to a plot, for instance at position: End of explanation """ # quiz code goes here """ Explanation: Write code that plots the position of the missing nights using this function (and blue lines). End of explanation """ cnts = {} for word in tokens: if word in cnts: cnts[word] += 1 else: cnts[word] = 1 print(cnts) """ Explanation: Right now, we are visualizing texts, but we might also be interested in the vocabulary used in the story collection. Counting how often a word appears in a text is trivial for you right now with custom code, for instance: End of explanation """ from collections import Counter """ Explanation: One interesting item which you can use for counting in Python is the Counter object, which we can import as follows: End of explanation """ cnt = Counter(tokens) print(cnt) """ Explanation: This Counter makes it much easier to write code for counting. Below you can see how this counter automatically creates a dictionary-like structure: End of explanation """ print(cnt.most_common(25)) """ Explanation: If we would like to find which items are most frequent for instance, we could simply do: End of explanation """ cnt = Counter() cnt.update(tokens) cnt.update(tokens) print(cnt.most_common(25)) """ Explanation: We can also pass the Counter the tokens to count in multiple stages: End of explanation """ # quiz code """ Explanation: After passing our tokens twice to the counter, we see that the numbers double in size. Quiz Write code that makes a word frequency counter named vocab, which counts the cumulative frequencies of all words in the Arabian Nights. Which are the 15 most frequent words? Does that make sense? End of explanation """ freqs = [f for _, f in vocab.most_common(15)] words = [w for w, _ in vocab.most_common(15)] # note the use of underscores for 'throwaway' variables idxs = range(1, len(freqs)+1) """ Explanation: Let us now finally visualize the frequencies of the 15 most frequent items using a standard barplot in matplotlib. This can be achieved as follows. We first split out the names and frequencies, since .mostcommon(n) returns a list of tuples, and we create indices: End of explanation """ plt.barh(idxs, freqs, align='center') plt.yticks(idxs, words) plt.xlabel('Words') plt.ylabel('Cumulative absolute frequencies') """ Explanation: Next, we simply do: End of explanation """ from IPython.core.display import HTML def css_styling(): styles = open("styles/custom.css", "r").read() return HTML(styles) css_styling() """ Explanation: Et voilà! Closing Assignment In this larger assignment, you will have to perform some basic text processing on the larger set of XML-encoded files under data/TEI/french_plays. For this assignment, there are several subtasks: 1. Each of these files represent a play written by a particular author (see the &lt;author&gt; element): count how many texts were written by each author in the entire corpus. Make use of a Counter. 2. 
Each play has a cast list (&lt;castList&gt;), with a role-element for every character in it. In this element, the civil-attribute encodes the gender of the character (M/F, or another charatcer ). Create for each individual author a barplot using matplotlib, showing the percentage of male, female and 'other' characters as a percentage. Pick beautiful colors. 3. Difficult: The information contained in the castList is priceless, because it allows us to determine for each word in the play by whom it is uttered, since the &lt;sp&gt; tag encodes which character in the cast list is speaking at a particular time. Parse play 156.xml (L'Amour à la mode) and calculate which of the characters has the highest vocabulary richness: divide the number of unique words in the speaker's utterances by the total number of words (s)he utters. Only consider speakers that utter at least 1000 tokens in the play. Hint: If your run into encoding errors etc. when processing larger text collections, you can always use try/except constructions to catch these. Ignore the following, it's just here to make the page pretty: End of explanation """
vikasgorur/notebooks
deep-learning/3_regularization.ipynb
mit
# These are all the modules we'll be using later. Make sure you can import them # before proceeding further. from __future__ import print_function import numpy as np import tensorflow as tf from six.moves import cPickle as pickle """ Explanation: Deep Learning Assignment 3 Previously in 2_fullyconnected.ipynb, you trained a logistic regression and a neural network model. The goal of this assignment is to explore regularization techniques. End of explanation """ pickle_file = 'notMNIST.pickle' with open(pickle_file, 'rb') as f: save = pickle.load(f) train_dataset = save['train_dataset'] train_labels = save['train_labels'] valid_dataset = save['valid_dataset'] valid_labels = save['valid_labels'] test_dataset = save['test_dataset'] test_labels = save['test_labels'] del save # hint to help gc free up memory print('Training set', train_dataset.shape, train_labels.shape) print('Validation set', valid_dataset.shape, valid_labels.shape) print('Test set', test_dataset.shape, test_labels.shape) """ Explanation: First reload the data we generated in notmist.ipynb. End of explanation """ image_size = 28 num_labels = 10 def reformat(dataset, labels): dataset = dataset.reshape((-1, image_size * image_size)).astype(np.float32) # Map 2 to [0.0, 1.0, 0.0 ...], 3 to [0.0, 0.0, 1.0 ...] labels = (np.arange(num_labels) == labels[:,None]).astype(np.float32) return dataset, labels train_dataset, train_labels = reformat(train_dataset, train_labels) valid_dataset, valid_labels = reformat(valid_dataset, valid_labels) test_dataset, test_labels = reformat(test_dataset, test_labels) print('Training set', train_dataset.shape, train_labels.shape) print('Validation set', valid_dataset.shape, valid_labels.shape) print('Test set', test_dataset.shape, test_labels.shape) def accuracy(predictions, labels): return (100.0 * np.sum(np.argmax(predictions, 1) == np.argmax(labels, 1)) / predictions.shape[0]) """ Explanation: Reformat into a shape that's more adapted to the models we're going to train: - data as a flat matrix, - labels as float 1-hot encodings. End of explanation """ batch_size = 128 graph = tf.Graph() with graph.as_default(): # Input data. For the training data, we use a placeholder that will be fed # at run time with a training minibatch. tf_train_dataset = tf.placeholder(tf.float32, shape=(batch_size, image_size * image_size)) tf_train_labels = tf.placeholder(tf.float32, shape=(batch_size, num_labels)) tf_valid_dataset = tf.constant(valid_dataset) tf_test_dataset = tf.constant(test_dataset) # Variables. weights = tf.Variable( tf.truncated_normal([image_size * image_size, num_labels])) biases = tf.Variable(tf.zeros([num_labels])) # Training computation. logits = tf.matmul(tf_train_dataset, weights) + biases beta = 0.001 loss = tf.reduce_mean( tf.nn.softmax_cross_entropy_with_logits(logits, tf_train_labels)) + beta * tf.nn.l2_loss(weights) # Optimizer. optimizer = tf.train.GradientDescentOptimizer(0.5).minimize(loss) # Predictions for the training, validation, and test data. train_prediction = tf.nn.softmax(logits) valid_prediction = tf.nn.softmax( tf.matmul(tf_valid_dataset, weights) + biases) test_prediction = tf.nn.softmax(tf.matmul(tf_test_dataset, weights) + biases) """ Explanation: Problem 1 Introduce and tune L2 regularization for both logistic and neural network models. Remember that L2 amounts to adding a penalty on the norm of the weights to the loss. In TensorFlow, you can compute the L2 loss for a tensor t using nn.l2_loss(t). 
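It is worth remembering that `tf.nn.l2_loss(t)` returns `sum(t ** 2) / 2` (there is no square root), so the penalty added to the loss in the graph above is effectively beta times half the sum of squared weights. A quick sanity check (a sketch):

```python
w = tf.constant([3.0, 4.0])
penalty = tf.nn.l2_loss(w)   # evaluates to (9 + 16) / 2 = 12.5
```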
The right amount of regularization should improve your validation / test accuracy. L2 for logistic model End of explanation """ num_steps = 3001 with tf.Session(graph=graph) as session: tf.initialize_all_variables().run() print("Initialized") for step in range(num_steps): # Pick an offset within the training data, which has been randomized. # Note: we could use better randomization across epochs. offset = (step * batch_size) % (train_labels.shape[0] - batch_size) # Generate a minibatch. batch_data = train_dataset[offset:(offset + batch_size), :] batch_labels = train_labels[offset:(offset + batch_size), :] # Prepare a dictionary telling the session where to feed the minibatch. # The key of the dictionary is the placeholder node of the graph to be fed, # and the value is the numpy array to feed to it. feed_dict = {tf_train_dataset : batch_data, tf_train_labels : batch_labels} _, l, predictions = session.run( [optimizer, loss, train_prediction], feed_dict=feed_dict) if (step % 500 == 0): print("Minibatch loss at step %d: %f" % (step, l)) print("Minibatch accuracy: %.1f%%" % accuracy(predictions, batch_labels)) print("Validation accuracy: %.1f%%" % accuracy( valid_prediction.eval(), valid_labels)) print("Test accuracy: %.1f%%" % accuracy(test_prediction.eval(), test_labels)) """ Explanation: Actual training End of explanation """ batch_size = 128 num_hidden_nodes = 1024 g = tf.Graph() with g.as_default(): # Input data. For the training data, we use a placeholder that will be fed # at run time with a training minibatch. tf_train_dataset = tf.placeholder(tf.float32, shape=(batch_size, image_size * image_size)) tf_train_labels = tf.placeholder(tf.float32, shape=(batch_size, num_labels)) tf_valid_dataset = tf.constant(valid_dataset) tf_test_dataset = tf.constant(test_dataset) # Variables, input layer w1 = tf.Variable( tf.truncated_normal([image_size * image_size, num_hidden_nodes])) b1 = tf.Variable(tf.zeros([num_hidden_nodes])) # Variables, output layer w2 = tf.Variable(tf.truncated_normal([num_hidden_nodes, num_labels])) b2 = tf.Variable(tf.zeros([num_labels])) # Forward propagation # To get the prediction, apply softmax to the output of this def forward_prop(dataset, w1, b1, w2, b2): o1 = tf.matmul(dataset, w1) + b1 output_hidden = tf.nn.relu(o1) return tf.matmul(output_hidden, w2) + b2 train_output = forward_prop(tf_train_dataset, w1, b1, w2, b2) beta = 0.01 loss = tf.reduce_mean( tf.nn.softmax_cross_entropy_with_logits(train_output, tf_train_labels)) + beta * (tf.nn.l2_loss(w1) + tf.nn.l2_loss(w2)) # Optimizer. optimizer = tf.train.GradientDescentOptimizer(0.5).minimize(loss) # Predictions for the training, validation, and test data. train_prediction = tf.nn.softmax(train_output) valid_prediction = tf.nn.softmax(forward_prop(tf_valid_dataset, w1, b1, w2, b2)) test_prediction = tf.nn.softmax(forward_prop(tf_test_dataset, w1, b1, w2, b2)) """ Explanation: Results Without L2: Validation accuracy: 79.2% Test accuracy: 86.4% With L2, β=2: Validation accuracy: 30.4% Test accuracy: 32.5% β = 0.01: Validation accuracy: 81.3% Test accuracy: 87.4% L2 for neural network model Graph: End of explanation """ num_steps = 3001 with tf.Session(graph=g) as session: tf.initialize_all_variables().run() print("Initialized") for step in range(num_steps): # Pick an offset within the training data, which has been randomized. # Note: we could use better randomization across epochs. offset = (step * batch_size) % (small_labels.shape[0] - batch_size) # Generate a minibatch. 
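    # NB: this cell indexes small_dataset / small_labels, which are only
    # defined in the Problem 2 cell further down (the deliberately tiny
    # 4-batch subset used to provoke overfitting) -- run that cell first,
    # or switch these names back to train_dataset / train_labels to train
    # on the full set.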
batch_data = small_dataset[offset:(offset + batch_size), :] batch_labels = small_labels[offset:(offset + batch_size), :] # Prepare a dictionary telling the session where to feed the minibatch. # The key of the dictionary is the placeholder node of the graph to be fed, # and the value is the numpy array to feed to it. feed_dict = {tf_train_dataset : batch_data, tf_train_labels : batch_labels} _, l, predictions = session.run( [optimizer, loss, train_prediction], feed_dict=feed_dict) if (step % 500 == 0): print("Minibatch loss at step %d: %f" % (step, l)) print("Minibatch accuracy: %.1f%%" % accuracy(predictions, batch_labels)) print("Validation accuracy: %.1f%%" % accuracy( valid_prediction.eval(), valid_labels)) print("Test accuracy: %.1f%%" % accuracy(test_prediction.eval(), test_labels)) """ Explanation: Training the network: End of explanation """ # use only 4 batches small_dataset = train_dataset[0:128*4, :] small_labels = train_labels[0:128*4, :] """ Explanation: Problem 2 Let's demonstrate an extreme case of overfitting. Restrict your training data to just a few batches. What happens? End of explanation """ # With support for dropout batch_size = 128 num_hidden_nodes = 1024 g = tf.Graph() with g.as_default(): # Input data. For the training data, we use a placeholder that will be fed # at run time with a training minibatch. tf_train_dataset = tf.placeholder(tf.float32, shape=(batch_size, image_size * image_size)) tf_train_labels = tf.placeholder(tf.float32, shape=(batch_size, num_labels)) tf_valid_dataset = tf.constant(valid_dataset) tf_test_dataset = tf.constant(test_dataset) # Variables, input layer w1 = tf.Variable( tf.truncated_normal([image_size * image_size, num_hidden_nodes])) b1 = tf.Variable(tf.zeros([num_hidden_nodes])) # Variables, output layer w2 = tf.Variable(tf.truncated_normal([num_hidden_nodes, num_labels])) b2 = tf.Variable(tf.zeros([num_labels])) # Forward propagation # To get the prediction, apply softmax to the output of this def forward_prop(dataset, w1, b1, w2, b2, dropout=False): o1 = tf.matmul(dataset, w1) + b1 output_hidden = tf.nn.relu(o1) if dropout: output_hidden = tf.nn.dropout(output_hidden, 0.5) return tf.matmul(output_hidden, w2) + b2 train_output = forward_prop(tf_train_dataset, w1, b1, w2, b2) beta = 0.01 loss = tf.reduce_mean( tf.nn.softmax_cross_entropy_with_logits(train_output, tf_train_labels)) + beta * tf.nn.l2_loss(w1) # Optimizer. optimizer = tf.train.GradientDescentOptimizer(0.5).minimize(loss) # Predictions for the training, validation, and test data. train_prediction = tf.nn.softmax(train_output) valid_prediction = tf.nn.softmax(forward_prop(tf_valid_dataset, w1, b1, w2, b2)) test_prediction = tf.nn.softmax(forward_prop(tf_test_dataset, w1, b1, w2, b2)) """ Explanation: Answer: The minibatch accuracy is very good but both validation and test accuracy are much lower. Minibatch accuracy: 89.8% Validation accuracy: 51.8% Test accuracy: 58.5% Problem 3 Introduce Dropout on the hidden layer of the neural network. Remember: Dropout should only be introduced during training, not evaluation, otherwise your evaluation results would be stochastic as well. TensorFlow provides nn.dropout() for that, but you have to make sure it's only inserted during training. What happens to our extreme overfitting case? End of explanation """ # With support for dropout batch_size = 128 num_hidden_nodes_1 = 1024 num_hidden_nodes_2 = 300 g = tf.Graph() with g.as_default(): # Input data. 
For the training data, we use a placeholder that will be fed # at run time with a training minibatch. tf_train_dataset = tf.placeholder(tf.float32, shape=(batch_size, image_size * image_size)) tf_train_labels = tf.placeholder(tf.float32, shape=(batch_size, num_labels)) tf_valid_dataset = tf.constant(valid_dataset) tf_test_dataset = tf.constant(test_dataset) # transform input layer -> hidden layer 1 w1 = tf.Variable( tf.truncated_normal([image_size * image_size, num_hidden_nodes_1])) b1 = tf.Variable(tf.zeros([num_hidden_nodes_1])) # transform hidden layer 1 -> hidden layer 2 w2 = tf.Variable(tf.truncated_normal([num_hidden_nodes_1, num_hidden_nodes_2])) b2 = tf.Variable(tf.zeros([num_hidden_nodes_2])) # transform hidden layer 2 -> output layer w3 = tf.Variable(tf.truncated_normal([num_hidden_nodes_2, num_labels])) b3 = tf.Variable(tf.zeros([num_labels])) # Forward propagation # To get the prediction, apply softmax to the output of this def forward_prop(dataset, w1, b1, w2, b2, w3, b3, dropout=False): o1 = tf.nn.tanh(tf.matmul(dataset, w1) + b1) o2 = tf.nn.tanh(tf.matmul(o1, w2) + b2) if dropout: o1 = tf.nn.dropout(o1, 0.5) return tf.matmul(o2, w3) + b3 train_output = forward_prop(tf_train_dataset, w1, b1, w2, b2, w3, b3) beta = 0.005 loss = tf.reduce_mean( tf.nn.softmax_cross_entropy_with_logits(train_output, tf_train_labels)) + beta * (tf.nn.l2_loss(w1) + tf.nn.l2_loss(w2)) p = tf.Print(loss, [loss]) global_step = tf.Variable(0) learning_rate = tf.train.exponential_decay(0.1, global_step, 500, 0.96) # Optimizer. optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(p, global_step=global_step) # Predictions for the training, validation, and test data. train_prediction = tf.nn.softmax(train_output) valid_prediction = tf.nn.softmax(forward_prop(tf_valid_dataset, w1, b1, w2, b2, w3, b3)) test_prediction = tf.nn.softmax(forward_prop(tf_test_dataset, w1, b1, w2, b2, w3, b3)) """ Explanation: Accuracy goes up slightly with dropout (and no regularization): Minibatch accuracy: 93.8% Validation accuracy: 54.1% Test accuracy: 61.3% With both L2 and dropout: Minibatch accuracy: 96.9% Validation accuracy: 74.8% Test accuracy: 82.0% Problem 4 Try to get the best performance you can using a multi-layer model! The best reported test accuracy using a deep network is 97.1%. One avenue you can explore is to add multiple layers. Another one is to use learning rate decay: global_step = tf.Variable(0) # count the number of steps taken. learning_rate = tf.train.exponential_decay(0.5, global_step, ...) optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(loss, global_step=global_step) Final model First let's setup a multi-layer network. End of explanation """ num_steps = 9001 with tf.Session(graph=g) as session: tf.initialize_all_variables().run() print("Initialized") for step in range(num_steps): # Pick an offset within the training data, which has been randomized. # Note: we could use better randomization across epochs. offset = (step * batch_size) % (train_labels.shape[0] - batch_size) # Generate a minibatch. batch_data = train_dataset[offset:(offset + batch_size), :] batch_labels = train_labels[offset:(offset + batch_size), :] # Prepare a dictionary telling the session where to feed the minibatch. # The key of the dictionary is the placeholder node of the graph to be fed, # and the value is the numpy array to feed to it. 
feed_dict = {tf_train_dataset : batch_data, tf_train_labels : batch_labels} _, l, predictions = session.run( [optimizer, loss, train_prediction], feed_dict=feed_dict) if (step % 500 == 0): print("Minibatch loss at step %d: %f" % (step, l)) print("Minibatch accuracy: %.1f%%" % accuracy(predictions, batch_labels)) print("Validation accuracy: %.1f%%" % accuracy( valid_prediction.eval(), valid_labels)) print("Test accuracy: %.1f%%" % accuracy(test_prediction.eval(), test_labels)) """ Explanation: Train the final model End of explanation """
GoogleCloudPlatform/mlops-on-gcp
immersion/kubeflow_pipelines/pipelines/labs/lab-02_vertex.ipynb
apache-2.0
from google.cloud import aiplatform REGION = 'us-central1' PROJECT_ID = !(gcloud config get-value project) PROJECT_ID = PROJECT_ID[0] # Set `PATH` to include the directory containing KFP CLI PATH=%env PATH %env PATH=/home/jupyter/.local/bin:{PATH} """ Explanation: Continuous Training with Kubeflow Pipeline and Vertex AI Learning Objectives: 1. Learn how to use KF pre-built components 1. Learn how to use KF lightweight python components 1. Learn how to build a KF pipeline with these components 1. Learn how to compile, upload, and run a KF pipeline In this lab, you will build, deploy, and run a KFP pipeline that orchestrates the Vertex AI services to train, tune, and deploy a scikit-learn model. Setup End of explanation """ !cat trainer_image_vertex/Dockerfile """ Explanation: Understanding the pipeline design The workflow implemented by the pipeline is defined using a Python based Domain Specific Language (DSL). The pipeline's DSL is in the pipeline_vertex/pipeline.py file that we will generate below. The pipeline's DSL has been designed to avoid hardcoding any environment specific settings like file paths or connection strings. These settings are provided to the pipeline code through a set of environment variables. Build the trainer image The training step in the pipeline will require a custom training container. The custom training image is defined in trainer_image/Dockerfile. End of explanation """ IMAGE_NAME='trainer_image_covertype_vertex' TAG='latest' TRAINING_CONTAINER_IMAGE_URI=f'gcr.io/{PROJECT_ID}/{IMAGE_NAME}:{TAG}' TRAINING_CONTAINER_IMAGE_URI !gcloud builds submit --timeout 15m --tag $TRAINING_CONTAINER_IMAGE_URI trainer_image_vertex """ Explanation: Let's now build and push this trainer container to the container registry: End of explanation """ SERVING_CONTAINER_IMAGE_URI = 'us-docker.pkg.dev/vertex-ai/prediction/sklearn-cpu.0-20:latest' """ Explanation: To match the ml framework version we use at training time while serving the model, we will have to supply the following serving container to the pipeline: End of explanation """ %%writefile ./pipeline_vertex/pipeline.py # Copyright 2021 Google LLC # Licensed under the Apache License, Version 2.0 (the "License"); you may not use this # file except in compliance with the License. You may obtain a copy of the License at # https://www.apache.org/licenses/LICENSE-2.0 # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" # BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either # express or implied. See the License for the specific language governing # permissions and limitations under the License. 
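# This module is the file written to disk by the %%writefile magic above and
# compiled into covertype_kfp_pipeline.json later in the lab. It wraps the two
# lightweight Python functions imported below into KFP components, runs the
# hyperparameter tuning step, and only trains and deploys the model when the
# best trial's accuracy clears the deployment threshold.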
"""Kubeflow Covertype Pipeline.""" import os from kfp import dsl from kfp.components import create_component_from_func_v2 from tuning_lightweight_component import tune_hyperparameters from training_lightweight_component import train_and_deploy PIPELINE_ROOT = os.getenv('PIPELINE_ROOT') PROJECT_ID = os.getenv('PROJECT_ID') REGION = os.getenv('REGION') TRAINING_CONTAINER_IMAGE_URI = os.getenv('TRAINING_CONTAINER_IMAGE_URI') SERVING_CONTAINER_IMAGE_URI = os.getenv('SERVING_CONTAINER_IMAGE_URI') TRAINING_FILE_PATH = os.getenv('TRAINING_FILE_PATH') VALIDATION_FILE_PATH = os.getenv('VALIDATION_FILE_PATH') MAX_TRIAL_COUNT = os.getenv('MAX_TRIAL_COUNT', 5) PARALLEL_TRIAL_COUNT = os.getenv('PARALLEL_TRIAL_COUNT', 5) THRESHOLD = os.getenv('THRESHOLD', 0.6) tune_hyperparameters_component = # TODO train_and_deploy_component = # TODO @dsl.pipeline( name="covertype-kfp-pipeline", description="The pipeline training and deploying the Covertype classifier", pipeline_root=PIPELINE_ROOT, ) def covertype_train( training_container_uri: str = TRAINING_CONTAINER_IMAGE_URI, serving_container_uri: str = SERVING_CONTAINER_IMAGE_URI, training_file_path: str = TRAINING_FILE_PATH, validation_file_path: str = VALIDATION_FILE_PATH, accuracy_deployment_threshold: float = THRESHOLD, max_trial_count: int = MAX_TRIAL_COUNT, parallel_trial_count: int = PARALLEL_TRIAL_COUNT, pipeline_root: str = PIPELINE_ROOT, ): staging_bucket = f'{pipeline_root}/staging' tuning_op = # TODO accuracy = tuning_op.outputs['best_accuracy'] with dsl.Condition(accuracy >= accuracy_deployment_threshold, name="deploy_decision"): train_and_deploy_op = # TODO """ Explanation: Note: If you change the version of the training ml framework you'll have to supply a serving container with matchin version (see pre-built containers for prediction). 
Building and deploying the pipeline Let us write the pipeline to disk: Exercise Implement the train_and_deploy function in the pipeline_vertex/training_lightweight_component.py the tune_hyperparameters function in the pipeline_vertex/tuning_lightweight_component.py and complete the TODOs in the pipeline.py file below: End of explanation """ ARTIFACT_STORE = f'gs://{PROJECT_ID}-vertex' PIPELINE_ROOT = f'{ARTIFACT_STORE}/pipeline' DATA_ROOT = f'{ARTIFACT_STORE}/data' TRAINING_FILE_PATH = f'{DATA_ROOT}/training/dataset.csv' VALIDATION_FILE_PATH = f'{DATA_ROOT}/validation/dataset.csv' %env PIPELINE_ROOT={PIPELINE_ROOT} %env PROJECT_ID={PROJECT_ID} %env REGION={REGION} %env SERVING_CONTAINER_IMAGE_URI={SERVING_CONTAINER_IMAGE_URI} %env TRAINING_CONTAINER_IMAGE_URI={TRAINING_CONTAINER_IMAGE_URI} %env TRAINING_FILE_PATH={TRAINING_FILE_PATH} %env VALIDATION_FILE_PATH={VALIDATION_FILE_PATH} """ Explanation: Compile the pipeline Let stat by defining the environment variables that will be passed to the pipeline compiler: End of explanation """ !gsutil ls | grep ^{ARTIFACT_STORE}/$ || gsutil mb -l {REGION} {ARTIFACT_STORE} """ Explanation: Let us make sure that the ARTIFACT_STORE has been created, and let us create it if not: End of explanation """ PIPELINE_JSON = 'covertype_kfp_pipeline.json' """ Explanation: Note: In case the artifact store was not created and properly set before hand, you may need to run in CloudShell the following command to allow Vertex AI to access it: PROJECT_ID=$(gcloud config get-value project) PROJECT_NUMBER=$(gcloud projects list --filter="name=$PROJECT_ID" --format="value(PROJECT_NUMBER)") gcloud projects add-iam-policy-binding $PROJECT_ID \ --member="serviceAccount:$PROJECT_NUMBER-compute@developer.gserviceaccount.com" \ --role="roles/storage.objectAdmin" Use the CLI compiler to compile the pipeline We compile the pipeline from the Python file we generated into a JSON description using the following command: End of explanation """ # TODO """ Explanation: Exercise Compile the pipeline_vertex/pipeline.py with the dsl-compile-v2 command line: End of explanation """ !head {PIPELINE_JSON} """ Explanation: Note: You can also use the Python SDK to compile the pipeline from its python function ```python compiler.Compiler().compile( pipeline_func=covertype_train, package_path=PIPELINE_JSON, ) ``` The result is the pipeline file. End of explanation """ # TODO """ Explanation: Deploy the pipeline package Exercise Upload and run the pipeline to Vertex AI using aiplatform.PipelineJob: End of explanation """
phoebe-project/phoebe2-docs
2.0/tutorials/fti.ipynb
gpl-3.0
!pip install -I "phoebe>=2.0,<2.1" """ Explanation: Finite Time of Integration (fti) Setup Let's first make sure we have the latest version of PHOEBE 2.0 installed. (You can comment out this line if you don't use pip for your installation or don't want to update to the latest release). End of explanation """ %matplotlib inline import phoebe from phoebe import u # units import numpy as np import matplotlib.pyplot as plt logger = phoebe.logger() b = phoebe.default_binary() b.add_dataset('lc', times=np.linspace(0,1,101), dataset='lc01') """ Explanation: As always, let's do imports and initialize a logger and a new bundle. See Building a System for more details. End of explanation """ print(b['exptime']) """ Explanation: Relevant Parameters An 'exptime' parameter exists for each lc dataset and is set to 0.0 by default. This defines the exposure time that should be used when fti is enabled. As stated in its description, the time stamp of each datapoint is defined to be the time of mid-exposure. Note that the exptime applies to all times in the dataset - if times have different exposure-times, then they must be split into separate datasets manually. End of explanation """ b['exptime'] = 1, 'hr' """ Explanation: Let's set the exposure time to 1 hr to make the convolution obvious in our 1-day default binary. End of explanation """ print(b['fti_method']) b['fti_method'] = 'oversample' """ Explanation: An 'fti_method' parameter exists for each set of compute options and each lc dataset. By default this is set to 'none' - meaning that the exposure times are ignored during b.run_compute(). End of explanation """ print(b['fti_oversample']) """ Explanation: Once we set fti_method to be 'oversample', the corresponding 'fti_oversample' parameter(s) become visible. This option defines how many different time-points PHOEBE should sample over the width of the exposure time and then average to return a single flux point. By default this is set to 5. Note that increasing this number will result in better accuracy of the convolution caused by the exposure time - but increases the computation time essentially linearly. By setting to 5, our computation time will already be almost 5 times that when fti is disabled. End of explanation """ b.run_compute(fti_method='none', irrad_method='none', model='fti_off') b.run_compute(fti_method='oversample', irrad_method='none', model='fit_on') """ Explanation: Influence on Light Curves End of explanation """ axes, artists = b.plot(show=True) """ Explanation: The phase-smearing (convolution) caused by the exposure time is most evident in areas of the light curve with sharp derivatives, where the flux changes significantly over the course of the single exposure. Here we can see that the 1-hr exposure time significantly changes the observed shapes of ingress and egress as well as the observed depth of the eclipse. End of explanation """
nbateshaus/chem-search
inchi-split/notebooks/Layer Stats SQL.ipynb
bsd-3-clause
%sql postgresql://localhost/inchi_split \ select count(*) from chembl_export_nonstandard; """ Explanation: Our test set here includes the 1.3 million molecules from ChEMBL20 with MW < 600 that could be successfully processed by the RDKit. We use the Standard InChI that comes with ChEMBL and a non-standard InChI (options "/FixedH /SUU") that allows tautomers to be distinguished. Here's the sequence of psql commands used to generate that set: create temporary view molregno_lookup as select entity_id molregno,chembl_id from chembl_id_lookup where entity_type = 'COMPOUND'; select * into temporary table small_compounds from compound_structures join compound_properties using (molregno) where mw_freebase&lt;600; \f ' ' \a \o chembl_export.txt select chembl_id,standard_inchi,standard_inchi_key,mol_inchi(m,'/FixedH /SUU') nonstandard_inchi, mol_inchikey(m,'/FixedH /SUU'), canonical_smiles from small_compounds join rdk.mols using (molregno) join molregno_lookup using(molregno) ; End of explanation """ d = %sql \ select formula,count(chemblid) freq from chembl_export_nonstandard group by formula \ order by freq desc limit 10; d """ Explanation: Formula level grouping End of explanation """ d = %sql \ select formula,skeleton,hydrogens,count(chemblid) freq from chembl_export_nonstandard group by \ (formula,skeleton,hydrogens) \ order by freq desc limit 10; """ Explanation: grouping on the main layer End of explanation """ d[:5] tpl=d[0][:-1] print(tpl) rows = %sql \ select chemblid,smiles from chembl_export join chembl_export_nonstandard using (chemblid) where \ (formula,skeleton,hydrogens) = :tpl cids = [x for x,y in rows] ms = [Chem.MolFromSmiles(y) for x,y in rows] Draw.MolsToGridImage(ms,legends=cids) tpl=d[1][:-1] print(tpl) rows = %sql \ select chemblid,smiles from chembl_export join chembl_export_nonstandard using (chemblid) where \ (formula,skeleton,hydrogens) = :tpl cids = [x for x,y in rows] ms = [Chem.MolFromSmiles(y) for x,y in rows] Draw.MolsToGridImage(ms,legends=cids) tpl=d[3][:-1] print(tpl) rows = %sql \ select chemblid,smiles from chembl_export join chembl_export_nonstandard using (chemblid) where \ (formula,skeleton,hydrogens) = :tpl cids = [x for x,y in rows] ms = [Chem.MolFromSmiles(y) for x,y in rows] Draw.MolsToGridImage(ms,legends=cids) """ Explanation: Look at a few of the common main layer groups End of explanation """ d = %sql \ select formula,skeleton,hydrogens,charge,protonation,count(chemblid) freq from chembl_export_nonstandard group by \ (formula,skeleton,hydrogens,charge,protonation) \ order by freq desc limit 10; d[:5] """ Explanation: Charges End of explanation """ d = %sql \ select formula,skeleton,hydrogens,charge,protonation,stereo_bond,stereo_tet,stereo_m,stereo_s,count(chemblid) freq from chembl_export_nonstandard group by \ (formula,skeleton,hydrogens,charge,protonation,stereo_bond,stereo_tet,stereo_m,stereo_s) \ order by freq desc limit 10; d[:5] tpl=d[0][:-1] tpl = tuple(x if x is not None else '' for x in tpl) print(tpl) rows = %sql \ select chemblid,smiles from chembl_export join chembl_export_nonstandard using (chemblid) where \ (formula,skeleton,hydrogens,\ coalesce(charge,''),coalesce(protonation,''),coalesce(stereo_bond,''),\ coalesce(stereo_tet,''),coalesce(stereo_m,''),coalesce(stereo_s,'')) = :tpl cids = [x for x,y in rows] ms = [Chem.MolFromSmiles(y) for x,y in rows] Draw.MolsToGridImage(ms,legends=cids) tpl=d[3][:-1] tpl = tuple(x if x is not None else '' for x in tpl) print(tpl) rows = %sql \ select chemblid,smiles from chembl_export join 
chembl_export_nonstandard using (chemblid) where \ (formula,skeleton,hydrogens,\ coalesce(charge,''),coalesce(protonation,''),coalesce(stereo_bond,''),\ coalesce(stereo_tet,''),coalesce(stereo_m,''),coalesce(stereo_s,'')) = :tpl cids = [x for x,y in rows] ms = [Chem.MolFromSmiles(y) for x,y in rows] Draw.MolsToGridImage(ms,legends=cids) """ Explanation: We saw those already Stereo grouping End of explanation """ d = %sql \ select formula,skeleton,hydrogens,charge,protonation,isotope,count(chemblid) freq \ from chembl_export_nonstandard \ group by \ (formula,skeleton,hydrogens,charge,protonation,isotope) \ order by freq desc limit 10; d[:5] """ Explanation: Isotopes End of explanation """ d = %sql \ select formula,skeleton,hydrogens,charge,protonation,isotope,count(chemblid) freq \ from chembl_export_nonstandard where isotope is not null\ group by \ (formula,skeleton,hydrogens,charge,protonation,isotope) \ order by freq desc limit 10; d[:5] tpl=d[0][:-1] tpl = tuple(x if x is not None else '' for x in tpl) print(tpl) rows = %sql \ select chemblid,smiles from chembl_export join chembl_export_nonstandard using (chemblid) where \ (formula,skeleton,hydrogens,\ coalesce(charge,''),coalesce(protonation,''),isotope) = :tpl cids = [x for x,y in rows] ms = [Chem.MolFromSmiles(y) for x,y in rows] Draw.MolsToGridImage(ms,legends=cids) """ Explanation: Those we've seen before at the skeleton grouping level. Let's see some that actually include isotope info: End of explanation """ ttpl = tpl[:-1] print(ttpl) rows = %sql \ select chemblid,smiles from chembl_export join chembl_export_nonstandard using (chemblid) where \ (formula,skeleton,hydrogens,\ coalesce(charge,''),coalesce(protonation,'')) = :ttpl cids = [x for x,y in rows] ms = [Chem.MolFromSmiles(y) for x,y in rows] Draw.MolsToGridImage(ms,legends=cids) """ Explanation: This is a good one to spend a bit of time with. Let's look at the other members of that family when we ignore isotopes (and just look at the main): End of explanation """ rows = %sql \ select chemblid,smiles from chembl_export join chembl_export_nonstandard using (chemblid) where \ isotope_stereo_tet is not null and stereo_tet!=isotope_stereo_tet cids = [x for x,y in rows] ms = [Chem.MolFromSmiles(y) for x,y in rows] len(rows) Draw.MolsToGridImage(ms[:6],legends=cids) """ Explanation: Find cases where the isotope causes the stereochemistry tetrahedral stereochem End of explanation """ rows = %sql \ select chemblid,smiles from chembl_export join chembl_export_nonstandard using (chemblid) where \ isotope_stereo_tet is not null and position('?' in isotope_stereo_tet)<=0 and stereo_tet!=isotope_stereo_tet cids = [x for x,y in rows] ms = [Chem.MolFromSmiles(y) for x,y in rows] len(rows) Draw.MolsToGridImage(ms,legends=cids,subImgSize=(300,300)) smis[tItems[-2][1]] """ Explanation: Most of those have the labelled atom involved in an unknown stereocenter (the second is the exception), see if we can find more of those: End of explanation """
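For readers who want to see what the layer columns above correspond to, here is a rough Python sketch that splits an InChI string on its layer prefixes. The prefix-to-column mapping is an assumption made for illustration; the tables used in this notebook were populated separately, and this simple splitter ignores the fixed-H sublayers that appear in the non-standard ('/FixedH /SUU') InChIs.

```python
# Rough sketch: split an InChI into its layers by prefix. The column-name mapping
# is assumed for illustration and the /f fixed-H block is not handled.
def split_inchi_layers(inchi):
    body = inchi.split('=', 1)[1]            # drop the "InChI" prefix
    parts = body.split('/')                  # version, formula, then prefixed layers
    layers = {'version': parts[0], 'formula': parts[1]}
    prefix_names = {'c': 'skeleton', 'h': 'hydrogens', 'q': 'charge',
                    'p': 'protonation', 'b': 'stereo_bond', 't': 'stereo_tet',
                    'm': 'stereo_m', 's': 'stereo_s', 'i': 'isotope'}
    for layer in parts[2:]:
        name = prefix_names.get(layer[0])
        if name and name not in layers:      # keep the first occurrence only
            layers[name] = layer[1:]
    return layers

split_inchi_layers('InChI=1S/C2H6O/c1-2-3/h3H,2H2,1H3')
# {'version': '1S', 'formula': 'C2H6O', 'skeleton': '1-2-3', 'hydrogens': '3H,2H2,1H3'}
```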
xiaoxiaoyao/MyApp
PythonApplication1/deeplearning/examples/gan_pytorch.ipynb
unlicense
# Generative Adversarial Networks (GAN) example in PyTorch. # See related blog post at https://medium.com/@devnag/generative-adversarial-networks-gans-in-50-lines-of-code-pytorch-e81b79659e3f#.sch4xgsa9 import numpy as np import torch import torch.nn as nn import torch.nn.functional as F import torch.optim as optim from torch.autograd import Variable # Data params data_mean = 4 data_stddev = 1.25 # Model params g_input_size = 1 # Random noise dimension coming into generator, per output vector g_hidden_size = 50 # Generator complexity g_output_size = 1 # size of generated output vector d_input_size = 100 # Minibatch size - cardinality of distributions d_hidden_size = 50 # Discriminator complexity d_output_size = 1 # Single dimension for 'real' vs. 'fake' minibatch_size = d_input_size d_learning_rate = 2e-4 # 2e-4 g_learning_rate = 2e-4 optim_betas = (0.9, 0.999) num_epochs = 33300 print_interval = 333 d_steps = 1 # 'k' steps in the original GAN paper. Can put the discriminator on higher training freq than generator g_steps = 1 # ### Uncomment only one of these #(name, preprocess, d_input_func) = ("Raw data", lambda data: data, lambda x: x) (name, preprocess, d_input_func) = ("Data and variances", lambda data: decorate_with_diffs(data, 2.0), lambda x: x * 2) print("Using data [%s]" % (name)) """ Explanation: 用不到 50 行代码训练 GAN(基于 PyTorch 通常,我们会用下面这个例子来说明 GAN 的原理:将警察视为判别器,制造假币的犯罪分子视为生成器。一开始,犯罪分子会首先向警察展示一张假币。警察识别出该假币,并向犯罪分子反馈哪些地方是假的。接着,根据警察的反馈,犯罪分子改进工艺,制作一张更逼真的假币给警方检查。这时警方再反馈,犯罪分子再改进工艺。不断重复这一过程,直到警察识别不出真假,那么模型就训练成功了。 本文作者为前谷歌高级工程师、AI 初创公司 Wavefront 创始人兼 CTO Dev Nag,介绍了他是如何用不到五十行代码,在 PyTorch 平台上完成对 GAN 的训练。 什么是 GAN? 在进入技术层面之前,为照顾新入门的开发者,先来介绍下什么是 GAN。 2014 年,Ian Goodfellow 和他在蒙特利尔大学的同事发表了一篇震撼学界的论文。没错,我说的就是《Generative Adversarial Nets》,这标志着生成对抗网络(GAN)的诞生,而这是通过对计算图和博弈论的创新性结合。他们的研究展示,给定充分的建模能力,两个博弈模型能够通过简单的反向传播(backpropagation)来协同训练。 这两个模型的角色定位十分鲜明。给定真实数据集 R,G 是生成器(generator),它的任务是生成能以假乱真的假数据;而 D 是判别器 (discriminator),它从真实数据集或者 G 那里获取数据, 然后做出判别真假的标记。Ian Goodfellow 的比喻是,G 就像一个赝品作坊,想要让做出来的东西尽可能接近真品,蒙混过关。而 D 就是文物鉴定专家,要能区分出真品和高仿(但在这个例子中,造假者 G 看不到原始数据,而只有 D 的鉴定结果——前者是在盲干)。 理想情况下,D 和 G 都会随着不断训练,做得越来越好——直到 G 基本上成为了一个“赝品制造大师”,而 D 因无法正确区分两种数据分布输给 G。 实践中,Ian Goodfellow 展示的这项技术在本质上是:G 能够对原始数据集进行一种无监督学习,找到以更低维度的方式(lower-dimensional manner)来表示数据的某种方法。而无监督学习之所以重要,就好像 Yann LeCun 的那句话:“无监督学习是蛋糕的糕体”。这句话中的蛋糕,指的是无数学者、开发者苦苦追寻的“真正的 AI”。 开始之前,我们需要导入各种包,并且初始化变量 End of explanation """ # ##### DATA: Target data and generator input data def get_distribution_sampler(mu, sigma): return lambda n: torch.Tensor(np.random.normal(mu, sigma, (1, n))) # Gaussian """ Explanation: 用 PyTorch 训练 GAN Dev Nag:在表面上,GAN 这门如此强大、复杂的技术,看起来需要编写天量的代码来执行,但事实未必如此。我们使用 PyTorch,能够在 50 行代码以内创建出简单的 GAN 模型。这之中,其实只有五个部分需要考虑: R:原始、真实数据集 I:作为熵的一项来源,进入生成器的随机噪音 G:生成器,试图模仿原始数据 D:判别器,试图区别 G 的生成数据和 R 我们教 G 糊弄 D、教 D 当心 G 的“训练”环。 1.) R:在我们的例子里,从最简单的 R 着手——贝尔曲线(bell curve)。它把平均数(mean)和标准差(standard deviation)作为输入,然后输出能提供样本数据正确图形(从 Gaussian 用这些参数获得 )的函数。在我们的代码例子中,我们使用 4 的平均数和 1.25 的标准差。 End of explanation """ def get_generator_input_sampler(): return lambda m, n: torch.rand(m, n) # Uniform-dist data into generator, _NOT_ Gaussian """ Explanation: 2.) 
I:生成器的输入是随机的,为提高点难度,我们使用均匀分布(uniform distribution )而非标准分布。这意味着,我们的 Model G 不能简单地改变输入(放大/缩小、平移)来复制 R,而需要用非线性的方式来改造数据。 End of explanation """ # ##### MODELS: Generator model and discriminator model class Generator(nn.Module): def __init__(self, input_size, hidden_size, output_size): super(Generator, self).__init__() self.map1 = nn.Linear(input_size, hidden_size) self.map2 = nn.Linear(hidden_size, hidden_size) self.map3 = nn.Linear(hidden_size, output_size) def forward(self, x): x = F.elu(self.map1(x)) x = F.sigmoid(self.map2(x)) return self.map3(x) """ Explanation: 3.) G: 该生成器是个标准的前馈图(feedforward graph)——两层隐层,三个线性映射(linear maps)。我们使用了 ELU (exponential linear unit)。G 将从 I 获得平均分布的数据样本,然后找到某种方式来模仿 R 中标准分布的样本。 End of explanation """ class Discriminator(nn.Module): def __init__(self, input_size, hidden_size, output_size): super(Discriminator, self).__init__() self.map1 = nn.Linear(input_size, hidden_size) self.map2 = nn.Linear(hidden_size, hidden_size) self.map3 = nn.Linear(hidden_size, output_size) def forward(self, x): x = F.elu(self.map1(x)) x = F.elu(self.map2(x)) return F.sigmoid(self.map3(x)) # 还有一些其他的样板代码 def extract(v): return v.data.storage().tolist() def stats(d): return [np.mean(d), np.std(d)] def decorate_with_diffs(data, exponent): mean = torch.mean(data.data, 1, keepdim=True) mean_broadcast = torch.mul(torch.ones(data.size()), mean.tolist()[0][0]) diffs = torch.pow(data - Variable(mean_broadcast), exponent) return torch.cat([data, diffs], 1) d_sampler = get_distribution_sampler(data_mean, data_stddev) gi_sampler = get_generator_input_sampler() G = Generator(input_size=g_input_size, hidden_size=g_hidden_size, output_size=g_output_size) D = Discriminator(input_size=d_input_func(d_input_size), hidden_size=d_hidden_size, output_size=d_output_size) criterion = nn.BCELoss() # Binary cross entropy: http://pytorch.org/docs/nn.html#bceloss d_optimizer = optim.Adam(D.parameters(), lr=d_learning_rate, betas=optim_betas) g_optimizer = optim.Adam(G.parameters(), lr=g_learning_rate, betas=optim_betas) """ Explanation: 4.) D: 判别器的代码和 G 的生成器代码很接近。一个有两层隐层和三个线性映射的前馈图。它会从 R 或 G 那里获得样本,然后输出 0 或 1 的判别值,对应反例和正例。这几乎是神经网络的最弱版本了。 End of explanation """ for epoch in range(num_epochs): for d_index in range(d_steps): # 1. Train D on real+fake D.zero_grad() # 1A: Train D on real d_real_data = Variable(d_sampler(d_input_size)) d_real_decision = D(preprocess(d_real_data)) d_real_error = criterion(d_real_decision, Variable(torch.ones(1))) # ones = true d_real_error.backward() # compute/store gradients, but don't change params # 1B: Train D on fake d_gen_input = Variable(gi_sampler(minibatch_size, g_input_size)) d_fake_data = G(d_gen_input).detach() # detach to avoid training G on these labels d_fake_decision = D(preprocess(d_fake_data.t())) d_fake_error = criterion(d_fake_decision, Variable(torch.zeros(1))) # zeros = fake d_fake_error.backward() d_optimizer.step() # Only optimizes D's parameters; changes based on stored gradients from backward() for g_index in range(g_steps): # 2. 
Train G on D's response (but DO NOT train D on these labels) G.zero_grad() gen_input = Variable(gi_sampler(minibatch_size, g_input_size)) g_fake_data = G(gen_input) dg_fake_decision = D(preprocess(g_fake_data.t())) g_error = criterion(dg_fake_decision, Variable(torch.ones(1))) # we want to fool, so pretend it's all genuine g_error.backward() g_optimizer.step() # Only optimizes G's parameters if epoch % print_interval == 0: print("epoch: %s : D: %s/%s G: %s (Real: %s, Fake: %s) " % (epoch, extract(d_real_error)[0], extract(d_fake_error)[0], extract(g_error)[0], stats(extract(d_real_data)), stats(extract(d_fake_data)))) """ Explanation: 5.) 最后,训练环在两个模式中变幻:第一步,用被准确标记的真实数据 vs. 假数据训练 D;随后,训练 G 来骗过 D,这里是用的不准确标记。道友们,这是正邪之间的较量。 即便你从没接触过 PyTorch,大概也能明白发生了什么。在第一部分(for d_index in range(d_steps)循环里),我们让两种类型的数据经过 D,并对 D 的猜测 vs. 真实标记执行不同的评判标准。这是 “forward” 那一步;随后我们需要 “backward()” 来计算梯度,然后把这用来在 d_optimizer step() 中更新 D 的参数。这里,G 被使用但尚未被训练。 在最后的部分(for g_index in range(g_steps)循环里),我们对 G 执行同样的操作——注意我们要让 G 的输出穿过 D (这其实是送给造假者一个鉴定专家来练手)。但在这一步,我们并不优化、或者改变 D。我们不想让鉴定者 D 学习到错误的标记。因此,我们只执行 g_optimizer.step()。 End of explanation """
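As a quick check after training, you can compare the statistics of the generator's output with the target Gaussian (mean 4, standard deviation 1.25). The snippet below is an illustrative addition, not part of the original example; it only reuses the helpers already defined above (gi_sampler, G, extract, stats) and assumes the training loop has been run.

```python
# Post-training sanity check (illustrative addition, reuses the helpers defined above).
noise = Variable(gi_sampler(minibatch_size, g_input_size))   # fresh uniform noise
generated = G(noise)                                         # run it through the trained generator

print("Real target:      mean=%.2f, std=%.2f" % (data_mean, data_stddev))
print("Generated sample: mean=%.2f, std=%.2f" % tuple(stats(extract(generated))))
```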
davidbrough1/pymks
notebooks/intro.ipynb
mit
import pymks %matplotlib inline %load_ext autoreload %autoreload 2 """ Explanation: Meet PyMKS In this short introduction, we will demonstrate the functionality in PyMKS. We will quantify microstructures using 2-point statistics, predict effective properties using homogenization and predict local properties using localization. If you would like more technical details about any of these methods please see the theory section. End of explanation """ from pymks.datasets import make_microstructure import numpy as np X_1 = make_microstructure(n_samples=1, grain_size=(25, 25)) X_2 = make_microstructure(n_samples=1, grain_size=(95, 15)) X = np.concatenate((X_1, X_2)) """ Explanation: Quantify Microstructures using 2-Point Statistics Lets make two dual-phase microstructures with different morphologies. End of explanation """ from pymks.tools import draw_microstructures draw_microstructures(X) """ Explanation: Throughout PyMKS X is used to represent microstructures. Now that we have made the two microstructures, lets take a look at them. End of explanation """ from pymks import PrimitiveBasis from pymks.stats import correlate p_basis = PrimitiveBasis(n_states=2, domain=[0, 1]) X_corr = correlate(X, p_basis, periodic_axes=[0, 1]) """ Explanation: We can compute the 2-point statistics for these two periodic microstructures using the correlate function from pymks.stats. This function computes all of the autocorrelations and cross-correlation(s) for a microstructure. Before we compute the 2-point statistics, we will discretize them using the PrimitiveBasis function. End of explanation """ from pymks.tools import draw_correlations print X_corr[0].shape draw_correlations(X_corr[0]) draw_correlations(X_corr[1]) """ Explanation: Let's take a look at the two autocorrelations and the cross-correlation for these two microstructures. End of explanation """ from pymks.datasets import make_elastic_stress_random grain_size = [(47, 6), (4, 49), (14, 14)] n_samples = [200, 200, 200] X_train, y_train = make_elastic_stress_random(n_samples=n_samples, size=(51, 51), grain_size=grain_size, seed=0) """ Explanation: 2-Point statistics provide an object way to compare microstructures, and have been shown as an effective input to machine learning methods. Predict Homogenized Properties In this section of the intro, we are going to predict the effective stiffness for two-phase microstructures using the MKSHomogenizationModel, but we could have chosen any other effective material property. First we need to make some microstructures and their effective stress values to fit our model. Let's create 200 random instances 3 different types of microstructures, totaling to 600 microstructures. End of explanation """ draw_microstructures(X_train[::200]) """ Explanation: Once again, X_train is our microstructures. Throughout PyMKS y is used as either the property, or the field we would like to predict. In this case y_train is the effective stress values for X_train. Let's look at one of each of the three different types of microstructures. End of explanation """ from pymks import MKSHomogenizationModel p_basis = PrimitiveBasis(n_states=2, domain=[0, 1]) homogenize_model = MKSHomogenizationModel(basis=p_basis, periodic_axes=[0, 1], correlations=[(0, 0), (1, 1), (0, 1)]) """ Explanation: The MKSHomogenizationModel uses 2-point statistics, so we need to provide a discretization method for the microstructures by providing a basis function. We will also specify which correlations we want. 
End of explanation """ homogenize_model.fit(X_train, y_train) """ Explanation: Let's fit our model with the data we created. End of explanation """ n_samples = [10, 10, 10] X_test, y_test = make_elastic_stress_random(n_samples=n_samples, size=(51, 51), grain_size=grain_size, seed=100) """ Explanation: Now let's make some new data to see how good our model is. End of explanation """ y_pred = homogenize_model.predict(X_test) """ Explanation: We will try and predict the effective stress of our X_test microstructures. End of explanation """ from pymks.tools import draw_components_scatter draw_components_scatter([homogenize_model.reduced_fit_data[:,:2], homogenize_model.reduced_predict_data[:,:2]], ['Training Data', 'Test Data']) """ Explanation: The MKSHomogenizationModel generates low dimensional representations of microstructures and regression methods to predict effective properties. Take a look at the low-dimensional representations. End of explanation """ from pymks.tools import draw_goodness_of_fit fit_data = np.array([y_train, homogenize_model.predict(X_train)]) pred_data = np.array([y_test, y_pred]) draw_goodness_of_fit(fit_data, pred_data, ['Training Data', 'Test Data']) """ Explanation: Now let's look at a goodness of fit plot for our MKSHomogenizationModel. End of explanation """ from pymks.datasets import make_elastic_FE_strain_delta X_delta, y_delta = make_elastic_FE_strain_delta() """ Explanation: Looks good. The MKSHomogenizationModel can be used to predict effective properties and processing-structure evolutions. Predict Local Properties In this section of the intro, we are going to predict the local strain field in a microstructure using MKSLocalizationModel, but we could have predicted another local property. First we need some data, so let's make some. End of explanation """ from pymks import MKSLocalizationModel p_basis = PrimitiveBasis(n_states=2) localize_model = MKSLocalizationModel(basis=p_basis) """ Explanation: Once again, X_delta is our microstructures and y_delta is our local strain fields. We need to discretize the microstructure again, so we will also use the same basis function. End of explanation """ localize_model.fit(X_delta, y_delta) """ Explanation: Let's use the data to fit our MKSLocalizationModel. End of explanation """ from pymks.datasets import make_elastic_FE_strain_random X_test, y_test = make_elastic_FE_strain_random() """ Explanation: Now that we have fit our model, we will create a random microstructure and compute its local strain field, using finite element analysis. We will then try and reproduce the same strain field with our model. End of explanation """ from pymks.tools import draw_microstructure_strain draw_microstructure_strain(X_test[0], y_test[0]) """ Explanation: Let's look at the microstructure and its local strain field. End of explanation """ from pymks.tools import draw_strains_compare y_pred = localize_model.predict(X_test) draw_strains_compare(y_test[0], y_pred[0]) """ Explanation: Now let's pass that same microstructure to our MKSLocalizationModel and compare the predicted and computed local strain field. End of explanation """
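To give a feel for what the correlate step is doing under the hood, here is a small numpy sketch of a periodic autocorrelation for one phase of a two-phase microstructure. It illustrates the underlying math only and is not the PyMKS implementation; it assumes the X array of microstructures created at the top of this notebook is still in scope.

```python
# Illustrative FFT-based periodic autocorrelation (not the PyMKS implementation).
import numpy as np

def periodic_autocorrelation(microstructure, phase=1):
    indicator = (microstructure == phase).astype(float)     # 1 where the phase is present
    F = np.fft.fftn(indicator)
    corr = np.fft.ifftn(F * np.conj(F)).real / indicator.size
    return np.fft.fftshift(corr)                            # put the zero-offset term in the center

auto_11 = periodic_autocorrelation(X[0], phase=1)
print(auto_11.shape)
print(auto_11.max())   # the peak equals the volume fraction of phase 1
```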
BuzzFeedNews/2015-07-h2-visas-and-enforcement
notebooks/h2-violation-aggregates.ipynb
mit
import pandas as pd import sys sys.path.append("../utils") import loaders """ Explanation: Aggregated H-2 Guest Worker Violations The Python code below loads all WHISARD violations since 2005 (based on the end-date of the violation period); isolates the violations of laws meant to protect H-2 workers; and provides aggregate counts of the number of employers, certain violations, and workers. Methodology Load all violations, and limit them to those that meet all of the following critera: (a) DATE_END_VIOL_YEAR is 2005 or later; (b) Classified as having an ACT_ID of "H2A" or "H2B"; and (c) has an E (employee) record flag, as opposed to an R (employer) record flag. Group all of these violations by their violation "description." Count the number of matching violations for each description. Identify violations that pertain to U.S. workers, rather than guest workers, and exclude them from the analysis. Identify violations that pertain to underpaying guest workers. Calculate the number of workers affected by each set of violations, and the number of employers named (based on the first available of the following: federal EIN, legal name, trade name). Data loading End of explanation """ employers = loaders.load_employers().set_index("CASE_ID") violations = loaders.load_violations().set_index("CASE_ID") joined = violations\ .join(employers[[ "ER_EIN", "employer_id" ]]) # Get H-2A and H-2B violations from those cases h2_employee_violations = joined[ (joined["DATE_END_VIOL_YEAR"] >= 2005) & (joined["ACT_ID"].isin([ "H2A", "H2B" ])) & (joined["violation_found"] == True) & (joined["ER_EE_VIOL"] == "E") # E = "Employee" ] """ Explanation: Note: loaders is a custom module to handle most common data-loading operations in these analyses. It is available here. End of explanation """ by_act_and_description = h2_employee_violations.groupby([ "VIOLATION_DESC", "ACT_ID" ]) violation_counts = by_act_and_description\ .size()\ .unstack()\ .fillna(0)\ .sort([ "H2A", "H2B" ], ascending=False) violation_counts """ Explanation: List of violation counts by description and ACT_ID End of explanation """ non_guestworker_descs = [ "17 Preferential treatment given to H-2A workers", "02 Unlawful rejection of US workers (2008 & 2010 Rules)", "Requirement to Hire U.S. Workers - ER failed to properly hire or rehire U.S. workers", "Layoff- ER improperly laid off similarly employed U.S. workers within 120 days of date of need, unless employee refused or was lawfully rejected", "Job Opportunity - (U.S. workers) - ER failed to offer U.S. workers bona fide, full-time temp. position due to inequitable qualification requirements", "Terms and Working Conditions for U.S. 
Workers - ER failed to offer terms and working conditions as required" ] guestworker_wage_viols = [ "27 Failed to pay required rate(s) of pay (2008 & 2010 Rules)", "05 Failed to pay proper rate", "07 Illegal deductions", "28 Unlawful deductions (2008 & 2010 Rules)", "06 Failed to pay 3/4 guarantee", "09 Illegal charges for housing", "19 Failed to comply - 3/4-guarantee req (2008 & 2010 Rules)", "09 Unlawful charges for public housing (2008 & 2010 Rules)", "Offered Wage- failed to pay the offered wage rate which equals or exceeds the highest of the prevailing wage, Federal, State, or local minimum wage", "Wages - Prohibited Fees - ER sought or required workers to pay prohibited fees or expenses related to the TEC (petition/agent/attorney/recruitment)", "Impermissible Deductions - ER failed to specify deductions from pay.", "Incentive Wage - offered wage based on incentives failed to equal or exceed highest of the PW/Fed./State/local MW on a weekly/bi-weekly/monthly basis.", "Back Wages due - failure to offer worker bona fide, full-time temporary position comparable to U.S. workers similarly employed Attestation 1" ] # Make sure that we've correctly transcribed the violation descriptions assert((violation_counts.ix[non_guestworker_descs].sum(axis=1) > 0).mean() == 1) assert((violation_counts.ix[guestworker_wage_viols].sum(axis=1) > 0).mean() == 1) h2_guestworker_violations = h2_employee_violations[ ~h2_employee_violations["VIOLATION_DESC"].isin(non_guestworker_descs) ] h2_guestworker_wage_violations = h2_employee_violations[ h2_employee_violations["VIOLATION_DESC"].isin(guestworker_wage_viols) ] """ Explanation: Violation description categories End of explanation """ h2_guestworker_violations["employer_id"].nunique() h2_guestworker_violations[ (h2_guestworker_violations["employer_id"] == h2_guestworker_violations["ER_EIN"]) ]["ER_EIN"].nunique() """ Explanation: Calculations Counts of employers found to have violated laws designed to protect H-2 guest workers: End of explanation """ h2_guestworker_violations["CASE_EER_ID"].nunique() """ Explanation: Note: The first count above uses employers' legal or trade names if their case data does not include an EIN. The second count includes only employers with EINs. Count of such workers violated: End of explanation """ h2_guestworker_wage_violations["CASE_EER_ID"].nunique() """ Explanation: Note: Individual workers are uniquely identified on a per-case basis, but are not tracked across cases or employers. Count of H-2 workers being paid less than the promised wage: End of explanation """
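The employer_id fallback described in the methodology (federal EIN, then legal name, then trade name) is built inside the loaders helper, so it does not appear in this notebook. As a rough sketch, that logic could look like the snippet below; the legal-name and trade-name column names are assumptions, since only ER_EIN appears in the code above.

```python
# Sketch of the employer_id fallback described in the methodology.
# ER_LEGAL_NAME and ER_TRADE_NAME are placeholder column names (assumed).
employer_id = (
    employers["ER_EIN"]
    .fillna(employers["ER_LEGAL_NAME"])
    .fillna(employers["ER_TRADE_NAME"])
)
employers = employers.assign(employer_id=employer_id)
```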
GoogleCloudPlatform/mlops-on-gcp
immersion/tfx_pipelines/01-walkthrough/labs/lab-01.ipynb
apache-2.0
import absl import os import tempfile import time import tensorflow as tf import tensorflow_data_validation as tfdv import tensorflow_model_analysis as tfma import tensorflow_transform as tft import tfx from pprint import pprint from tensorflow_metadata.proto.v0 import schema_pb2, statistics_pb2, anomalies_pb2 from tensorflow_transform.tf_metadata import schema_utils from tfx.components import CsvExampleGen from tfx.components import Evaluator from tfx.components import ExampleValidator from tfx.components import InfraValidator from tfx.components import Pusher from tfx.components import ResolverNode from tfx.components import SchemaGen from tfx.components import StatisticsGen from tfx.components import Trainer from tfx.components import Transform from tfx.components import Tuner from tfx.dsl.components.base import executor_spec from tfx.components.common_nodes.importer_node import ImporterNode from tfx.components.trainer import executor as trainer_executor from tfx.dsl.experimental import latest_blessed_model_resolver from tfx.orchestration import metadata from tfx.orchestration import pipeline from tfx.orchestration.experimental.interactive.interactive_context import InteractiveContext from tfx.proto import evaluator_pb2 from tfx.proto import example_gen_pb2 from tfx.proto import infra_validator_pb2 from tfx.proto import pusher_pb2 from tfx.proto import trainer_pb2 from tfx.proto.evaluator_pb2 import SingleSlicingSpec from tfx.types import Channel from tfx.types.standard_artifacts import Model from tfx.types.standard_artifacts import HyperParameters from tfx.types.standard_artifacts import ModelBlessing from tfx.types.standard_artifacts import InfraBlessing """ Explanation: TFX Components Walk-through Learning Objectives Develop a high level understanding of TFX pipeline components. Learn how to use a TFX Interactive Context for prototype development of TFX pipelines. Work with the Tensorflow Data Validation (TFDV) library to check and analyze input data. Utilize the Tensorflow Transform (TFT) library for scalable data preprocessing and feature transformations. Employ the Tensorflow Model Analysis (TFMA) library for model evaluation. In this lab, you will work with the Covertype Data Set and use TFX to analyze, understand, and pre-process the dataset and train, analyze, validate, and deploy a multi-class classification model to predict the type of forest cover from cartographic features. You will utilize TFX Interactive Context to work with the TFX components interactivelly in a Jupyter notebook environment. Working in an interactive notebook is useful when doing initial data exploration, experimenting with models, and designing ML pipelines. You should be aware that there are differences in the way interactive notebooks are orchestrated, and how they access metadata artifacts. In a production deployment of TFX on GCP, you will use an orchestrator such as Kubeflow Pipelines, or Cloud Composer. In an interactive mode, the notebook itself is the orchestrator, running each TFX component as you execute the notebook cells. In a production deployment, ML Metadata will be managed in a scalabe database like MySQL, and artifacts in apersistent store such as Google Cloud Storage. In an interactive mode, both properties and payloads are stored in a local file system of the Jupyter host. Setup Note: Currently, TFMA visualizations do not render properly in JupyterLab. It is recommended to run this notebook in Jupyter Classic Notebook. 
To switch to Classic Notebook select Launch Classic Notebook from the Help menu. End of explanation """ print("Tensorflow Version:", tf.__version__) print("TFX Version:", tfx.__version__) print("TFDV Version:", tfdv.__version__) print("TFMA Version:", tfma.VERSION_STRING) absl.logging.set_verbosity(absl.logging.INFO) """ Explanation: Note: this lab was developed and tested with the following TF ecosystem package versions: Tensorflow Version: 2.3.1 TFX Version: 0.25.0 TFDV Version: 0.25.0 TFMA Version: 0.25.0 If you encounter errors with the above imports (e.g. TFX component not found), check your package versions in the cell below. End of explanation """ os.environ['PATH'] += os.pathsep + '/home/jupyter/.local/bin' """ Explanation: If the versions above do not match, update your packages in the current Jupyter kernel below. The default %pip package installation location is not on your system installation PATH; use the command below to append the local installation path to pick up the latest package versions. Note that you may also need to restart your notebook kernel to pick up the specified package versions and re-run the imports cell above before proceeding with the lab. End of explanation """ ARTIFACT_STORE = os.path.join(os.sep, 'home', 'jupyter', 'artifact-store') SERVING_MODEL_DIR=os.path.join(os.sep, 'home', 'jupyter', 'serving_model') DATA_ROOT = 'gs://workshop-datasets/covertype/small' """ Explanation: Configure lab settings Set constants, location paths and other environment settings. End of explanation """ PIPELINE_NAME = 'tfx-covertype-classifier' PIPELINE_ROOT = os.path.join(ARTIFACT_STORE, PIPELINE_NAME, time.strftime("%Y%m%d_%H%M%S")) os.makedirs(PIPELINE_ROOT, exist_ok=True) context = InteractiveContext( pipeline_name=PIPELINE_NAME, pipeline_root=PIPELINE_ROOT, metadata_connection_config=None) """ Explanation: Creating Interactive Context TFX Interactive Context allows you to create and run TFX Components in an interactive mode. It is designed to support experimentation and development in a Jupyter Notebook environment. It is an experimental feature and major changes to interface and functionality are expected. When creating the interactive context you can specifiy the following parameters: - pipeline_name - Optional name of the pipeline for ML Metadata tracking purposes. If not specified, a name will be generated for you. - pipeline_root - Optional path to the root of the pipeline's outputs. If not specified, an ephemeral temporary directory will be created and used. - metadata_connection_config - Optional metadata_store_pb2.ConnectionConfig instance used to configure connection to a ML Metadata connection. If not specified, an ephemeral SQLite MLMD connection contained in the pipeline_root directory with file name "metadata.sqlite" will be used. End of explanation """ output_config = example_gen_pb2.Output( split_config=example_gen_pb2.SplitConfig(splits=[ # TODO: Your code to configure train data split # TODO: Your code to configure eval data split ])) example_gen = tfx.components.CsvExampleGen( input_base=DATA_ROOT, output_config=output_config) context.run(example_gen) """ Explanation: Ingesting data using ExampleGen In any ML development process the first step is to ingest the training and test datasets. The ExampleGen component ingests data into a TFX pipeline. It consumes external files/services to generate a set file files in the TFRecord format, which will be used by other TFX components. 
It can also shuffle the data and split into an arbitrary number of partitions. <img src=https://github.com/GoogleCloudPlatform/mlops-on-gcp/raw/master/images/ExampleGen.png width="300"> Configure and run CsvExampleGen In this exercise, you use the CsvExampleGen specialization of ExampleGen to ingest CSV files from a GCS location and emit them as tf.Example records for consumption by downstream TFX pipeline components. Your task is to configure the component to create 80-20 train and eval splits. Hint: review the ExampleGen proto definition to split your data with hash buckets. End of explanation """ examples_uri = example_gen.outputs['examples'].get()[0].uri tfrecord_filenames = [os.path.join(examples_uri, 'train', name) for name in os.listdir(os.path.join(examples_uri, 'train'))] dataset = tf.data.TFRecordDataset(tfrecord_filenames, compression_type="GZIP") for tfrecord in dataset.take(2): example = tf.train.Example() example.ParseFromString(tfrecord.numpy()) for name, feature in example.features.feature.items(): if feature.HasField('bytes_list'): value = feature.bytes_list.value if feature.HasField('float_list'): value = feature.float_list.value if feature.HasField('int64_list'): value = feature.int64_list.value print('{}: {}'.format(name, value)) print('******') """ Explanation: Examine the ingested data End of explanation """ statistics_gen = tfx.components.StatisticsGen( examples=example_gen.outputs['examples']) context.run(statistics_gen) """ Explanation: Generating statistics using StatisticsGen The StatisticsGen component generates data statistics that can be used by other TFX components. StatisticsGen uses TensorFlow Data Validation. StatisticsGen generate statistics for each split in the ExampleGen component's output. In our case there two splits: train and eval. <img src=https://github.com/GoogleCloudPlatform/mlops-on-gcp/raw/master/images/StatisticsGen.png width="200"> Configure and run the StatisticsGen component End of explanation """ context.show(statistics_gen.outputs['statistics']) """ Explanation: Visualize statistics The generated statistics can be visualized using the tfdv.visualize_statistics() function from the TensorFlow Data Validation library or using a utility method of the InteractiveContext object. In fact, most of the artifacts generated by the TFX components can be visualized using InteractiveContext. End of explanation """ schema_gen = SchemaGen( statistics=statistics_gen.outputs['statistics'], infer_feature_shape=False) context.run(schema_gen) """ Explanation: Infering data schema using SchemaGen Some TFX components use a description input data called a schema. The schema is an instance of schema.proto. It can specify data types for feature values, whether a feature has to be present in all examples, allowed value ranges, and other properties. SchemaGen automatically generates the schema by inferring types, categories, and ranges from data statistics. The auto-generated schema is best-effort and only tries to infer basic properties of the data. It is expected that developers review and modify it as needed. SchemaGen uses TensorFlow Data Validation. The SchemaGen component generates the schema using the statistics for the train split. The statistics for other splits are ignored. 
<img src=https://github.com/GoogleCloudPlatform/mlops-on-gcp/raw/master/images/SchemaGen.png width="200"> Configure and run the SchemaGen components End of explanation """ context.show(schema_gen.outputs['schema']) """ Explanation: Visualize the inferred schema End of explanation """ schema_proto_path = '{}/{}'.format(schema_gen.outputs['schema'].get()[0].uri, 'schema.pbtxt') schema = tfdv.load_schema_text(schema_proto_path) """ Explanation: Updating the auto-generated schema In most cases the auto-generated schemas must be fine-tuned manually using insights from data exploration and/or domain knowledge about the data. For example, you know that in the covertype dataset there are seven types of forest cover (coded using 1-7 range) and that the value of the Slope feature should be in the 0-90 range. You can manually add these constraints to the auto-generated schema by setting the feature domain. Load the auto-generated schema proto file End of explanation """ # TODO: Your code to restrict the categorical feature Cover_Type between the values of 0 and 6. # TODO: Your code to restrict the numeric feature Slope between 0 and 90. tfdv.display_schema(schema=schema) """ Explanation: Modify the schema You can use the protocol buffer APIs to modify the schema. Hint: Review the TFDV library API documentation on setting a feature's domain. You can use the protocol buffer APIs to modify the schema. Review the Tensorflow Metadata proto definition for configuration options. End of explanation """ schema_dir = os.path.join(ARTIFACT_STORE, 'schema') tf.io.gfile.makedirs(schema_dir) schema_file = os.path.join(schema_dir, 'schema.pbtxt') tfdv.write_schema_text(schema, schema_file) !cat {schema_file} """ Explanation: Save the updated schema End of explanation """ schema_importer = ImporterNode( instance_name='Schema_Importer', source_uri=schema_dir, artifact_type=tfx.types.standard_artifacts.Schema, reimport=False) context.run(schema_importer) """ Explanation: Importing the updated schema using ImporterNode The ImporterNode component allows you to import an external artifact, including the schema file, so it can be used by other TFX components in your workflow. Configure and run the ImporterNode component End of explanation """ context.show(schema_importer.outputs['result']) """ Explanation: Visualize the imported schema End of explanation """ # TODO: Complete ExampleValidator # Hint: review the visual above and review the documentation on ExampleValidator's inputs and outputs: # https://www.tensorflow.org/tfx/guide/exampleval # Make sure you use the output of the schema_importer component created above. example_validator = ExampleValidator() context.run(example_validator) """ Explanation: Validating data with ExampleValidator The ExampleValidator component identifies anomalies in data. It identifies anomalies by comparing data statistics computed by the StatisticsGen component against a schema generated by SchemaGen or imported by ImporterNode. ExampleValidator can detect different classes of anomalies. For example it can: perform validity checks by comparing data statistics against a schema detect training-serving skew by comparing training and serving data. detect data drift by looking at a series of data. The ExampleValidator component validates the data in the eval split only. Other splits are ignored. 
<img src=https://github.com/GoogleCloudPlatform/mlops-on-gcp/raw/master/images/ExampleValidator.png width="350"> Configure and run the ExampleValidator component End of explanation """ train_uri = example_validator.outputs['anomalies'].get()[0].uri train_anomalies_filename = os.path.join(train_uri, "train/anomalies.pbtxt") !cat $train_anomalies_filename """ Explanation: Examine the output of ExampleValidator The output artifact of the ExampleValidator is the anomalies.pbtxt file describing an anomalies_pb2.Anomalies protobuf. End of explanation """ context.show(example_validator.outputs['output']) """ Explanation: Visualize validation results The file anomalies.pbtxt can be visualized using context.show. End of explanation """ TRANSFORM_MODULE = 'preprocessing.py' !cat {TRANSFORM_MODULE} """ Explanation: In our case no anomalies were detected in the eval split. For a detailed deep dive into data validation and schema generation refer to the lab-31-tfdv-structured-data lab. Preprocessing data with Transform The Transform component performs data transformation and feature engineering. The Transform component consumes tf.Examples emitted from the ExampleGen component and emits the transformed feature data and the SavedModel graph that was used to process the data. The emitted SavedModel can then be used by serving components to make sure that the same data pre-processing logic is applied at training and serving. The Transform component requires more code than many other components because of the arbitrary complexity of the feature engineering that you may need for the data and/or model that you're working with. It requires code files to be available which define the processing needed. <img src=https://github.com/GoogleCloudPlatform/mlops-on-gcp/raw/master/images/Transform.png width="400"> Define the pre-processing module To configure Trainsform, you need to encapsulate your pre-processing code in the Python preprocessing_fn function and save it to a python module that is then provided to the Transform component as an input. This module will be loaded by transform and the preprocessing_fn function will be called when the Transform component runs. In most cases, your implementation of the preprocessing_fn makes extensive use of TensorFlow Transform for performing feature engineering on your dataset. End of explanation """ transform = Transform( examples=example_gen.outputs['examples'], schema=schema_importer.outputs['result'], module_file=TRANSFORM_MODULE) context.run(transform) """ Explanation: Configure and run the Transform component. End of explanation """ os.listdir(transform.outputs['transform_graph'].get()[0].uri) """ Explanation: Examine the Transform component's outputs The Transform component has 2 outputs: transform_graph - contains the graph that can perform the preprocessing operations (this graph will be included in the serving and evaluation models). transformed_examples - contains the preprocessed training and evaluation data. 
Take a peek at the transform_graph artifact: it points to a directory containing 3 subdirectories: End of explanation """ os.listdir(transform.outputs['transformed_examples'].get()[0].uri) transform_uri = transform.outputs['transformed_examples'].get()[0].uri tfrecord_filenames = [os.path.join(transform_uri, 'train', name) for name in os.listdir(os.path.join(transform_uri, 'train'))] dataset = tf.data.TFRecordDataset(tfrecord_filenames, compression_type="GZIP") for tfrecord in dataset.take(2): example = tf.train.Example() example.ParseFromString(tfrecord.numpy()) for name, feature in example.features.feature.items(): if feature.HasField('bytes_list'): value = feature.bytes_list.value if feature.HasField('float_list'): value = feature.float_list.value if feature.HasField('int64_list'): value = feature.int64_list.value print('{}: {}'.format(name, value)) print('******') """ Explanation: And the transform.examples artifact End of explanation """ TRAINER_MODULE_FILE = 'model.py' !cat {TRAINER_MODULE_FILE} """ Explanation: Train your TensorFlow model with the Trainer component The Trainer component trains a model using TensorFlow. Trainer takes: tf.Examples used for training and eval. A user provided module file that defines the trainer logic. A data schema created by SchemaGen or imported by ImporterNode. A proto definition of train args and eval args. An optional transform graph produced by upstream Transform component. An optional base models used for scenarios such as warmstarting training. <img src=https://github.com/GoogleCloudPlatform/mlops-on-gcp/raw/master/images/Trainer.png width="400"> Define the trainer module To configure Trainer, you need to encapsulate your training code in a Python module that is then provided to the Trainer as an input. End of explanation """ trainer = Trainer( custom_executor_spec=executor_spec.ExecutorClassSpec(trainer_executor.GenericExecutor), module_file=TRAINER_MODULE_FILE, transformed_examples=transform.outputs.transformed_examples, schema=schema_importer.outputs.result, transform_graph=transform.outputs.transform_graph, train_args=trainer_pb2.TrainArgs(splits=['train'], num_steps=5000), eval_args=trainer_pb2.EvalArgs(splits=['eval'], num_steps=1000)) context.run(trainer) """ Explanation: Create and run the Trainer component As of the 0.25.0 release of TFX, the Trainer component only supports passing a single field - num_steps - through the train_args and eval_args arguments. End of explanation """ logs_path = trainer.outputs['model_run'].get()[0].uri print(logs_path) """ Explanation: Analyzing training runs with TensorBoard In this step you will analyze the training run with TensorBoard.dev. TensorBoard.dev is a managed service that enables you to easily host, track and share your ML experiments. Retrieve the location of TensorBoard logs Each model run's train and eval metric logs are written to the model_run directory by the Tensorboard callback defined in model.py. End of explanation """ tuner = Tuner( module_file=TRAINER_MODULE_FILE, examples=transform.outputs['transformed_examples'], transform_graph=transform.outputs['transform_graph'], train_args=trainer_pb2.TrainArgs(num_steps=1000), eval_args=trainer_pb2.EvalArgs(num_steps=500)) context.run(tuner) """ Explanation: Upload the logs and start TensorBoard.dev Open a new JupyterLab terminal window From the terminal window, execute the following command tensorboard dev upload --logdir [YOUR_LOGDIR] Where [YOUR_LOGDIR] is an URI retrieved by the previous cell. 
You will be asked to authorize TensorBoard.dev using your Google account. If you don't have a Google account or you don't want to authorize TensorBoard.dev you can skip this exercise. After the authorization process completes, follow the link provided to view your experiment. Tune your model's hyperparameters with the Tuner component The Tuner component makes use of the Python KerasTuner API to tune your model's hyperparameters. It tighty integrates with the Transform and Trainer components for model hyperparameter tuning in continuous training pipelines as well as advanced use cases such as feature selection, feature engineering, and model architecture search. <img src=https://github.com/GoogleCloudPlatform/mlops-on-gcp/raw/master/images/Tuner_Overview.png width="400"> Tuner takes: A user provided module file (or module fn) that defines the tuning logic, including model definition, hyperparameter search space, objective etc. tf.Examples used for training and eval. Protobuf definition of train args and eval args. (Optional) Protobuf definition of tuning args. (Optional) transform graph produced by an upstream Transform component. (Optional) A data schema created by a SchemaGen pipeline component and optionally altered by the developer. <img src=https://github.com/GoogleCloudPlatform/mlops-on-gcp/raw/master/images/Tuner.png width="400"> With the given data, model, and objective, Tuner tunes the hyperparameters and emits the best results that can be directly fed into the Trainer component during model re-training. End of explanation """ hparams_importer = ImporterNode( instance_name='import_hparams', # This can be Tuner's output file or manually edited file. The file contains # text format of hyperparameters (kerastuner.HyperParameters.get_config()) source_uri=tuner.outputs.best_hyperparameters.get()[0].uri, artifact_type=HyperParameters) context.run(hparams_importer) # TODO: your code to retrain your model with the best hyperparameters found by the Tuner component above. # Hint: review the Trainer code above in this notebook and the documentation for how to configure the trainer # to use the output artifact from the hparams_importer. trainer = Trainer() context.run(trainer) """ Explanation: Retrain your model by running Tuner with the best hyperparameters End of explanation """ model_resolver = ResolverNode( instance_name='latest_blessed_model_resolver', resolver_class=latest_blessed_model_resolver.LatestBlessedModelResolver, model=Channel(type=Model), model_blessing=Channel(type=ModelBlessing)) context.run(model_resolver) """ Explanation: Evaluating trained models with Evaluator The Evaluator component analyzes model performance using the TensorFlow Model Analysis library. It runs inference requests on particular subsets of the test dataset, based on which slices are defined by the developer. Knowing which slices should be analyzed requires domain knowledge of what is important in this particular use case or domain. The Evaluator can also optionally validate a newly trained model against a previous model. In this lab, you only train one model, so the Evaluator automatically will label the model as "blessed". <img src=https://github.com/GoogleCloudPlatform/mlops-on-gcp/raw/master/images/Evaluator.png width="400"> Configure and run the Evaluator component Use the ResolverNode to pick the previous model to compare against. The model resolver is only required if performing model validation in addition to evaluation. In this case we validate against the latest blessed model. 
If no model has been blessed before (as in this case) the evaluator will make our candidate the first blessed model. End of explanation """ # TODO: Your code here to create a tfma.MetricThreshold. # Review the API documentation here: https://www.tensorflow.org/tfx/model_analysis/api_docs/python/tfma/MetricThreshold # Hint: Review the API documentation for tfma.GenericValueThreshold to constrain accuracy between 50% and 99%. accuracy_threshold = metrics_specs = tfma.MetricsSpec( metrics = [ tfma.MetricConfig(class_name='SparseCategoricalAccuracy', threshold=accuracy_threshold), tfma.MetricConfig(class_name='ExampleCount')]) eval_config = tfma.EvalConfig( model_specs=[ tfma.ModelSpec(label_key='Cover_Type') ], metrics_specs=[metrics_specs], slicing_specs=[ tfma.SlicingSpec(), tfma.SlicingSpec(feature_keys=['Wilderness_Area']) ] ) eval_config model_analyzer = Evaluator( examples=example_gen.outputs.examples, model=trainer.outputs.model, baseline_model=model_resolver.outputs.model, eval_config=eval_config ) context.run(model_analyzer, enable_cache=False) """ Explanation: Configure evaluation metrics and slices. End of explanation """ model_blessing_uri = model_analyzer.outputs.blessing.get()[0].uri !ls -l {model_blessing_uri} """ Explanation: Check the model performance validation status End of explanation """ evaluation_uri = model_analyzer.outputs['evaluation'].get()[0].uri evaluation_uri !ls {evaluation_uri} eval_result = tfma.load_eval_result(evaluation_uri) eval_result tfma.view.render_slicing_metrics(eval_result) tfma.view.render_slicing_metrics( eval_result, slicing_column='Wilderness_Area') """ Explanation: Visualize evaluation results You can visualize the evaluation results using the tfma.view.render_slicing_metrics() function from TensorFlow Model Analysis library. Setup Note: Currently, TFMA visualizations don't render in JupyterLab. Make sure that you run this notebook in Classic Notebook. End of explanation """ infra_validator = InfraValidator( model=trainer.outputs['model'], examples=example_gen.outputs['examples'], serving_spec=infra_validator_pb2.ServingSpec( tensorflow_serving=infra_validator_pb2.TensorFlowServing( tags=['latest']), local_docker=infra_validator_pb2.LocalDockerConfig(), ), validation_spec=infra_validator_pb2.ValidationSpec( max_loading_time_seconds=60, num_tries=5, ), request_spec=infra_validator_pb2.RequestSpec( tensorflow_serving=infra_validator_pb2.TensorFlowServingRequestSpec(), num_examples=5, ) ) context.run(infra_validator, enable_cache=False) """ Explanation: InfraValidator The InfraValidator component acts as an additional early warning layer by validating a candidate model in a sandbox version of its serving infrastructure to prevent an unservable model from being pushed to production. Compared to the Evaluator component above which validates a model's performance, the InfraValidator component is validating that a model is able to generate predictions from served examples in an environment configured to match production. The config below takes a model and examples, launches the model in a sand-boxed TensorflowServing model server from the latest image in a local docker engine, and optionally checks that the model binary can be loaded and queried before "blessing" it for production. 
<img src=https://github.com/GoogleCloudPlatform/mlops-on-gcp/raw/master/images/InfraValidator.png width="400"> End of explanation """ infra_blessing_uri = infra_validator.outputs.blessing.get()[0].uri !ls -l {infra_blessing_uri} """ Explanation: Check the model infrastructure validation status End of explanation """ trainer.outputs['model'] pusher = Pusher( model=trainer.outputs['model'], model_blessing=model_analyzer.outputs['blessing'], infra_blessing=infra_validator.outputs['blessing'], push_destination=pusher_pb2.PushDestination( filesystem=pusher_pb2.PushDestination.Filesystem( base_directory=SERVING_MODEL_DIR))) context.run(pusher) """ Explanation: Deploying models with Pusher The Pusher component checks whether a model has been "blessed", and if so, deploys it by pushing the model to a well known file destination. <img src=https://github.com/GoogleCloudPlatform/mlops-on-gcp/raw/master/images/Pusher.png width="400"> Configure and run the Pusher component End of explanation """ pusher.outputs # Set `PATH` to include a directory containing `saved_model_cli. PATH=%env PATH %env PATH=/opt/conda/envs/tfx/bin:{PATH} latest_pushed_model = os.path.join(SERVING_MODEL_DIR, max(os.listdir(SERVING_MODEL_DIR))) !saved_model_cli show --dir {latest_pushed_model} --all """ Explanation: Examine the output of Pusher End of explanation """
cosmolejo/Fisica-Experimental-3
Calculo_Error/Poisson/Poisson.ipynb
gpl-3.0
dado = np.array([5, 3, 3, 2, 5, 1, 2, 3, 6, 2, 1, 3, 6, 6, 2, 2, 5, 6, 4, 2, 1, 3, 4, 2, 2, 5, 3, 3, 2, 2, 2, 1, 6, 2, 2, 6, 1, 3, 3, 3, 4, 4, 6, 6, 1, 2, 2, 6, 1, 4, 2, 5, 3, 6, 6, 3, 5, 2, 2, 4, 2, 2, 4, 4, 3, 3, 1, 2, 6, 1, 3, 3, 5, 4, 6, 6, 4, 2, 5, 6, 1, 4, 5, 4, 3, 5, 4, 1, 4, 6, 6, 6, 3, 1, 5, 6, 4, 3, 4, 6, 3, 5, 2, 6, 3, 6, 1, 4, 3, 4, 1]) suma = np.array([8, 5, 6, 5, 8, 4, 12, 4, 11, 6, 4, 6, 7, 6, 4, 3, 8, 8, 4, 6, 8, 12, 3, 8, 5, 7, 9, 9, 7, 6, 4, 8, 6, 3, 7, 6, 9, 12, 6, 11, 5, 9, 8, 5, 10, 12, 4, 11, 7, 10, 8, 8, 9, 7, 7, 5]) prob = 10./36 # probabilidad de sacar una suma inferior a 6 #prob = 6./21 # probabilidad de sacar una suma inferior a 6 #np.where(suma[0:8]<6) """ Explanation: ANÁLISIS ESTADÍSTICO DE DATOS: Distribuciones discretas Material en construcción, no ha sido revisado por pares. Última revisión: agosto 2016, Edgar Rueda Referencias bibliográficas García, F. J. G., López, N. C., & Calvo, J. Z. (2009). Estadística básica para estudiantes de ciencias. Squires, G. L. (2001). Practical physics. Cambridge university press. Conjunto de datos Para esta sección haremos uso de dos conjuntos de datos, el primero se obtiene a partir del lanzamiento de un dado. El segundo conjunto corresponde a la suma de los dados por cada lanzamiento (se lanzan dos dados al mismo tiempo). End of explanation """ mediaS = suma.size*prob # media de la distribución binomial devS = np.sqrt(suma.size*prob*(1.-prob)) # desviación estándar de la distribución binomial real = np.where(suma<6) # where entrega la info en un tuple de una posición donde está el array real = real[0] # extraemos la información del tuple en la posición uno y la guardamos en real duda = 16 # x, número de éxitos cuya probabilidad se quiere conocer Prob = 0 # probabilidad de tener un número de éxitos inferior o igual a duda for cont in range(0,duda): Prob = Prob + (math.factorial(suma.size)/(math.factorial(cont)*math.factorial(suma.size - cont))) \ *prob**cont*(1.-prob)**(suma.size-cont) print('La probabilidad de que la suma sea inferior a 6 es %.2f' % prob) print('Número total de pruebas igual a %d' % suma.size) print('Suma promedio igual a %.1f' %mediaS) print('Desviación estándar de la suma = %.1f' % devS) print('Número de veces que suma menos de 6 en la muestra es %.1f' % real.size) print('La probabilidad de que el número de éxitos en una muestra de %d sea \ inferior o igual a %d, donde el éxito es que la suma sea inferior a 6, es %.4f' %(suma.size,duda,Prob)) """ Explanation: PARA RECORDAR Distribución binomial Se denomina proceso de Bernoulli aquel experimento que consiste en repetir n veces una prueba, cada una independiente, donde el resultado se clasifica como éxito o fracaso (excluyente). La probabilidad de éxito se denota por $p$. Se define la $\textbf{variable aleatoria binomial}$ como la función que dá el número de éxitos en un proceso de Bernoulli. La variable aleatoria $X$ tomará valores $X = {0,1,2,...,n}$ para un experimento con n pruebas. 
La distribución binomial (distribución de probabilidad) se representa como: $$f(x) = P(X = x) = b(x;n,p)$$ Note que para calcular la probabilidad, debido a la independiencia de las pruebas, basta con multiplicar la probabilidad de los éxitos por la probabilidad de los fracasos, $p^x q^{n-x}$, y este valor multiplicarlo por el número posible de disposiciones en los que salgan los éxitos (permutaciones), $$b(x;n,p) = \frac{n!}{x!(n-x)!}p^x q^{n-x}$$ La probabilidad de que $X$ sea menor a un valor $x$ determinado es: $$P(X \leq x) = B(x;n,p) = \sum_{r = 0}^x b(r;n,p)$$ La media es $\mu = np$ y la desviación estándar es $\sigma = \sqrt{npq}$ donde $q = 1 - p$. Una propiedad importante de la distribución binomial es que será simétrica si $p=q$, y con asimetría a la derecha cuando $p<q$. Del conjunto de datos que se obtienen de la suma de dos dados, tenemos: End of explanation """ n = suma.size p = prob x = np.arange(0,30) histB = stats.binom.pmf(x, n, p) plt.figure(1) plt.rcParams['figure.figsize'] = 20, 6 # para modificar el tamaño de la figura plt.plot(x, histB, 'bo', ms=8, label='Distribucion binomial') plt.xlabel('Numero de exitos') plt.ylabel('Probabilidad') ProbB = np.sum(histB[0:duda]) print('Probabilidad de que en solo %d ocasiones la suma sea inferior a 6 es %.4f' %(duda,ProbB)) """ Explanation: Usando la función binom de python podemos graficar la función de distribución binomial para este caso. End of explanation """ Ima = misc.imread('HDF-bw.jpg') # Se lee la imagen como matriz en escala de 8 bit plt.rcParams['figure.figsize'] = 20, 6 # para modificar el tamaño de la figura Imab = Ima[100:500,100:700,1] # La imagen original tenía tres canales (RGB); se elige un canal y se recorta plt.figure(2) plt.imshow(Imab, cmap='gray') """ Explanation: $\textbf{FIGURA 1.}$ Distribución binomial para el ejemplo. Efectivamente, se obtuvo la misma probabilidad. Note que si se desconoce la probabilidad $p$ esta se puede determinar si se conoce que la distribución es binomial. Una vez se tiene la probabilidad de éxito se pueden determinar las probabilidades para cualquier cantidad de pruebas. La distribución binomial es de gran utilidad en campos científicos como el control de calidad y las aplicaciones médicas. Distribución de Poisson En un experimento aleatorio en el que se busque medir el número de sucesos o resultados de un tipo que ocurren en un intervalo continuo (número de fotones que llegan a un detector en intervalos de tiempo iguales, número de estrellas en cuadrículas idénticas en el cielo, número de fotones en un modo en un oscilador mecánico cuántico, energía total en un oscilador armónico mecánico cuántico), se le conocerá como proceso de Poisson, y deberá cumplir las siguientes reglas: Los resultados de cada intervalo son independientes. La probabilidad de que un resultado ocurra en un intervalo pequeño es proporcional al tamaño del intervalo. La probabilidad es constante por lo que se puede definir un valor medio de resultados por unidad de intervalo. El proceso es estable. La probabilidad de que ocurra más de un resultado en un intervalo lo suficientemente pequeño es despreciable. El intervalo es tán pequeño que a lo sumo se espera solo un suceso (resultado). La distribución de Poisson es un caso límite de la distribución binomial cuando el número de eventos $N$ tiende a infinito y la probabilidad de acierto $p$ tiende a cero (ver libro de Squire para la deducción). 
La $\textbf{variable aleatoria de Poisson}$ se define como el número de resultados que aparecen en un experimento que sigue el proceso de Poisson. La distribución de probabilidad asociada se denomina distribución de Poisson y depende solo del parámetro medio de resultados $\lambda$. $$X = (0,1,2,...)$$ $$f(x) = P(X=x) = p(x;\lambda)$$ La expresión para la distribución se obtiene a partir de la binomial (mirar libro de Garcia): $$p(x;\lambda) = \frac{\lambda^x}{x!} e^{- \lambda}$$ con media $\lambda$ y desviación estándar $\sqrt{\lambda}$. End of explanation """ plt.rcParams['figure.figsize'] = 18, 15 # para modificar el tamaño de la figura fil, col = Imab.shape # número de filas y columnas de la imagen numlado = 10 # Número de imágenes por lado contar = 1 plt.figure(5) for enfil in range(1,numlado+1): for encol in range(1,numlado+1): plt.subplot(numlado,numlado,contar) plt.imshow(Imab[(enfil-1)*np.int(fil/numlado):enfil*np.int(fil/numlado), \ (encol-1)*np.int(col/numlado):encol*np.int(col/numlado)],cmap='gray') frame1 = plt.gca() frame1.axes.get_yaxis().set_visible(False) frame1.axes.get_xaxis().set_visible(False) contar = contar + 1 # Para el caso de 7x7 imágenes en gal se presentan el número de galaxias contadas gal = np.array([2., 3., 6., 5., 4., 9., 10., \ 2., 3., 7., 1., 3., 1., 6., \ 6., 5., 4., 3., 4., 2., 4., \ 4., 6., 3., 3., 4., 3., 2., \ 5., 4., 2., 2., 6., 5., 9., \ 4., 7., 2., 3., 3., 3., 5., \ 6., 3., 4., 7., 4., 6., 7.]) la = np.mean(gal) # Valor promedio del conjunto de datos # Distribución del conjunto de datos. La primera fila es el número de galaxias, la segunda es el número de veces que # se repite dicho número de galaxias distriGal = np.array([[0., 1., 2., 3., 4., 5., 6., 7., 8., 9., 10.],[0., 1., 8., 11., 10., 5., 6., 4., 0., 2., 1.]]) print('Valor promedio del conjunto de datos = %.2f' % la) plt.figure(figsize=(16,9)) plt.plot(distriGal[0,:],distriGal[1,:]/gal.size,'r*',ms=10,label='Distribución datos con promedio %.1f' % la) plt.legend() plt.xlabel('Número de galaxias en el intervalo') plt.ylabel('Rata de ocurrencia') plt.grid() """ Explanation: $\textbf{FIGURA 2.}$ Galaxias en el espacio profundo. End of explanation """ num = 2. 
# Número de galaxias que se espera encontrar prob = (la**num*np.exp(-la)/math.factorial(num))*100 # Probabilidad de encontrar dicho número de galaxias x = np.arange(0,20) # rango de datos: número de galaxias histP = stats.poisson.pmf(x, la) # función de probabilidad de Poisson ProbP = (np.sum(histP[0:int(num)+1]))*100 # Probabilidad acumulada print('Promedio de galaxias en el área estudiada = %.2f' % la) print('La probabilidad de que se observe en la imagen del espacio profundo %d galaxias es = %.1f%%' % (num,prob)) print('Probabilidad de observar hasta %d galaxias = %.1f%%' %(num,ProbP)) """ Explanation: Si decimos que la distribución que se determinó en el paso anterior es una distribución de Poisson (suposición), podemos decir cosas como: End of explanation """ plt.figure(figsize=(16,9)) plt.plot(x, histP, 'bo--', ms=8, label='Distribución de Poisson con $\lambda=$ %.1f' % la) plt.plot(distriGal[0,:],distriGal[1,:]/gal.size,'r*--',ms=10,label='Conjunto de datos con promedio %.1f' % la) plt.xlabel('Numero de galaxias (sucesos)') plt.ylabel('Rata de ocurrencia') plt.legend() plt.grid() """ Explanation: Comparemos ahora la distribución obtenida con la correspondiente distribución de Poisson: End of explanation """ plt.figure(4) plt.rcParams['figure.figsize'] = 12, 6 # para modificar el tamaño de la figura probP = np.zeros(20) for la in range(1,10,2): for num in range(0,20): probP[num] = la**num*np.exp(-la)/math.factorial(num) plt.plot(probP,marker='.',ms=15,label='$\lambda = %d$' %la) mu = la # media aritmética sigma = np.sqrt(la) # desviación estándar x = np.arange(0,20,1) f = (1./np.sqrt(2*np.pi*sigma**2))*np.exp(-(x-mu)**2/(2*sigma**2)) plt.plot(f,marker='*',ms=10,color='black',label='$ \overline{x} = %d , \ \sigma = %.1f$'%(mu,sigma)) plt.xlabel('Evento') plt.ylabel('Probabilidad') plt.legend() """ Explanation: $\textbf{FIGURA 3.}$ Distribución de Poisson ideal con respecto a la generada por los datos. Finalmente observemos como la distribución de Poisson tiende a la forma de una distribución normal. End of explanation """
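A minimal sketch to complement the comparison just plotted: it quantifies how the Poisson pmf approaches a normal density with the same mean and standard deviation as lambda grows, assuming numpy and scipy are available as elsewhere in this notebook.
import numpy as np
from scipy import stats

# Maximum absolute difference between the Poisson pmf and the matching normal density,
# evaluated on the integers; it shrinks as lambda increases.
for lam in [1, 5, 10, 30, 100]:
    k = np.arange(0, int(lam + 10 * np.sqrt(lam)))
    diff = np.abs(stats.poisson.pmf(k, lam) - stats.norm.pdf(k, loc=lam, scale=np.sqrt(lam)))
    print(lam, diff.max())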
TomTranter/OpenPNM
examples/simulations/Fickian Diffusion.ipynb
mit
import numpy as np import openpnm as op %matplotlib inline np.random.seed(10) ws = op.Workspace() ws.settings["loglevel"] = 40 np.set_printoptions(precision=5) """ Explanation: Fickian Diffusion One of the main applications of OpenPNM is simulating transport phenomena such as Fickian diffusion, advection diffusion, reactive transport, etc. In this example, we will learn how to perform Fickian diffusion on a Cubic network. The algorithm works fine with every other network type, but for now we want to keep it simple. End of explanation """ net = op.network.Cubic(shape=[1, 10, 10], spacing=1e-5) """ Explanation: Generating network First, we need to generate a Cubic network. For now, we stick to a 2d network, but you might as well try it in 3d! End of explanation """ geom = op.geometry.StickAndBall(network=net, pores=net.Ps, throats=net.Ts) """ Explanation: Adding geometry Next, we need to add a geometry to the generated network. A geometry contains information about size of the pores/throats in a network. OpenPNM has tons of prebuilt geometries that represent the microstructure of different materials such as Toray090 carbon papers, sand stone, electrospun fibers, etc. For now, we stick to a sample geometry called StickAndBall that assigns random values to pore/throat diameters. End of explanation """ air = op.phases.Air(network=net) """ Explanation: Adding phase Next, we need to add a phase to our simulation. A phase object(s) contain(s) thermophysical information about the working fluid(s) in the simulation. OpenPNM has tons of prebuilt phases as well! For this simulation, we use air as our working fluid. End of explanation """ phys_air = op.physics.Standard(network=net, phase=air, geometry=geom) """ Explanation: Adding physics Finally, we need to add a physics. A physics object contains information about the working fluid in the simulation that depend on the geometry of the network. A good example is diffusive conductance, which not only depends on the thermophysical properties of the working fluid, but also depends on the geometry of pores/throats. End of explanation """ fd = op.algorithms.FickianDiffusion(network=net, phase=air) """ Explanation: Performing Fickian diffusion Now that everything's set up, it's time to perform our Fickian diffusion simulation. For this purpose, we need to add the FickianDiffusion algorithm to our simulation. Here's how we do it: End of explanation """ inlet = net.pores('left') outlet = net.pores('right') fd.set_value_BC(pores=inlet, values=1.0) fd.set_value_BC(pores=outlet, values=0.0) """ Explanation: Note that network and phase are required parameters for pretty much every algorithm we add, since we need to specify on which network and for which phase do we want to run the algorithm. Adding boundary conditions Next, we need to add some boundary conditions to the simulation. By default, OpenPNM assumes zero flux for the boundary pores. End of explanation """ fd.run() """ Explanation: set_value_BC applies the so-called "Dirichlet" boundary condition to the specified pores. Note that unless you want to apply a single value to all of the specified pores (like we just did), you must pass a list (or ndarray) as the values parameter. Running the algorithm Now, it's time to run the algorithm. This is done by calling the run method attached to the algorithm object. End of explanation """ print(fd.settings) """ Explanation: Post processing When an algorithm is successfully run, the results are attached to the same object. 
To access the results, you need to know the quantity for which the algorithm was solving. For instance, FickianDiffusion solves for the quantity pore.concentration, which is somewhat intuitive. However, if you ever forget it, or want to check the quantity manually, you can take a look at the algorithm settings: End of explanation """ c = fd['pore.concentration'] print(c) """ Explanation: Now that we know the quantity for which FickianDiffusion was solved, let's take a look at the results: End of explanation """ print('Network shape:', net._shape) c2d = c.reshape((net._shape)) #NBVAL_IGNORE_OUTPUT import matplotlib.pyplot as plt plt.imshow(c2d[0,:,:]); plt.title('Concentration (mol/m$^3$)') plt.colorbar() """ Explanation: Heatmap Let's visualize the results. Since the network is 2d, we can simply reshape the results into a 2d array matching the shape of the network and plot its heatmap using matplotlib. End of explanation """ rate_inlet = fd.rate(pores=inlet)[0] print(f'Mass flow rate from inlet: {rate_inlet:.5e} mol/s') """ Explanation: Calculating mass flux You may also be interested in calculating the mass flux from a boundary! This is easily done in OpenPNM by calling the rate method attached to the algorithm. Let's see how it works: End of explanation """
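A minimal follow-up sketch: at steady state the molar flow entering at the inlet should balance the flow leaving at the outlet, so the two boundary rates returned by fd.rate should sum to roughly zero.
# Rate through the outlet boundary (expected to be close to -rate_inlet).
rate_outlet = fd.rate(pores=outlet)[0]
print(f'Mass flow rate from outlet: {rate_outlet:.5e} mol/s')
print(f'Net boundary rate (should be ~0): {rate_inlet + rate_outlet:.5e} mol/s')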
Bio204-class/bio204-notebooks
inclass-2016-02-22-Confidence-Intervals.ipynb
cc0-1.0
%matplotlib inline import numpy as np import scipy.stats as stats import pandas as pd import matplotlib.pyplot as plt import matplotlib matplotlib.style.use("bmh") np.random.seed(20160222) # setting seed ensures reproducibility mu, sigma = 10, 2 popn = stats.norm(loc=mu, scale=sigma) ssizes = [25, 50, 100, 200, 400] samples = [popn.rvs(size=(sz,100)) for sz in ssizes] means = [np.mean(sample, axis=0) for sample in samples] se = [np.std(mean) for mean in means] """ Explanation: Standard Error of the Mean Revisited Let's return to a topic we first discussed in our introduction to simulation -- the standard error of the mean. Here was the scenario we explored: You want to learn about a variable $X$ in a population of interest. Assume $X \sim N(\mu,\sigma)$. You take a random sample of size $n$ from the population and estimate the sample mean $\overline{x}$ You repeat step 3 a large number of times, calculating a new sample mean each time. We call the distribution of sample means the sampling distribution of the mean (note that you can also estimate the sampling distribution for any statistic of interest). You examine the spread of your sample means. You will find that the sampling distribution of the mean is approximately normally distributed with mean $\sim\mu$, and with a standard deviation $\sim\frac{\sigma}{\sqrt{n}}$. $$ \overline{x} \sim N \left( \mu, \frac{\sigma}{\sqrt{n}}\ \right) $$ We refer to the standard deviation of a sampling distribution of a statistic as the standard error of that statistic. When the statistic of interest is the mean, this is the standard error of the mean (standard errors of the mean are often just referred to as "standard errors" as this is the most common standard error one usually calculates) End of explanation """ # make a pair of plots ssmin, ssmax = min(ssizes), max(ssizes) theoryss = np.linspace(ssmin, ssmax, 250) fig, (ax1, ax2) = plt.subplots(1,2) # 1 x 2 grid of plots fig.set_size_inches(12,4) # plot histograms of sampling distributions for (ss,mean) in zip(ssizes, means): ax1.hist(mean, normed=True, histtype='stepfilled', alpha=0.75, label="n = %d" % ss) ax1.set_xlabel("X") ax1.set_ylabel("Density") ax1.legend() ax1.set_title("Sampling Distributions of Mean\nFor Different Sample Sizes") # plot simulation SE of mean vs theory SE of mean ax2.plot(ssizes, se, 'ko', label='simulation') ax2.plot(theoryss, sigma/np.sqrt(theoryss), color='red', label="theory") ax2.set_xlim(0, ssmax*1.1) ax2.set_ylim(0, max(se)*1.1) ax2.set_xlabel("sample size ($n$)") ax2.set_ylabel("SE of mean") ax2.legend() ax2.set_title("Standard Error of Mean\nTheoretical Expectation vs. Simulation") pass """ Explanation: Explanation of code above The code above contains three list comprehensions for very compactly simulating the sampling distribution of the mean Create a list of sample sizes to simulate (ssizes) For each sample size (sz), generate 100 random samples, and store those samples in a matrix of size sz $\times$ 100 (i.e.
each column is a sample) For each matrix created in step 2, calculate column means (= sample means) For each set of sample means in 3, calculate the standard deviation (= standard error) End of explanation """ N = 1000 samples50 = popn.rvs(size=(50, N)) # N samples of size 50 means50 = np.mean(samples50, axis=0) # sample means std50 = np.std(samples50, axis=0, ddof=1) # sample std devs se50 = std50/np.sqrt(50) # sample standard errors frac_overlap_mu = [] zs = np.arange(1,3,step=0.05) for z in zs: lowCI = means50 - z*se50 highCI = means50 + z*se50 overlap_mu = np.logical_and(lowCI <= mu, highCI >= mu) frac = np.count_nonzero(overlap_mu)/N frac_overlap_mu.append(frac) frac_overlap_mu = np.array(frac_overlap_mu) plt.plot(zs, frac_overlap_mu * 100, 'k-', label="simulation") plt.ylim(60, 104) plt.xlim(1, 3) plt.xlabel("z in CI = sample mean ± z × SE") plt.ylabel(u"% of CIs that include popn mean") # plot theoretical expectation stdnorm = stats.norm(loc=0, scale=1) plt.plot(zs, (1 - (2* stdnorm.sf(zs)))*100, 'r-', alpha=0.5, label="theory") plt.legend(loc='lower right') pass """ Explanation: Sample Estimate of the Standard Error of the Mean In real-life life, we don't have access to the sampling distribution of the mean or the true population parameter $\sigma$ from which can calculate the standard error of the mean. However, we can still use our unbiased sample estimator of the standard deviation, $s$, to estimate the standard error of the mean. $$ {SE}_{\overline{x}} = \frac{s}{\sqrt{n}} $$ Conditions for sampling distribution to be nearly normal For the sampling distribution of the mean to be nearly normal with ${SE}_\overline{x}$ accurate, the following conditions should hold: Sample observations are independent Sample size is large ($n \geq 30$ is good rule of thumb) Population distribution is not strongly skewed Confidence Intervals for the Mean We know that given a random sample from a population of interest, the mean of $X$ in our random sample is unlikely to be the true population mean of $X$. However, our simulations have taught us a number of things: As sample size increases, the sample estimate of the mean is more likely to be close to the true mean As sample size increases, the standard deviation of the sampling distribution of the mean (= standard error of the mean) decreases We can use this knowledge to calculate plausible ranges of values for the mean. We call such ranges confidence intervals for the mean (the idea of confidence intervals can apply to other statistics as well). We're going to express our confidence intervals in terms of multiples of the standard error. Let's start by using simulation to explore how often our confidence intervals capture the true mean when we base our confidence intervals on different multiples, $z$, of the SE. $$ {CI}\overline{x} = \overline{x} \pm (z \times {SE}\overline{x}) $$ For the purposes of this simulation, let's consider samples of size 50, drawn from the same population of interest as before (popn above). We're going to generate a large number of such samples, and for each sample we will calculate the CI of the mean using the formula above. We will then ask, "for what fraction of the samples did our CI overlap the true population mean"? This will give us a sense of how well different confidence intervals do in providing a plausible range for the true mean. 
End of explanation """ ndraw = 100 x = means50[:ndraw] y = range(0,ndraw) plt.errorbar(x, y, xerr=1.96*se50[:ndraw], fmt='o') plt.vlines(mu, 0, ndraw, linestyle='dashed', color='#D55E00', linewidth=3, zorder=5) plt.ylim(-1,101) plt.yticks([]) plt.title("95% CI: mean ± 1.96×SE\nfor 100 samples of size 50") fig = plt.gcf() fig.set_size_inches(4,8) """ Explanation: Interpreting our simulation How should we interpret the results above? We found as we increased the scaling of our confidence intervals (larger $z$), the true mean was within sample confidence intervals a greater proportion of the time. For example, when $z = 1$ we found that the true mean was within our CIs roughly 67% of the time, while at $z = 2$ the true mean was within our confidence intervals approximately 95% of the time. We call $x \pm 2 \times {SE}_\overline{x}$ the approximate 95% confidence interval of the mean (see below for exact values of z). Given such a CI calculated from a random sample we can say we are "95% confident" that we have captured the true mean within the bounds of the CI (subject to the caveats about the sampling distribution above). By this we mean if we took many samples and built a confidence interval from each sample using the equation above, then about 95% of those intervals would contain the actual mean, μ. Note that this is exactly what we did in our simulation! End of explanation """ perc = np.array([.80, .90, .95, .99, .997]) zval = stdnorm.ppf(1 - (1 - perc)/2) # account for the two tails of the sampling distn print("% CI \tz × SE") print("-----\t------") for (i,j) in zip(perc, zval): print("{:5.1f}\t{:6.2f}".format(i*100, j)) # see the string docs (https://docs.python.org/3.4/library/string.html) # for information on how formatting works """ Explanation: Generating a table of CIs and corresponding margins of error The table below gives the percent CI and the corresponding margin of error ($z \times {SE}$) for that confidence interval. End of explanation """
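A minimal sketch applying the table above: build a single 95% confidence interval from the first simulated sample of size 50, reusing means50, se50 and stdnorm defined earlier.
# 95% CI for the first sample: mean plus/minus z*SE with the exact z multiplier.
z95 = stdnorm.ppf(1 - (1 - 0.95) / 2)
ci_low = means50[0] - z95 * se50[0]
ci_high = means50[0] + z95 * se50[0]
print("95% CI from the first sample: ({:.2f}, {:.2f})".format(ci_low, ci_high))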
bjshaw/phys202-2015-work
assignments/assignment11/OptimizationEx01.ipynb
mit
%matplotlib inline import matplotlib.pyplot as plt import numpy as np import scipy.optimize as opt """ Explanation: Optimization Exercise 1 Imports End of explanation """ def hat(x,a,b): v = -a*x**2+b*x**4 return v assert hat(0.0, 1.0, 1.0)==0.0 assert hat(0.0, 1.0, 1.0)==0.0 assert hat(1.0, 10.0, 1.0)==-9.0 """ Explanation: Hat potential The following potential is often used in Physics and other fields to describe symmetry breaking and is often known as the "hat potential": $$ V(x) = -a x^2 + b x^4 $$ Write a function hat(x,a,b) that returns the value of this function: End of explanation """ a = 5.0 b = 1.0 v = [] x = np.linspace(-3,3,50) for i in x: v.append(hat(i,5.0,1.0)) plt.figure(figsize=(7,5)) plt.plot(x,v) plt.tick_params(top=False,right=False,direction='out') plt.xlabel('x') plt.ylabel('V(x)') plt.title('V(x) vs. x'); assert True # leave this to grade the plot """ Explanation: Plot this function over the range $x\in\left[-3,3\right]$ with $b=1.0$ and $a=5.0$: End of explanation """ x1=opt.minimize(hat,-1.8,args=(5.0,1.0))['x'] x2=opt.minimize(hat,1.8,args=(5.0,1.0))['x'] print(x1,x2) v = [] x = np.linspace(-3,3,50) for i in x: v.append(hat(i,5.0,1.0)) plt.figure(figsize=(7,5)) plt.plot(x,v) plt.scatter(x1,hat(x1,5.0,1.0),color='r',label='Local Minima') plt.scatter(x2,hat(x2,5.0,1.0),color='r') plt.tick_params(top=False,right=False,direction='out') plt.xlabel('x') plt.ylabel('V(x)') plt.xlim(-3,3) plt.ylim(-10,35) plt.legend() plt.title('V(x) vs. x'); assert True # leave this for grading the plot """ Explanation: Write code that finds the two local minima of this function for $b=1.0$ and $a=5.0$. Use scipy.optimize.minimize to find the minima. You will have to think carefully about how to get this function to find both minima. Print the x values of the minima. Plot the function as a blue line. On the same axes, show the minima as red circles. Customize your visualization to make it beatiful and effective. End of explanation """ x_1 = np.sqrt(10/4) x_2 = -np.sqrt(10/4) print(x_1,x_2) """ Explanation: To check your numerical results, find the locations of the minima analytically. Show and describe the steps in your derivation using LaTeX equations. Evaluate the location of the minima using the above parameters. To find the minima of the equation $V(x) = -a x^2 + b x^4$, we first have to find the $x$ values where the slope is $0$. To do this, we first compute the derivative, $V'(x)=-2ax+4bx^3$ Then we set $V'(x)=0$ and solve for $x$ with our parameters $a=5.0$ and $b=1.0$ $\hspace{15 mm}$$0=-10x+4x^3$ $\Rightarrow$ $10=4x^2$ $\Rightarrow$ $x^{2}=\frac{10}{4}$ $\Rightarrow$ $x=\pm \sqrt{\frac{10}{4}}$ Computing $x$: End of explanation """
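A minimal numerical cross-check, assuming a=5.0 and b=1.0 as above: the optimizer's minima should agree with the analytic locations x = +/- sqrt(a/(2b)).
# Compare the two numerical minima with the analytic result.
x_analytic = np.sqrt(a / (2 * b))
print(np.allclose(sorted([x1[0], x2[0]]), [-x_analytic, x_analytic], atol=1e-4))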
no-fire/line-follower
line-follower/src/v1/convnet_regression_circle_and_carpet.ipynb
mit
#Create references to important directories we will use over and over import os, sys #import modules import numpy as np from glob import glob from PIL import Image from tqdm import tqdm from scipy.ndimage import zoom from keras.models import Sequential from keras.metrics import categorical_crossentropy, categorical_accuracy from keras.layers.convolutional import * from keras.preprocessing import image from keras.layers.core import Flatten, Dense from keras.optimizers import Adam from keras.layers.normalization import BatchNormalization from matplotlib import pyplot as plt import seaborn as sns %matplotlib inline import bcolz """ Explanation: Line Follower - CompRobo17 This notebook will show the general procedure to use our project data directories and how to do a regression task using convnets Imports and Directories End of explanation """ DATA_HOME_DIR = '/home/nathan/olin/spring2017/line-follower/line-follower/data' %cd $DATA_HOME_DIR path = DATA_HOME_DIR train_path1=path + '/sun_apr_16_office_full_line_1' train_path2=path + '/qea_blob_1' valid_path1=path + '/qea-square_3' """ Explanation: Create paths to data directories End of explanation """ def resize_vectorized4D(data, new_size=(64, 64)): """ A vectorized implementation of 4d image resizing Args: data (4D array): The images you want to resize new_size (tuple): The desired image size Returns: (4D array): The resized images """ fy, fx = np.asarray(new_size, np.float32) / data.shape[1:3] return zoom(data, (1, fy, fx, 1), order=1) # order is the order of spline interpolation def lowerHalfImage(array): """ Returns the lower half rows of an image Args: array (array): the array you want to extract the lower half from Returns: The lower half of the array """ return array[round(array.shape[0]/2):,:,:] def folder_to_numpy(image_directory_full): """ Read sorted pictures (by filename) in a folder to a numpy array. We have hardcoded the extraction of the lower half of the images as that is the relevant data USAGE: data_folder = '/train/test1' X_train = folder_to_numpy(data_folder) Args: data_folder (str): The relative folder from DATA_HOME_DIR Returns: picture_array (np array): The numpy array in tensorflow format """ # change directory print ("Moving to directory: " + image_directory_full) os.chdir(image_directory_full) # read in filenames from directory g = glob('*.png') if len(g) == 0: g = glob('*.jpg') print ("Found {} pictures".format(len(g))) # sort filenames g.sort() # open and convert images to numpy array - then extract the lower half of each image print("Starting pictures to numpy conversion") picture_arrays = np.array([lowerHalfImage(np.array(Image.open(image_path))) for image_path in g]) # reshape to tensorflow format # picture_arrays = picture_arrays.reshape(*picture_arrays.shape, 1) print ("Shape of output: {}".format(picture_arrays.shape)) # return array return picture_arrays return picture_arrays.astype('float32') def flip4DArray(array): """ Produces the mirror images of a 4D image array """ return array[..., ::-1,:] #[:,:,::-1] also works but is 50% slower def concatCmdVelFlip(array): """ Concatentaes and returns Cmd Vel array """ return np.concatenate((array, array*-1)) # multiply by negative 1 for opposite turn def save_array(fname, arr): c=bcolz.carray(arr, rootdir=fname, mode='w') c.flush() def load_array(fname): return bcolz.open(fname)[:] """ Explanation: Helper Functions Throughout the notebook, we will take advantage of helper functions to cleanly process our data. 
End of explanation """ def get_data(paths): X_return = [] Y_return = [] for path in paths: %cd $path Y_train = np.genfromtxt('cmd_vel.csv', delimiter=',')[:,1] # only use turning angle Y_train = np.concatenate((Y_train, Y_train*-1)) X_train = folder_to_numpy(path + '/raw') X_train = np.concatenate((X_train, flip4DArray(X_train))) X_return.extend(X_train) Y_return.extend(Y_train) return np.array(X_return), np.array(Y_return) X_train, Y_train = get_data([train_path1, train_path2]) X_train.shape X_valid, Y_valid = get_data([valid_path1]) X_valid.shape Y_valid.shape """ Explanation: Data Because we are using a CNN and unordered pictures, we can flip our data and concatenate it on the end of all training and validation data to make sure we don't bias left or right turns. Training Data Extract and store the training data in X_train and Y_train End of explanation """ %cd /tmp for i in range(300): img = Image.fromarray(X_train[286+286+340+i], 'RGB') data = np.asarray(img)[...,[2,1,0]] img = Image.fromarray(data) img.save("temp{}.jpg") image.load_img("temp.jpg") """ Explanation: Visualize the training data, currently using a hacky method to display the numpy matrix as this is being run over a remote server and I can't view new windows End of explanation """ # %cd $valid_path # Y_valid = np.genfromtxt('cmd_vel.csv', delimiter=',')[:,1] # Y_valid = np.concatenate((Y_valid, Y_valid*-1)) # X_valid = folder_to_numpy(valid_path + '/raw') # X_valid = np.concatenate((X_valid, flip4DArray(X_valid))) """ Explanation: Validation Data Follow the same steps for as the training data for the validation data. End of explanation """ X_valid.shape, Y_valid.shape """ Explanation: Test the shape of the arrays: X_valid: (N, 240, 640, 3) Y_valid: (N,) End of explanation """ img_rows, img_cols = (64, 64) print(img_rows) print(img_cols) X_train = resize_vectorized4D(X_train, (img_rows, img_cols)) X_valid = resize_vectorized4D(X_valid, (img_rows, img_cols)) print(X_train.shape) print(X_valid.shape) """ Explanation: Resize Data When we train the network, we don't want to be dealing with (240, 640, 3) images as they are way too big. Instead, we will resize the images to something more managable, like (64, 64, 3) or (128, 128, 3). In terms of network predictive performance, we are not concerned with the change in aspect ratio, but might want to test a (24, 64, 3) images for faster training End of explanation """ %cd /tmp img = Image.fromarray(X_train[np.random.randint(0, X_train.shape[0])], 'RGB') img.save("temp.jpg") image.load_img("temp.jpg") """ Explanation: Visualize newly resized image. End of explanation """ gen = image.ImageDataGenerator( # rescale=1. / 255 # normalize data between 0 and 1 ) """ Explanation: Batches gen allows us to normalize and augment our images. We will just use it to rescale the images. 
End of explanation """ train_generator = gen.flow(X_train, Y_train)#, batch_size=batch_size, shuffle=True) valid_generator = gen.flow(X_valid, Y_valid)#, batch_size=batch_size, shuffle=True) # get_batches(train_path, batch_size=batch_size, # target_size=in_shape, # gen=gen) # val_batches = get_batches(valid_path, batch_size=batch_size, # target_size=in_shape, # gen=gen) data, category = next(train_generator) print ("Shape of data: {}".format(data[0].shape)) %cd /tmp img = Image.fromarray(data[np.random.randint(0, data.shape[0])].astype('uint8'), 'RGB') img.save("temp.jpg") image.load_img("temp.jpg") """ Explanation: Next, create the train and valid generators, these are shuffle and have a batch size of 32 by default End of explanation """ in_shape = (img_rows, img_cols, 3) """ Explanation: Convnet Constants End of explanation """ def get_model(): model = Sequential([ Convolution2D(32,3,3, border_mode='same', activation='relu', input_shape=in_shape), MaxPooling2D(), Convolution2D(64,3,3, border_mode='same', activation='relu'), MaxPooling2D(), Convolution2D(128,3,3, border_mode='same', activation='relu'), MaxPooling2D(), Flatten(), Dense(2048, activation='relu'), Dense(1024, activation='relu'), Dense(512, activation='relu'), Dense(1) ]) model.compile(loss='mean_absolute_error', optimizer='adam') return model model = get_model() model.summary() """ Explanation: Model Our test model will use a VGG like structure with a few changes. We are removing the final activation function. We will also use either mean_absolute_error or mean_squared_error as our loss function for regression purposes. End of explanation """ history = model.fit_generator(train_generator, samples_per_epoch=train_generator.n, nb_epoch=5, validation_data=valid_generator, nb_val_samples=valid_generator.n, verbose=True) # %cd $DATA_HOME_DIR # model.save_weights('epoche_QEA_carpet_425.h5') # %cd $DATA_HOME_DIR # model.save_weights('epoche_2500.h5') %cd $DATA_HOME_DIR model.load_weights('epoche_QEA_carpet_425.h5') len(model.layers) model.pop() len(model.layers) model.compile(loss='mean_absolute_error', optimizer='adam') model.summary() X_train_features = model.predict(X_train) X_valid_features = model.predict(X_valid) for x,y in zip(Y_valid, X_valid_features): print (x, y[0]) %cd $train_path2 save_array("X_train_features3.b", X_train_features) %cd $valid_path1 save_array("X_train_features3.b", X_valid_features) X_train_features[9] def get_model_lstm(): model = Sequential([ Convolution2D(32,3,3, border_mode='same', activation='relu', input_shape=in_shape), MaxPooling2D(), Convolution2D(64,3,3, border_mode='same', activation='relu'), MaxPooling2D(), Convolution2D(128,3,3, border_mode='same', activation='relu'), MaxPooling2D(), Flatten(), Dense(2048, activation='relu'), Dense(1024, activation='relu'), Dense(512, activation='relu'), Dense(1) ]) model.compile(loss='mean_absolute_error', optimizer='adam') return model X_train.shape """ Explanation: Train End of explanation """ val_plot = np.convolve(history.history['val_loss'], np.repeat(1/10, 10), mode='valid') train_plot = np.convolve(history.history['loss'], np.repeat(1/10, 10), mode='valid') sns.tsplot(val_plot) X_preds = model.predict(X_valid).reshape(X_valid.shape[0],) for i in range(len(X_valid)): print("{:07f} | {:07f}".format(Y_valid[i], X_preds[i])) X_train_preds = model.predict(X_train).reshape(X_train.shape[0],) for i in range(len(X_train_preds)): print("{:07f} | {:07f}".format(Y_train[i], X_train_preds[i])) """ Explanation: Visualize Training End of explanation """ 
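A minimal sketch that condenses the printouts above into a single number, assuming X_preds and Y_valid hold the per-sample validation predictions and targets shown there: the mean absolute error of the steering-angle predictions.
# Mean absolute error on the validation set.
val_mae = np.mean(np.abs(X_preds - Y_valid))
print("Validation MAE: {:.4f}".format(val_mae))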
X_preds.shape X_train_preds.shape np.savetxt("X_train_valid.csv", X_preds, fmt='%.18e', delimiter=',', newline='\n') np.savetxt("X_train_preds.csv", X_train_preds, fmt='%.18e', delimiter=',', newline='\n') """ Explanation: Notes * 32 by 32 images are too low a resolution for regression * 64 by 64 seemed to work really well * A moving-average plot of val_loss over time is really useful * Training can take up to 2000 epochs to reach a nice minimum End of explanation """
perrette/iis
notebooks/examples.ipynb
mit
from scipy.stats import norm, uniform from iis import IIS, Model def mymodel(params): """User-defined model with two parameters Parameters ---------- params : numpy.ndarray 1-D Returns ------- state : float return value (could also be an array) """ return params[0] + params[1]*2 likelihood = norm(loc=1, scale=1) # normal, univariate distribution mean 1, s.d. 1 prior = [norm(loc=0, scale=10), uniform(loc=-10, scale=20)] model = Model(mymodel, likelihood, prior=prior) # define the model """ Explanation: Example of using IIS Define a model to estimate End of explanation """ solver = IIS(model) ensemble = solver.estimate(size=500, maxiter=10) """ Explanation: Estimate its parameters End of explanation """ # Use pandas to check out the quantiles of the final ensemble ensemble.to_dataframe().quantile([0.5, 0.05, 0.95]) # or the iteration history solver.to_panel(quantiles=[0.5, 0.05, 0.95]) """ Explanation: Investigate results The IIS class has two attributes of interests: - ensemble : current ensemble - history : list of previous ensembles And a to_panel method to vizualize the data as a pandas Panel. The Ensemble class has following attributes of interest: - state : 2-D ndarray (samples x state variables) - params : 2-D ndarray (samples x parameters) - model : the model defined above, with target distribution and forward integration functions For convenience, it is possible to extract these field as pandas DataFrame or Panel, combining params and state. See in-line help for methods Ensemble.to_dataframe and IIS.to_panel. This feature requires having pandas installed. Two plotting methods are also provided: Ensemble.scatter_matrix and IIS.plot_history. The first is simply a wrapper around pandas' function, but it is so frequently used that it is added as a method. End of explanation """ # Plotting methods %matplotlib inline solver.plot_history(overlay_dists=True) """ Explanation: Check convergence End of explanation """ ensemble.scatter_matrix() # result """ Explanation: Scatter matrix to investigate final distributions and correlations End of explanation """ from pandas.tools.plotting import parallel_coordinates, radviz, andrews_curves import matplotlib.pyplot as plt # create clusters of data categories = [] for i in xrange(ensemble.size): if ensemble.params[i,0]>0: cat = 'p0 > 0' elif ensemble.params[i,0] > -5: cat = 'p0 < 0 and |p0| < 5' else: cat = 'rest' categories.append(cat) # Create a DataFrame with a category name class_column = '_CatName' df = ensemble.to_dataframe(categories=categories, class_column=class_column) plt.figure() parallel_coordinates(df, class_column) plt.title("parallel_coordinates") plt.figure() radviz(df, class_column) plt.title("radviz") """ Explanation: Advanced vizualisation using pandas (classes) Pandas is also shipped with a few methods to investigates clusters in data. The categories key-word has been included to Ensemble.to_dataframe to automatically add a column with appropriate categories. End of explanation """
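andrews_curves is imported above alongside parallel_coordinates and radviz but never called; a minimal sketch gives one more view of the same parameter clusters.
# Andrews curves for the categorised ensemble parameters.
plt.figure()
andrews_curves(df, class_column)
plt.title("andrews_curves")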
blackjax-devs/blackjax
examples/LogisticRegression.ipynb
apache-2.0
import jax import jax.numpy as jnp import jax.random as random import matplotlib.pyplot as plt from sklearn.datasets import make_biclusters import blackjax %config InlineBackend.figure_format = "retina" plt.rcParams["axes.spines.right"] = False plt.rcParams["axes.spines.top"] = False plt.rcParams["figure.figsize"] = (12, 8) %load_ext watermark %watermark -d -m -v -p jax,jaxlib,blackjax """ Explanation: Bayesian Logistic Regression In this notebook we demonstrate the use of the random walk Rosenbluth-Metropolis-Hasting algorithm on a simple logistic regression. End of explanation """ num_points = 50 X, rows, cols = make_biclusters( (num_points, 2), 2, noise=0.6, random_state=314, minval=-3, maxval=3 ) y = rows[0] * 1.0 # y[i] = whether point i belongs to cluster 1 colors = ["tab:red" if el else "tab:blue" for el in rows[0]] plt.scatter(*X.T, edgecolors=colors, c="none") plt.xlabel(r"$X_0$") plt.ylabel(r"$X_1$") plt.show() """ Explanation: The data We create two clusters of points using scikit-learn's make_bicluster function. End of explanation """ Phi = jnp.c_[jnp.ones(num_points)[:, None], X] N, M = Phi.shape def sigmoid(z): return jnp.exp(z) / (1 + jnp.exp(z)) def log_sigmoid(z): return z - jnp.log(1 + jnp.exp(z)) def logprob_fn(w, alpha=1.0): """The log-probability density function of the posterior distribution of the model.""" log_an = log_sigmoid(Phi @ w) an = Phi @ w log_likelihood_term = y * log_an + (1 - y) * jnp.log(1 - sigmoid(an)) prior_term = alpha * w @ w / 2 return -prior_term + log_likelihood_term.sum() """ Explanation: The model We use a simple logistic regression model to infer to which cluster each of the points belongs. We note $y$ a binary variable that indicates whether a point belongs to the first cluster : $$ y \sim \operatorname{Bernoulli}(p) $$ The probability $p$ to belong to the first cluster commes from a logistic regression: $$ p = \operatorname{logistic}(\Phi\,\boldsymbol{w}) $$ where $w$ is a vector of weights whose priors are a normal prior centered on 0: $$ \boldsymbol{w} \sim \operatorname{Normal}(0, \sigma) $$ And $\Phi$ is the matrix that contains the data, so each row $\Phi_{i,:}$ is the vector $\left[1, X_0^i, X_1^i\right]$ End of explanation """ rng_key = random.PRNGKey(314) w0 = random.multivariate_normal(rng_key, 0.1 + jnp.zeros(M), jnp.eye(M)) rmh = blackjax.rmh(logprob_fn, sigma=jnp.ones(M) * 0.7) initial_state = rmh.init(w0) """ Explanation: Posterior sampling We use blackjax's Random Walk RMH kernel to sample from the posterior distribution. 
End of explanation """ def inference_loop(rng_key, kernel, initial_state, num_samples): @jax.jit def one_step(state, rng_key): state, _ = kernel(rng_key, state) return state, state keys = jax.random.split(rng_key, num_samples) _, states = jax.lax.scan(one_step, initial_state, keys) return states """ Explanation: Since blackjax does not provide an inference loop we need to implement one ourselves: End of explanation """ _, rng_key = random.split(rng_key) states = inference_loop(rng_key, rmh.step, initial_state, 5_000) """ Explanation: We can now run the inference: End of explanation """ burnin = 300 fig, ax = plt.subplots(1, 3, figsize=(12, 2)) for i, axi in enumerate(ax): axi.plot(states.position[:, i]) axi.set_title(f"$w_{i}$") axi.axvline(x=burnin, c="tab:red") plt.show() chains = states.position[burnin:, :] nsamp, _ = chains.shape """ Explanation: And display the trace: End of explanation """ # Create a meshgrid xmin, ymin = X.min(axis=0) - 0.1 xmax, ymax = X.max(axis=0) + 0.1 step = 0.1 Xspace = jnp.mgrid[xmin:xmax:step, ymin:ymax:step] _, nx, ny = Xspace.shape # Compute the average probability to belong to the first cluster at each point on the meshgrid Phispace = jnp.concatenate([jnp.ones((1, nx, ny)), Xspace]) Z_mcmc = sigmoid(jnp.einsum("mij,sm->sij", Phispace, chains)) Z_mcmc = Z_mcmc.mean(axis=0) plt.contourf(*Xspace, Z_mcmc) plt.scatter(*X.T, c=colors) plt.xlabel(r"$X_0$") plt.ylabel(r"$X_1$") plt.show() """ Explanation: Predictive distribution Having infered the posterior distribution of the regression's coefficients we can compute the probability to belong to the first cluster at each position $(X_0, X_1)$. End of explanation """
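A minimal sanity check on the fit, reusing sigmoid, Phi, chains and y from above: classify each training point with the posterior-mean probability of belonging to the first cluster and report the accuracy.
# Posterior-mean probability of cluster membership at the observed points.
p_train = sigmoid(jnp.einsum("nm,sm->sn", Phi, chains)).mean(axis=0)
acc = float(jnp.mean((p_train > 0.5).astype(jnp.float32) == y))
print(f"Training accuracy of the posterior-mean classifier: {acc:.2f}")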
pbutenee/ml-tutorial
source/1/recommendation_engine.ipynb
mit
import numpy as np import pandas as pd import sklearn.metrics.pairwise """ Explanation: Recommendation Engine In this tutorial we are going to build a simple recommender system using collaborative filtering. You'll be learning about the popular data analysis package pandas along the way. 1. The import statements End of explanation """ data = pd.read_csv('data/lastfm-matrix-germany.csv').set_index('user') data.head() data.shape """ Explanation: 2. The data We will use Germany's data of the Last.fm Dataset. To read and explore the data we will use the pandas library: + pandas.read_csv: reads a csv file and returns a pandas.DataFrame, a two-dimensional data structure with labelled rows and columns. + pandas.DataFrame.set_index: sets the DataFrame index (the row labels). Pandas enables the use of method chaining: read_csv call returns a DataFrame, on which we can immediatly apply the set_index method by chaining it via dot notation. End of explanation """ ### BEGIN SOLUTION similarity_matrix = sklearn.metrics.pairwise.cosine_similarity(np.transpose(data)) ### END SOLUTION # similarity_matrix = sklearn.metrics.pairwise.cosine_similarity( ? ) assert similarity_matrix.shape == (285, 285) print(similarity_matrix.ndim) """ Explanation: The resulting DataFrame contains a row for each user and each column represents an artist. The values indicate whether the user listend to a song by that artist (1) or not (0). Note that the number of times a person listened to a specific artist is not listed. 3. Determining artist similarity We want to figure out which artist to recommend to which user. Since we know which user listened to which artists we can look for artists or users that are similar. Humans can have vastly complex listening preferences and are very hard to group. Artists on the other hand are usually much easier to group. So it is best to look for similarities between artists rather than between users. To determine if two artists are similar, you can use many different similarity metrics. Finding the best metric is a whole research topic on its own. In many cases though, the cosine similarity is used. The implementation we will use here is the sklearn.metrics.pairwise.cosine_similarity. This function will create a matrix of similarity scores between elements in the first dimension of the input. In our dataset the first dimension holds the different users and the second the different artists. You can switch these dimensions with np.transpose(). End of explanation """ similarity_matrix[:5, :5] """ Explanation: The cosine_similarity function returned a 2-dimensional numpy array. This array contains all the similarity values we need, but it is not labelled. Since the entire array will not fit the screen, we will use slicing to print a subset of the result. End of explanation """ ### BEGIN SOLUTION artist_similarities = pd.DataFrame(similarity_matrix, index=data.columns, columns=data.columns) ### END SOLUTION # artist_similarities = pd.DataFrame( ? , index=data.columns, columns= ? ) assert np.array_equal(artist_similarities.columns, data.columns) assert artist_similarities.shape == similarity_matrix.shape artist_similarities.iloc[:5, :5] """ Explanation: The artist names are both the row and column labels for the similarity_matrix. We can add these labels by creating a new DataFrame based on the numpy array. By using the pandas.DataFrame.iloc integer-location based indexer, we get the same slice as above, but with added labels. 
End of explanation """ slice_artists = ['ac/dc', 'madonna', 'metallica', 'rihanna', 'the white stripes'] artist_similarities.loc[slice_artists, slice_artists] """ Explanation: Pandas also provides a label based indexer, pandas.DataFrame.loc, which we can use to get a slice based on label values. End of explanation """ similarities = ( # start from untidy DataFrame artist_similarities # add a name to the index .rename_axis(index='artist') # artist needs to be a column for melt .reset_index() # create the tidy dataset .melt(id_vars='artist', var_name='compared_with', value_name='cosine_similarity') # artist compared with itself not needed, keep rows where artist and compared_with are not equal. .query('artist != compared_with') # set identifying observations to index .set_index(['artist', 'compared_with']) # sort the index .sort_index() ) """ Explanation: As you can see above, bands are 100% similar to themselves and The White Stripes are nothing like Abba. We can further increase the usability of this data by making it a tidy dataset. This means we'll put each variable in a column, and each observation in a row. There's three variables in our dataset: + first artist + second artist + cosine similarity In our current DataFrame the second artist is determined by the column labels, and as consequence the cosine similarity observation is spread over multiple columns. The pandas.DataFrame.melt method will fix this. We make extensive use of method chaining for this reshaping of the DataFrame. If you want to know the effect of the different methods, you can comment / uncomment them and check the influence on the result. End of explanation """ similarities.head() """ Explanation: To view the first n rows, we can use the pandas.DataFrame.head method, the default value for n is 5. End of explanation """ similarities.index """ Explanation: Note that we created a MultiIndex by specifying two columns in the set_index call. End of explanation """ similarities.loc['the beatles', :].tail() """ Explanation: The use of the MultiIndex enables flexible access to the data. If we index with a single artist name, we get all compared artists. To view the last n rows for this result, we can use the pandas.DataFrame.tail method. End of explanation """ similarities.loc[('abba', 'madonna'), :] print(slice_artists) similarities.loc[('abba', slice_artists), :] """ Explanation: We can index on multiple levels by providing a tuple of indexes: End of explanation """ artist = 'a perfect circle' n_artists = 10 ### BEGIN SOLUTION top_n = similarities.loc[artist, :].sort_values('cosine_similarity').tail(n_artists) ### END SOLUTION # top_n = similarities.loc[?, :].sort_values('cosine_similarity') ? print(top_n) assert len(top_n) == 10 assert type(top_n) == pd.DataFrame """ Explanation: 4. Picking the best matches Even though many of the artists above have a similarity close to 0, there might be some artists that seem to be slightly similar because somebody with a complex taste listened to them both. To remove this noise from the dataset we are going to limit the number of matches. Let's first try this with the first artist in the list: a perfect circle. End of explanation """ def most_similar_artists(artist, n_artists=10): """Get the most similar artists for a given artist. 
Parameters ---------- artist: str The artist for which to get similar artists n_artists: int, optional The number of similar artists to return Returns ------- pandas.DataFrame A DataFrame with the similar artists and their cosine_similarity to the given artist """ ### BEGIN SOLUTION return similarities.loc[artist, :].sort_values('cosine_similarity').tail(n_artists) ### END SOLUTION # return similarities.loc[ ? ].sort_values( ? ) ? print(most_similar_artists('a perfect circle')) assert top_n.equals(most_similar_artists('a perfect circle')) assert most_similar_artists('abba', n_artists=15).shape == (15, 1) """ Explanation: We can transform the task of getting the most similar bands for a given band to a function. End of explanation """ help(most_similar_artists) """ Explanation: Note that we also defined a docstring for this function, which we can view by using help() or shift + tab in a jupyter notebook. End of explanation """ user_id = 42 ### BEGIN SOLUTION user_history = data.loc[user_id, :] ### END SOLUTION # user_history = data.loc[ ? , ?] print(user_history) assert user_history.name == user_id assert len(user_history) == 285 """ Explanation: 5. Get the listening history To determine the recommendation score for an artist, we'll want to know whether a user listened to many similar artists. We know which artists are similar to a given artist, but we still need to figure out if any of these similar artists are in the listening history of the user. The listening history of a single user can be acquired by entering the user id with the .loc indexer. End of explanation """ artist = 'the beatles' ### BEGIN SOLUTION similar_labels = most_similar_artists(artist).index ### END SOLUTION # similar_labels = most_similar_artists( ? ). ? print(similar_labels) assert len(similar_labels) == 10 assert type(similar_labels) == pd.Index """ Explanation: We now have the complete listening history, but we only need the history for the similar artists. For this we can use the index labels from the DataFrame returned by the most_similar_artists function. Index labels for a DataFrame can be retrieved by using the pandas.DataFrame.index attribute. End of explanation """ user_id = 42 ### BEGIN SOLUTION similar_history = data.loc[user_id, similar_labels] ### END SOLUTION # similar_history = data.loc[?, ?] assert similar_history.name == user_id print(similar_history) """ Explanation: We can combine the user id and similar labels in the .loc indexer to get the listening history for the most similar artists. End of explanation """ def most_similar_artists_history(artist, user_id): """Get most similar artists and their listening history. Parameters ---------- artist: str The artist for which to get the most similar bands user_id: int The user for which to get the listening history Returns ------- pandas.DataFrame A DataFrame containing the most similar artists for the given artist, with their cosine similarities and their listening history status for the given user. """ ### BEGIN SOLUTION artists = most_similar_artists(artist) history = data.loc[user_id, artists.index].rename('listening_history') ### END SOLUTION # artists = most_similar_artists( ? ) # history = data.loc[ ? , ? ].rename('listening_history') return pd.concat([artists, history], axis=1) example = most_similar_artists_history('abba', 42) assert example.columns.to_list() == ['cosine_similarity', 'listening_history'] example """ Explanation: Let's make a function to get the most similar artists and their listening history for a given artist and user. 
The function creates two DataFrames with the same index, and then uses pandas.concat to create a single DataFrame from them. End of explanation """ listening_history = np.array([0, 1, 0]) similarity_scores = np.array([0.3, 0.2, 0.1]) recommendation_score = sum(listening_history * similarity_scores) / sum(similarity_scores) print(f'{recommendation_score:.3f}') """ Explanation: 6. Calculate the recommendation score. Now that we have the most_similar_artists_history function, we can start to figure out which artists to advise to whom. We want to quantify how the listening history of a user matches artists similar to an artist they didn't listen to yet. For this purpose we will use the following recommendation score: + We start with the similar artists for a given artist, and their listening history for the user. + Then we sum the cosine similarities of artists the user listened to. + In the end we divide by the total sum of similarities to normalize the score. So when a user listened to 1 of 3 artists that are similar, for example [0, 1, 0] and their respective similarity scores are [0.3, 0.2, 0.1] you get the following recommendation score: End of explanation """ user_id = 42 artist = 'abba' most_similar_artists_history(artist, user_id) """ Explanation: Remember what the DataFrame returned by the most_similar_artists_history function looks like: End of explanation """ most_similar_artists_history(artist, user_id).product(axis=1) """ Explanation: Pandas provides methods to do column or row aggregation, like e.g. pandas.DataFrame.product. This method will calculate all values in a column or row. The direction can be chosen with the axis parameter. As we need the product of the values in the rows (similarity * history), we will need to specify axis=1. End of explanation """ most_similar_artists_history(artist, user_id).product(axis=1).sum() """ Explanation: Then there's pandas.DataFrame.sum which does the same thing for summing the values. As we want the sum for all values in the column we would have to specify axis=0. Since 0 is the default value for the axis parameter we don't have to add it to the method call. End of explanation """ def recommendation_score(artist, user_id): """Calculate recommendation score. Parameters ---------- artist: str The artist for which to calculate the recommendation score. user_id: int The user for which to calculate the recommendation score. Returns: float Recommendation score """ df = most_similar_artists_history(artist, user_id) ### BEGIN SOLUTION return df.product(axis=1).sum() / df.loc[:, 'cosine_similarity'].sum() ### END SOLUTION # return df.?(axis=1).?() / df.loc[:, ? ].sum() assert np.allclose(recommendation_score('abba', 42), 0.08976655361839528) assert np.allclose(recommendation_score('the white stripes', 1), 0.09492796371597861) recommendation_score('abba', 42) """ Explanation: Knowing these methods, it is only a small step to define the scoring function based on the output of most_similar_artists_history. End of explanation """ def unknown_artists(user_id): """Get artists the user hasn't listened to. Parameters ---------- user_id: int User for which to get unknown artists Returns ------- pandas.Index Collection of artists the user hasn't listened to. """ ### BEGIN SOLUTION history = data.loc[user_id, :] return history.loc[history == 0].index ### END SOLUTION # history = data.loc[ ? , :] # return history.loc[ ? 
== 0].index print(unknown_artists(42)) assert len(unknown_artists(42)) == 278 assert type(unknown_artists(42)) == pd.Index """ Explanation: Determine artists to recommend We only want to recommend artists the user didn't listen to yet, which we'll determine by using the listening history. End of explanation """ def score_unknown_artists(user_id): """Score all unknown artists for a given user. Parameters ---------- user_id: int User for which to get unknown artists Returns ------- list of dict A list of dictionaries. """ ### BEGIN SOLUTION artists = unknown_artists(user_id) return [{'recommendation': artist, 'score': recommendation_score(artist, user_id)} for artist in artists] ### END SOLUTION # artists = unknown_artists( ? ) # return [{'recommendation': artist, 'score': recommendation_score( ? , user_id)} for artist in ?] assert np.allclose(score_unknown_artists(42)[1]['score'], 0.08976655361839528) assert np.allclose(score_unknown_artists(313)[137]['score'], 0.20616395469219984) score_unknown_artists(42)[:5] """ Explanation: The last requirement for our recommender engine is a function that can score all unknown artists for a given user. We will make this function return a list of dictionaries, which can be easily converted to a DataFrame later on. The list will be generated using a list comprehension. End of explanation """ def user_recommendations(user_id, n_rec=5): """Recommend new artists for a user. Parameters ---------- user_id: int User for which to get recommended artists n_rec: int, optional Number of recommendations to make Returns ------- pandas.DataFrame A DataFrame containing artist recommendations for the given user, with their recommendation score. """ scores = score_unknown_artists(user_id) ### BEGIN SOLUTION return ( pd.DataFrame(scores) .sort_values('score', ascending=False) .head(n_rec) .reset_index(drop=True) ) ### END SOLUTION # return ( # pd.DataFrame( ? ) # .sort_values( ? , ascending=False) # . ? (n_rec) # .reset_index(drop=True) # ) assert user_recommendations(313).loc[4, 'recommendation'] == 'jose gonzalez' assert len(user_recommendations(1, n_rec=10)) == 10 user_recommendations(642) """ Explanation: From the scored artists we can easily derive the best recommendations for a given user. End of explanation """ recommendations = [user_recommendations(user).loc[:, 'recommendation'].rename(user) for user in data.index[:10]] """ Explanation: With this final function, it is a small step to get recommendations for multiple users. As our code hasn't been optimized for performance, it is advised to limit the number of users somewhat. End of explanation """ np.transpose(pd.concat(recommendations, axis=1)) g_s = most_similar_artists_history('gorillaz', 642).assign(sim2 = lambda x: x.product(axis=1)) r_1 = g_s.sim2.sum() total = g_s.cosine_similarity.sum() print(total) r_1/total g_s """ Explanation: We can now use the concat function again to get a nice overview of the recommended artists. End of explanation """
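A minimal check: the hand-computed ratio for 'gorillaz' above should reproduce the recommendation_score function exactly.
# The manual ratio and the scoring function should agree.
print(np.isclose(r_1 / total, recommendation_score('gorillaz', 642)))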
atulsingh0/MachineLearning
MasteringML_wSkLearn/06_Clustering_with_K-Means.ipynb
gpl-3.0
# import from sklearn.cluster import KMeans, MiniBatchKMeans from sklearn.linear_model import LogisticRegression from sklearn import metrics from sklearn.utils import shuffle import mahotas as mh from mahotas.features import surf import glob import numpy as np import matplotlib.pyplot as plt from scipy.spatial.distance import cdist %matplotlib inline cluster1 = np.random.uniform(0.5, 1.5, (2, 10)) cluster2 = np.random.uniform(3.5, 4.5, (2, 10)) x = np.array([1, 2, 3, 1, 5, 6, 5, 5, 6, 7, 8, 9, 7, 9]) y = np.array([1, 3, 2, 2, 8, 6, 7, 6, 7, 1, 2, 1, 1, 3]) plt.plot(x,y, 'ob') plt.margins(0.2) X = np.hstack((cluster1, cluster2)).T X = np.vstack((x, y)).T K = range(1, 10) meandistortions = [] for k in K: kmeans = KMeans(n_clusters=k) kmeans.fit(X) meandistortions.append(sum(np.min(cdist(X, kmeans.cluster_centers_, 'euclidean'), axis=1)) / X.shape[0]) plt.plot(K, meandistortions, 'bx-') plt.xlabel('k') plt.ylabel('Average distortion') plt.title('Selecting k with the Elbow Method') plt.show() """ Explanation: 6: Clustering with K-Means The goal of unsupervised learning is to discover hidden structure or patterns in unlabeled training data. Clustering, or cluster analysis, is the task of grouping observations such that members of the same group, or cluster, are more similar to each other by a given metric than they are to the members of the other clusters. Clustering with the K-Means algorithm K-Means is an iterative process of moving the centers of the clusters, or the centroids, to the mean position of their constituent points, and re-assigning instances to their closest clusters. The titular K is a hyperparameter that specifies the number of clusters that should be created; K-Means automatically assigns observations to clusters but cannot determine the appropriate number of clusters. K must be a positive integer that is less than the number of instances in the training set. The parameters of K-Means are the positions of the clusters' centroids and the observations that are assigned to each cluster. Like generalized linear models and decision trees, the optimal values of K-Means' parameters are found by minimizing a cost function. The cost function for K-Means is given by the following equation: $$ J = \sum_{k=1}^{K} \sum_{i \epsilon C_k} || x_i - \mu_k || ^2 $$ In the preceding equation, k μ is the centroid for the cluster k . The cost function sums the distortions of the clusters. Each cluster's distortion is equal to the sum of the squared distances between its centroid and its constituent instances. The distortion is small for compact clusters and large for clusters that contain scattered instances. The parameters that minimize the cost function are learned through an iterative process of assigning observations to clusters and then moving the clusters. In practice, setting the centroids' positions equal to the positions of randomly selected observations yields the best results. During each iteration, K-Means assigns observations to the cluster that they are closest to, and then moves the centroids to their assigned observations' mean location. The elbow method If K is not specified by the problem's context, the optimal number of clusters can be estimated using a technique called the elbow method. The elbow method plots the value of the cost function produced by different values of K. As K increases, the average distortion will decrease; each cluster will have fewer constituent instances, and the instances will be closer to their respective centroids. 
However, the improvements to the average distortion will decline as K increases. The value of K at which the improvement to the distortion declines the most is called the elbow. End of explanation """ plt.figure(figsize=(12,9)) plt.subplot(3, 2, 1) x1 = np.array([1, 2, 3, 1, 5, 6, 5, 5, 6, 7, 8, 9, 7, 9]) x2 = np.array([1, 3, 2, 2, 8, 6, 7, 6, 7, 1, 2, 1, 1, 3]) X = np.array(list(zip(x1, x2))).reshape(len(x1), 2) #print(list(zip(x1, x2))) plt.xlim([0, 10]) plt.ylim([0, 10]) plt.title('Instances') plt.scatter(x1, x2) colors = ['b', 'g', 'r', 'c', 'm', 'y', 'k', 'b'] markers = ['o', 's', 'D', 'v', '^', 'p', '*', '+'] tests = [2, 3, 4, 5, 8] subplot_counter = 1 for t in tests: subplot_counter += 1 plt.subplot(3, 2, subplot_counter) kmeans_model = KMeans(n_clusters=t).fit(X) for i, l in enumerate(kmeans_model.labels_): plt.plot(x1[i], x2[i], color=colors[l], marker=markers[l], ls='None') plt.xlim([0, 10]) plt.ylim([0, 10]) plt.title('K = %s, silhouette coefficient = %.03f' % ( t, metrics.silhouette_score(X, kmeans_model.labels_, metric='euclidean'))) plt.show() """ Explanation: Evaluating clusters The silhouette coefficient is a measure of the compactness and separation of the clusters. It increases as the quality of the clusters increases; it is large for compact clusters that are far from each other and small for large, overlapping clusters. The silhouette coefficient is calculated per instance; for a set of instances, it is calculated as the mean of the individual samples' scores. The silhouette coefficient for an instance is calculated with the following equation: $$ s = \frac{b-a}{\max(a,b)} $$ a is the mean distance between the instances in the cluster. b is the mean distance between the instance and the instances in the next closest cluster. 
End of explanation """ original_img = np.array(mh.imread('data/atul.jpg'), dtype=np.float64) / 255 original_dimensions = tuple(original_img.shape) width, height, depth = tuple(original_img.shape) image_flattened = np.reshape(original_img, (width * height, depth)) image_array_sample = shuffle(image_flattened, random_state=0)[:1000] estimator = KMeans(n_clusters=64, random_state=0) estimator.fit(image_array_sample) cluster_assignments = estimator.predict(image_flattened) compressed_palette = estimator.cluster_centers_ compressed_img = np.zeros((width, height, compressed_palette.shape[1])) label_idx = 0 for i in range(width): for j in range(height): compressed_img[i][j] = compressed_palette[cluster_assignments[label_idx]] label_idx += 1 plt.subplot(122) plt.title('Original Image') plt.imshow(original_img) plt.axis('off') plt.subplot(121) plt.title('Compressed Image') plt.imshow(compressed_img) plt.axis('off') plt.show() """ Explanation: Image Quantization End of explanation """ all_instance_filenames = [] all_instance_targets = [] for f in glob.glob('C:/Users/atul.singh/Downloads/cat_dog_test/test1/*.jpg'): target = 1 if 'cat' in f else 0 all_instance_filenames.append(f) all_instance_targets.append(target) surf_features = [] counter = 0 for f in all_instance_filenames: #print('Reading image:', f) image = mh.imread(f, as_grey=True) surf_features.append(surf.surf(image)[:, 5:]) train_len = int(len(all_instance_filenames) * .60) X_train_surf_features = np.concatenate(surf_features[:train_len]) X_test_surf_features = np.concatenate(surf_features[train_len:]) y_train = all_instance_targets[:train_len] y_test = all_instance_targets[train_len:] n_clusters = 300 print ('Clustering', len(X_train_surf_features), 'features') estimator = MiniBatchKMeans(n_clusters=n_clusters) estimator.fit_transform(X_train_surf_features) X_train = [] for instance in surf_features[:train_len]: clusters = estimator.predict(instance) features = np.bincount(clusters) if len(features) < n_clusters: features = np.append(features, np.zeros((1, n_clusters - len(features)))) X_train.append(features) X_test = [] for instance in surf_features[train_len:]: clusters = estimator.predict(instance) features = np.bincount(clusters) if len(features) < n_clusters: features = np.append(features, np.zeros((1, n_clusters - len(features)))) X_test.append(features) clf = LogisticRegression(C=0.001, penalty='l2') clf.fit(X_train, y_train) predictions = clf.predict(X_test) print(metrics.classification_report(y_test, predictions)) print('Precision: ', metrics.precision_score(y_test, predictions)) print('Recall: ', metrics.recall_score(y_test, predictions)) print('Accuracy: ', metrics.accuracy_score(y_test, predictions)) """ Explanation: Clustering to learn features End of explanation """
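The histogram-building step above (np.bincount followed by zero padding up to n_clusters) is easy to get wrong, so here is a tiny self-contained sketch with made-up cluster assignments (the numbers are illustrative only) showing the fixed-length bag-of-visual-words vector it produces:

import numpy as np

n_clusters = 5
# Hypothetical cluster assignments for the SURF descriptors of one image
clusters = np.array([0, 2, 2, 1, 0])

features = np.bincount(clusters)    # counts per cluster id actually seen
if len(features) < n_clusters:      # pad so every image gets a vector of length n_clusters
    features = np.append(features, np.zeros(n_clusters - len(features)))
print(features)  # [2. 1. 2. 0. 0.]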
jbn/vaquero
demo/Module_Demo.ipynb
mit
data = [{'user_name': "Jack", 'user_age': "42.0"}, {'user_name': "Jill", 'user_age': 64}, {'user_name': "Jane", 'user_age': "lamp"}] """ Explanation: This notebook demonstrates vaquero. Let's say you are processing some html files for users. Someone on your team already used css selectors to extract a dict of attributes that looks like: End of explanation """ !cat username_pipeline.py from vaquero import ModulePipeline, Vaquero import username_pipeline """ Explanation: You create a pipeline in a file named username_pipeline.py with contents: End of explanation """ vaq = Vaquero() pipeline = ModulePipeline(username_pipeline) vaq.register_targets(pipeline) """ Explanation: After importing necessities, you: - create a vaquero object which gathers the results of your pipeline's applications - create a module pipeline, which wraps and parses the python module - register the targets in the pipeline, so vaquero knows what to observe. ​ End of explanation """ vaq.reset() clean = [] for doc in data: with vaq: # Capture exceptions. d = {} pipeline(doc, d) clean.append(d) vaq.stats() """ Explanation: Now, you can run your pipeline over the data, piece by piece. I usually reset the vaq object at the top of the processing cell. This way, I'm not accidentally looking at stale errors, which happens a lot. End of explanation """ vaq.examine('_robust_int') """ Explanation: The stats show one error. You can examine the entire set of errors for some offending function with: End of explanation """ vaq.examine('_robust_int', '[*].exc_value') """ Explanation: But, more often than not, the exception values are sufficient: End of explanation """ pipeline.reload() """ Explanation: Perhaps, you see a bug in your code. Fix it in the pipeline python file, then do End of explanation """ clean """ Explanation: And try again. (Edit username_pipline) In the end, you have clean data, and a semi-decent code base. End of explanation """
GoogleCloudPlatform/dfcx-scrapi
examples/template.ipynb
apache-2.0
# Copyright 2021 Google LLC # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """ Explanation: NOTEBOOK TEMPLATE (Remove This Cell Before Publishing) This .ipynb file can be used as a template to create new example notebooks to share with users. Guidelines Some strict guidelines to follow for creating and publishing a notebook in this repo: 1. All notebooks must include the Apache License. 2. Notebooks should not contain any PII, PHI or other sensitive information either in the form of variables or output. - As a best practice (where possible) write sample output structure in a markdown cell with fake data rather than using real data PRs that do not adhere to these guidelines will be rejected. Best Practices A few best practices to consider when creating and publishing a notebook in this repo: 1. Notebooks should strive to have clear, concise instructions. 2. Notebooks should run 'as-is' without the need for the user to write additional code. 3. Be descriptive with your Section titles. - Avoid generic titles like Main or Run - Instead, describe what your Section is about to do like Extract Intents and Training Phrases In each section below we've included sample instructions and minimal code as a pointer for the template. If you need further motivation, see this sample notebook End of explanation """ !pip install dfcx-scrapi """ Explanation: Introduction In this notebook, we will show you how to \<INSERT TOPIC OF NOTEBOOK HERE>. Example: In this notebook, we will show you how to extract all Intents and Training Phrases from a Dialogflow CX Agent into a Pandas DataFrame. Prerequisites Prereq #1 Goes Here Prereq #2 Goes Here - Document Link End of explanation """ from dfcx_scrapi.core.intents import Intents """ Explanation: Imports \<INSERT INFORMATION ABOUT NON-STANDARD IMPORTS HERE> Example: We're importing the tqdm library to build a progress bar for our long running function in this notebook. End of explanation """ creds_path = '<YOUR_CREDS_PATH_HERE>' agent_id = '<YOUR_AGENT_ID_HERE>' intent_subset = ['confirmation.yes','confirmation.no'] """ Explanation: User Inputs In the next section, we will collect runtime variables needed to execute this notebook. This should be the only cell of the notebook you need to edit in order for this notebook to run. \<INSERT INFORMATION ABOUT WHAT INPUTS YOUR USER WILL NEED TO PROVIDE> Example: For this notebook, we'll need the following information: - creds_path: Your local path to your GCP Service Account Credentials - agent_id: Your Dialogflow CX Agent ID in String format - intent_subset: A list of strings containing the Intent Names to include End of explanation """ intents = Intents(creds_path=creds_path, agent_id=agent_id) if intent_subset: all_intents = intents.bulk_intent_to_df(intent_subset=intent_subset) else: all_intents = intents.bulk_intent_to_df() """ Explanation: Extract Intents and Training Phrases End of explanation """ all_intents.head(10) """ Explanation: View Results Sample End of explanation """
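If your example needs to persist what it extracted, a light-touch follow-up (not required by the template) is standard pandas I/O; the filename below is just an example:

# Persist the extracted intents and training phrases for later analysis
output_path = 'intents_and_training_phrases.csv'  # example filename
all_intents.to_csv(output_path, index=False)
print(f'Wrote {len(all_intents)} rows to {output_path}')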
jinntrance/MOOC
coursera/ml-classification/assignments/module-5-decision-tree-assignment-1-blank.ipynb
cc0-1.0
import graphlab graphlab.canvas.set_target('ipynb') """ Explanation: Identifying safe loans with decision trees The LendingClub is a peer-to-peer leading company that directly connects borrowers and potential lenders/investors. In this notebook, you will build a classification model to predict whether or not a loan provided by LendingClub is likely to default. In this notebook you will use data from the LendingClub to predict whether a loan will be paid off in full or the loan will be charged off and possibly go into default. In this assignment you will: Use SFrames to do some feature engineering. Train a decision-tree on the LendingClub dataset. Visualize the tree. Predict whether a loan will default along with prediction probabilities (on a validation set). Train a complex tree model and compare it to simple tree model. Let's get started! Fire up Graphlab Create Make sure you have the latest version of GraphLab Create. If you don't find the decision tree module, then you would need to upgrade GraphLab Create using pip install graphlab-create --upgrade End of explanation """ loans = graphlab.SFrame('lending-club-data.gl/') """ Explanation: Load LendingClub dataset We will be using a dataset from the LendingClub. A parsed and cleaned form of the dataset is availiable here. Make sure you download the dataset before running the following command. End of explanation """ loans.column_names() """ Explanation: Exploring some features Let's quickly explore what the dataset looks like. First, let's print out the column names to see what features we have in this dataset. End of explanation """ loans['grade'].show() """ Explanation: Here, we see that we have some feature columns that have to do with grade of the loan, annual income, home ownership status, etc. Let's take a look at the distribution of loan grades in the dataset. End of explanation """ loans['home_ownership'].show() """ Explanation: We can see that over half of the loan grades are assigned values B or C. Each loan is assigned one of these grades, along with a more finely discretized feature called sub_grade (feel free to explore that feature column as well!). These values depend on the loan application and credit report, and determine the interest rate of the loan. More information can be found here. Now, let's look at a different feature. End of explanation """ # safe_loans = 1 => safe # safe_loans = -1 => risky loans['safe_loans'] = loans['bad_loans'].apply(lambda x : +1 if x==0 else -1) loans = loans.remove_column('bad_loans') """ Explanation: This feature describes whether the loanee is mortaging, renting, or owns a home. We can see that a small percentage of the loanees own a home. Exploring the target column The target column (label column) of the dataset that we are interested in is called bad_loans. In this column 1 means a risky (bad) loan 0 means a safe loan. In order to make this more intuitive and consistent with the lectures, we reassign the target to be: * +1 as a safe loan, * -1 as a risky (bad) loan. We put this in a new column called safe_loans. End of explanation """ loans['safe_loans'].show(view = 'Categorical') """ Explanation: Now, let us explore the distribution of the column safe_loans. This gives us a sense of how many safe and risky loans are present in the dataset. 
End of explanation """ features = ['grade', # grade of the loan 'sub_grade', # sub-grade of the loan 'short_emp', # one year or less of employment 'emp_length_num', # number of years of employment 'home_ownership', # home_ownership status: own, mortgage or rent 'dti', # debt to income ratio 'purpose', # the purpose of the loan 'term', # the term of the loan 'last_delinq_none', # has borrower had a delinquincy 'last_major_derog_none', # has borrower had 90 day or worse rating 'revol_util', # percent of available credit being used 'total_rec_late_fee', # total late fees received to day ] target = 'safe_loans' # prediction target (y) (+1 means safe, -1 is risky) # Extract the feature columns and target column loans = loans[features + [target]] """ Explanation: You should have: * Around 81% safe loans * Around 19% risky loans It looks like most of these loans are safe loans (thankfully). But this does make our problem of identifying risky loans challenging. Features for the classification algorithm In this assignment, we will be using a subset of features (categorical and numeric). The features we will be using are described in the code comments below. If you are a finance geek, the LendingClub website has a lot more details about these features. End of explanation """ safe_loans_raw = loans[loans[target] == +1] risky_loans_raw = loans[loans[target] == -1] print "Number of safe loans : %s" % len(safe_loans_raw) print "Number of risky loans : %s" % len(risky_loans_raw) """ Explanation: What remains now is a subset of features and the target that we will use for the rest of this notebook. Sample data to balance classes As we explored above, our data is disproportionally full of safe loans. Let's create two datasets: one with just the safe loans (safe_loans_raw) and one with just the risky loans (risky_loans_raw). End of explanation """ print "Percentage of safe loans :%s" % (len(safe_loans_raw) * 1.0 /(len(safe_loans_raw) + len(risky_loans_raw))), print "Percentage of risky loans :%s" % (len(risky_loans_raw) * 1.0 /(len(safe_loans_raw) + len(risky_loans_raw))), """ Explanation: Now, write some code to compute below the percentage of safe and risky loans in the dataset and validate these numbers against what was given using .show earlier in the assignment: End of explanation """ # Since there are fewer risky loans than safe loans, find the ratio of the sizes # and use that percentage to undersample the safe loans. percentage = len(risky_loans_raw)/float(len(safe_loans_raw)) risky_loans = risky_loans_raw safe_loans = safe_loans_raw.sample(percentage, seed=1) # Append the risky_loans with the downsampled version of safe_loans loans_data = risky_loans.append(safe_loans) """ Explanation: One way to combat class imbalance is to undersample the larger class until the class distribution is approximately half and half. Here, we will undersample the larger class (safe loans) in order to balance out our dataset. This means we are throwing away many data points. We used seed=1 so everyone gets the same results. End of explanation """ print "Percentage of safe loans :", len(safe_loans) / float(len(loans_data)) print "Percentage of risky loans :", len(risky_loans) / float(len(loans_data)) print "Total number of loans in our new dataset :", len(loans_data) """ Explanation: Now, let's verify that the resulting percentage of safe and risky loans are each nearly 50%. 
End of explanation """ train_data, validation_data = loans_data.random_split(.8, seed=1) """ Explanation: Note: There are many approaches for dealing with imbalanced data, including some where we modify the learning algorithm. These approaches are beyond the scope of this course, but some of them are reviewed in this paper. For this assignment, we use the simplest possible approach, where we subsample the overly represented class to get a more balanced dataset. In general, and especially when the data is highly imbalanced, we recommend using more advanced methods. Split data into training and validation sets We split the data into training and validation sets using an 80/20 split and specifying seed=1 so everyone gets the same results. Note: In previous assignments, we have called this a train-test split. However, the portion of data that we don't train on will be used to help select model parameters (this is known as model selection). Thus, this portion of data should be called a validation set. Recall that examining performance of various potential models (i.e. models with different parameters) should be on validation set, while evaluation of the final selected model should always be on test data. Typically, we would also save a portion of the data (a real test set) to test our final model on or use cross-validation on the training set to select our final model. But for the learning purposes of this assignment, we won't do that. End of explanation """ decision_tree_model = graphlab.decision_tree_classifier.create(train_data, validation_set=None, target = target, features = features) """ Explanation: Use decision tree to build a classifier Now, let's use the built-in GraphLab Create decision tree learner to create a loan prediction model on the training data. (In the next assignment, you will implement your own decision tree learning algorithm.) Our feature columns and target column have already been decided above. Use validation_set=None to get the same results as everyone else. End of explanation """ small_model = graphlab.decision_tree_classifier.create(train_data, validation_set=None, target = target, features = features, max_depth = 2) """ Explanation: Visualizing a learned model As noted in the documentation, typically the max depth of the tree is capped at 6. However, such a tree can be hard to visualize graphically. Here, we instead learn a smaller model with max depth of 2 to gain some intuition by visualizing the learned tree. End of explanation """ small_model.show(view="Tree") """ Explanation: In the view that is provided by GraphLab Create, you can see each node, and each split at each node. This visualization is great for considering what happens when this model predicts the target of a new data point. Note: To better understand this visual: * The root node is represented using pink. * Intermediate nodes are in green. * Leaf nodes in blue and orange. End of explanation """ validation_safe_loans = validation_data[validation_data[target] == 1] validation_risky_loans = validation_data[validation_data[target] == -1] sample_validation_data_risky = validation_risky_loans[0:2] sample_validation_data_safe = validation_safe_loans[0:2] sample_validation_data = sample_validation_data_safe.append(sample_validation_data_risky) sample_validation_data """ Explanation: Making predictions Let's consider two positive and two negative examples from the validation set and see what the model predicts. We will do the following: * Predict whether or not a loan is safe. 
* Predict the probability that a loan is safe. End of explanation """ decision_tree_model.predict(sample_validation_data) """ Explanation: Explore label predictions Now, we will use our model to predict whether or not a loan is likely to default. For each row in the sample_validation_data, use the decision_tree_model to predict whether or not the loan is classified as a safe loan. Hint: Be sure to use the .predict() method. End of explanation """ decision_tree_model.predict(sample_validation_data, output_type="probability") """ Explanation: Quiz Question: What percentage of the predictions on sample_validation_data did decision_tree_model get correct? Explore probability predictions For each row in the sample_validation_data, what is the probability (according decision_tree_model) of a loan being classified as safe? Hint: Set output_type='probability' to make probability predictions using decision_tree_model on sample_validation_data: End of explanation """ small_model.predict(sample_validation_data, output_type="probability") """ Explanation: Quiz Question: Which loan has the highest probability of being classified as a safe loan? Checkpoint: Can you verify that for all the predictions with probability &gt;= 0.5, the model predicted the label +1? Tricky predictions! Now, we will explore something pretty interesting. For each row in the sample_validation_data, what is the probability (according to small_model) of a loan being classified as safe? Hint: Set output_type='probability' to make probability predictions using small_model on sample_validation_data: End of explanation """ sample_validation_data[1] """ Explanation: Quiz Question: Notice that the probability preditions are the exact same for the 2nd and 3rd loans. Why would this happen? Visualize the prediction on a tree Note that you should be able to look at the small tree, traverse it yourself, and visualize the prediction being made. Consider the following point in the sample_validation_data End of explanation """ small_model.show(view="Tree") """ Explanation: Let's visualize the small tree here to do the traversing for this data point. End of explanation """ small_model.predict(sample_validation_data[1]) """ Explanation: Note: In the tree visualization above, the values at the leaf nodes are not class predictions but scores (a slightly advanced concept that is out of the scope of this course). You can read more about this here. If the score is $\geq$ 0, the class +1 is predicted. Otherwise, if the score < 0, we predict class -1. Quiz Question: Based on the visualized tree, what prediction would you make for this data point? Now, let's verify your prediction by examining the prediction made using GraphLab Create. Use the .predict function on small_model. End of explanation """ print small_model.evaluate(train_data)['accuracy'] print decision_tree_model.evaluate(train_data)['accuracy'] """ Explanation: Evaluating accuracy of the decision tree model Recall that the accuracy is defined as follows: $$ \mbox{accuracy} = \frac{\mbox{# correctly classified examples}}{\mbox{# total examples}} $$ Let us start by evaluating the accuracy of the small_model and decision_tree_model on the training data End of explanation """ print small_model.evaluate(validation_data)['accuracy'] print decision_tree_model.evaluate(validation_data)['accuracy'] """ Explanation: Checkpoint: You should see that the small_model performs worse than the decision_tree_model on the training data. 
Now, let us evaluate the accuracy of the small_model and decision_tree_model on the entire validation_data, not just the subsample considered above. End of explanation """ big_model = graphlab.decision_tree_classifier.create(train_data, validation_set=None, target = target, features = features, max_depth = 10) """ Explanation: Quiz Question: What is the accuracy of decision_tree_model on the validation set, rounded to the nearest .01? Evaluating accuracy of a complex decision tree model Here, we will train a large decision tree with max_depth=10. This will allow the learned tree to become very deep, and result in a very complex model. Recall that in lecture, we prefer simpler models with similar predictive power. This will be an example of a more complicated model which has similar predictive power, i.e. something we don't want. End of explanation """ print big_model.evaluate(train_data)['accuracy'] print big_model.evaluate(validation_data)['accuracy'] """ Explanation: Now, let us evaluate big_model on the training set and validation set. End of explanation """ predictions = decision_tree_model.predict(validation_data) """ Explanation: Checkpoint: We should see that big_model has even better performance on the training set than decision_tree_model did on the training set. Quiz Question: How does the performance of big_model on the validation set compare to decision_tree_model on the validation set? Is this a sign of overfitting? Quantifying the cost of mistakes Every mistake the model makes costs money. In this section, we will try and quantify the cost of each mistake made by the model. Assume the following: False negatives: Loans that were actually safe but were predicted to be risky. This results in an oppurtunity cost of losing a loan that would have otherwise been accepted. False positives: Loans that were actually risky but were predicted to be safe. These are much more expensive because it results in a risky loan being given. Correct predictions: All correct predictions don't typically incur any cost. Let's write code that can compute the cost of mistakes made by the model. Complete the following 4 steps: 1. First, let us compute the predictions made by the model. 1. Second, compute the number of false positives. 2. Third, compute the number of false negatives. 3. Finally, compute the cost of mistakes made by the model by adding up the costs of true positives and false positives. First, let us make predictions on validation_data using the decision_tree_model: End of explanation """ false_positives = 0 for i in range(0, predictions.size()): if 1 == predictions[i] and -1 == validation_data[i]['safe_loans']: false_positives = false_positives + 1 print false_positives """ Explanation: False positives are predictions where the model predicts +1 but the true label is -1. Complete the following code block for the number of false positives: End of explanation """ false_negatives = 0 for i in range(0, predictions.size()): if -1 == predictions[i] and 1 == validation_data[i]['safe_loans']: false_negatives = false_negatives + 1 print false_negatives """ Explanation: False negatives are predictions where the model predicts -1 but the true label is +1. Complete the following code block for the number of false negatives: End of explanation """ print false_positives*20000 + false_negatives * 10000 """ Explanation: Quiz Question: Let us assume that each mistake costs money: * Assume a cost of \$10,000 per false negative. * Assume a cost of \$20,000 per false positive. 
What is the total cost of mistakes made by decision_tree_model on validation_data? End of explanation """
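The same bookkeeping can be wrapped in a small reusable helper; this sketch uses plain Python so it does not rely on any GraphLab-specific API, and the cost constants simply mirror the quiz assumptions above:

def cost_of_mistakes(predictions, labels, fp_cost=20000, fn_cost=10000):
    # predictions and labels are iterables of +1 / -1 class labels
    fp = sum(1 for p, y in zip(predictions, labels) if p == +1 and y == -1)
    fn = sum(1 for p, y in zip(predictions, labels) if p == -1 and y == +1)
    return fp * fp_cost + fn * fn_cost

# Illustrative check on tiny made-up vectors (not the validation set):
print cost_of_mistakes([+1, -1, +1, -1], [-1, -1, +1, +1])  # 1 FP + 1 FN = 30000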
arnoldlu/lisa
ipynb/tutorial/02_TestEnvUsage.ipynb
apache-2.0
import logging from conf import LisaLogging LisaLogging.setup() # Execute this cell to enabled devlib debugging statements logging.getLogger('ssh').setLevel(logging.DEBUG) # Other python modules required by this notebook import json import time import os """ Explanation: Tutorial goal This tutorial aims to show how to configure a test environment using the TestEnv module provided by LISA. Configure logging End of explanation """ # Custom scrips must be deployed under $LISA_HOME/tools !tree ../../tools # This is the (not so fancy) script we want to deploy !cat ../../tools/scripts/cpuidle_sampling.sh """ Explanation: Test environment setup Do you have custom scripts to deploy and use on target? End of explanation """ # You can have a look at the devlib supported modules by lising the devlib_modules_folder = 'libs/devlib/devlib/module/' logging.info("Devlib provided modules are found under:") logging.info(" $LISA_HOME/{}".format(devlib_modules_folder)) !cd ../../ ; find {devlib_modules_folder} -name "*.py" | sed 's|libs/devlib/devlib/module/| |' | grep -v __init__ """ Explanation: Which devlib modules you need for your experiments? End of explanation """ # Setup a target configuration conf = { # Define the kind of target platform to use for the experiments "platform" : 'linux', # platform type, valid other options are: # android - access via ADB # linux - access via SSH # host - direct access # Preload settings for a specific target "board" : 'juno', # board type, valid options are: # - juno - JUNO Development Board # - tc2 - TC2 Development Board # Login credentials "host" : "192.168.0.1", "username" : "root", "password" : "", # Custom tools to deploy on target, they must be placed under: # $LISA_HOME/tools/(ARCH|scripts) "tools" : [ "cpuidle_sampling.sh" ], # FTrace configuration "ftrace" : { "events" : [ "cpu_idle", "sched_switch", ], "buffsize" : 10240, }, # Where results are collected "results_dir" : "TestEnvExample", # Devlib module required (or not required) 'modules' : [ "cpufreq", "cgroups" ], #"exclude_modules" : [ "hwmon" ], # Local installation path used for kernel/dtb installation on target # The specified path MUST be accessible from the board, e.g. # - JUNO/TC2: it can be the mount path of the VMESD disk image # - Other board: it can be a TFTP server path used by the board bootloader "tftp" : { "folder" : "/var/lib/tftpboot", "kernel" : "kern.bin", "dtb" : "dtb.bin", }, } from env import TestEnv # Initialize a test environment using the provided configuration te = TestEnv(conf) """ Explanation: Setup you TestEnv confguration End of explanation """ # The complete configuration of the target we have configured print json.dumps(te.conf, indent=4) # Last configured kernel and DTB image print te.kernel print te.dtb # The IP and MAC address of the target print te.ip print te.mac # A full platform descriptor print json.dumps(te.platform, indent=4) # A pre-created folder to host the tests results generated using this # test environment, notice that the suite could add additional information # in this folder, like for example a copy of the target configuration # and other target specific collected information te.res_dir # The working directory on the target te.workdir """ Explanation: Attributes The initialization of the test environment pre-initialize some useful<br> environment variables which are available to write test cases. These are some of the information available via the TestEnv object. 
End of explanation """ # Calibrate RT-App (if required) and get the most updated calibration value te.calibration() # Generate a JSON file with the complete platform description te.platform_dump(dest_dir='/tmp') # Force a reboot of the target (and wait specified [s] before reconnect) # te.reboot(reboot_time=60, ping_time=15) # Resolve a MAC address into an IP address # te.resolv_host(host='00:02:F7:00:5A:5B') # Copy the specified file into the TFTP server folder defined by configuration te.tftp_deploy('/etc/group') !ls -la /var/lib/tftpboot """ Explanation: Functions Some methods are also exposed to test developers which could be used to easy the creation of tests. These are some of the methods available: End of explanation """ # Run a command on the target te.target.execute("echo -n 'Hello Test Environment'", as_root=False) # Spawn a command in background on the target logging.info("Spawn a task which will run for a while...") process = te.target.kick_off("sleep 10", as_root=True) output = te.target.execute("ps") print '\n'.join(output.splitlines()) """ Explanation: Access to the devlib API A special TestEnv attribute is <b>target</b>, which represents a <b>devlib instance</b>. Using the target attribute we can access to the full set of devlib provided functionalities. Which are summarized in the following sections. Remotes commands execution End of explanation """ my_script = te.target.get_installed("cpuidle_sampling.sh") print my_script output = te.target.execute(my_script, as_root=True) output.splitlines() """ Explanation: Notice that the Shell PID is always the same for all commands we execute.<br> This is due to devlib ensuring to keep a persistent connection with the target device. Running custom scripts End of explanation """ # We can also use "notebook embedded" scripts # my_script = " \ # for I in $(seq 3); do \ # grep '' /sys/devices/system/cpu/cpu*/cpufreq/stats/time_in_stats | \ # sed -e 's|/sys/devices/system/cpu/cpu||' -e 's|/cpufreq/scaling_governor:| |' \ # sleep 1 \ # done \ # " # print te.target.execute(my_script) """ Explanation: Notice that the output is returned as a list of lines. This provides a useful base for post-processing the output of that command. 
End of explanation """ # Acces to many target specific information print "ABI : ", te.target.abi print "big Core Family : ", te.target.big_core print "LITTLE Core Family : ", te.target.little_core print "CPU's Clusters IDs : ", te.target.core_clusters print "CPUs type : ", te.target.core_names # Access to big.LITTLE specific information print "big CPUs IDs : ", te.target.bl.bigs print "LITTLE CPUs IDs : ", te.target.bl.littles print "big CPUs freqs : {}".format(te.target.bl.get_bigs_frequency()) print "big CPUs governor : {}".format(te.target.bl.get_bigs_governor()) """ Explanation: Access to target specific attributes End of explanation """ # You can use autocompletion to have a look at the supported method for a # specific module te.target.cpufreq #.get_all_governors() # Get goverors available for CPU0 te.target.cpufreq.list_governors(0) # Set the "ondemand" governor te.target.cpufreq.set_governor(0, 'ondemand') # Check governor tunables te.target.cpufreq.get_governor_tunables(0) # Update governor tunables te.target.cpufreq.set_governor_tunables(0, sampling_rate=2000000) te.target.cpufreq.get_governor_tunables(0) """ Explanation: Modules usage example: CPUFreq End of explanation """ logging.info('%14s - Available controllers:', 'CGroup') ssys = target.cgroups.list_subsystems() for (n,h,g,e) in ssys: print '{:10} (hierarchy id: {:d}) has {} cgroups'.format(n, h, g) # Get a reference to the CPUSet controller cpuset = target.cgroups.controller('cpuset') # Get the list of current configured CGroups for that controller cgroups = cpuset.list_all() print 'Existing CGropups:' for cg in cgroups: print " ", cg # Create a LITTLE partition and check which tunables we have cpuset_littles = cpuset.cgroup('/LITTLE') cpuset_littles.get() # Setup CPUs and MEMORY nodes for the LITTLE partition cpuset_littles.set(cpus=te.target.bl.littles, mems=0) # Dump the configuraiton of each controller for cgname in cgroups: cgroup = cpuset.cgroup(cgname) attrs = cgroup.get() cpus = attrs['cpus'] print '{}:{:<15} cpus: {}'.format(cpuset.kind, cgroup.name, cpus) # Methods exists to move tasks in/out and in between groups # cpuset_littles.add_task() """ Explanation: Modules usage example: CGroups End of explanation """ # Reset and sample energy counters te.emeter.reset() # Sleep some time time.sleep(2) # Sample energy consumption since last reset nrg = te.emeter.sample() nrg = json.dumps(te.emeter.sample(), indent=4) print "First read: ", nrg # Sleep some more time time.sleep(2) # Sample again nrg = te.emeter.sample() nrg = json.dumps(te.emeter.sample(), indent=4) print "Second read: ", nrg """ Explanation: Sample energy from the target End of explanation """ # Configure a specific set of events to trace te.ftrace_conf( { "events" : [ "cpu_idle", "cpu_capacity", "cpu_frequency", "sched_switch", ], "buffsize" : 10240 } ) # Start/Stop a FTrace session te.ftrace.start() te.target.execute("uname -a") te.ftrace.stop() # Collect and visualize the trace trace_file = os.path.join(te.res_dir, 'trace.dat') te.ftrace.get_trace(trace_file) output = os.popen("DISPLAY=:0.0 kernelshark {}".format(trace_file)) """ Explanation: Configure FTrace for a sepcific experiment End of explanation """
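Putting the pieces together, a typical measurement pattern, sketched here using only the APIs shown above and with a dummy sleep standing in for a real workload, is to reset the counters, run something on the target, and then sample:

# Measure the energy consumed while a workload runs on the target
te.emeter.reset()
te.target.execute("sleep 5")   # replace "sleep 5" with a real workload
nrg_report = te.emeter.sample()
print json.dumps(nrg_report, indent=4)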
jserenson/Python_Bootcamp
Lists.ipynb
gpl-3.0
# Assign a list to an variable named my_list my_list = [1,2,3] """ Explanation: Lists Earlier when discussing strings we introduced the concept of a sequence in Python. Lists can be thought of the most general version of a sequence in Python. Unlike strings, they are mutable, meaning the elements inside a list can be changed! In this section we will learn about: 1.) Creating lists 2.) Indexing and Slicing Lists 3.) Basic List Methods 4.) Nesting Lists 5.) Introduction to List Comprehensions Lists are constructed with brackets [] and commas separating every element in the list. Let's go ahead and see how we can construct lists! End of explanation """ my_list = ['A string',23,100.232,'o'] """ Explanation: We just created a list of integers, but lists can actually hold different object types. For example: End of explanation """ len(my_list) """ Explanation: Just like strings, the len() function will tell you how many items are in the sequence of the list. End of explanation """ my_list = ['one','two','three',4,5] # Grab element at index 0 my_list[0] # Grab index 1 and everything past it my_list[1:] # Grab everything UP TO index 3 my_list[:3] """ Explanation: Indexing and Slicing Indexing and slicing works just like in strings. Let's make a new list to remind ourselves of how this works: End of explanation """ my_list + ['new item'] """ Explanation: We can also use + to concatenate lists, just like we did for strings. End of explanation """ my_list """ Explanation: Note: This doesn't actually change the original list! End of explanation """ # Reassign my_list = my_list + ['add new item permanently'] my_list """ Explanation: You would have to reassign the list to make the change permanent. End of explanation """ # Make the list double my_list * 2 # Again doubling not permanent my_list """ Explanation: We can also use the * for a duplication method similar to strings: End of explanation """ # Create a new list l = [1,2,3] """ Explanation: Basic List Methods If you are familiar with another programming language, you might start to draw parallels between arrays in another language and lists in Python. Lists in Python however, tend to be more flexible than arrays in other languages for a two good reasons: they have no fixed size (meaning we don't have to specify how big a list will be), and they have no fixed type constraint (like we've seen above). Let's go ahead and explore some more special methods for lists: End of explanation """ # Append l.append('append me!') # Show l """ Explanation: Use the append method to permanently add an item to the end of a list: End of explanation """ # Pop off the 0 indexed item l.pop(0) # Show l # Assign the popped element, remember default popped index is -1 popped_item = l.pop() popped_item # Show remaining list l """ Explanation: Use pop to "pop off" an item from the list. By default pop takes off the last index, but you can also specify which index to pop off. Let's see an example: End of explanation """ l[100] """ Explanation: It should also be noted that lists indexing will return an error if there is no element at that index. For example: End of explanation """ new_list = ['a','e','x','b','c'] #Show new_list # Use reverse to reverse order (this is permanent!) 
new_list.reverse() new_list # Use sort to sort the list (in this case alphabetical order, but for numbers it will go ascending) new_list.sort() new_list """ Explanation: We can use the sort method and the reverse methods to also affect your lists: End of explanation """ # Let's make three lists lst_1=[1,2,3] lst_2=[4,5,6] lst_3=[7,8,9] # Make a list of lists to form a matrix matrix = [lst_1,lst_2,lst_3] # Show matrix """ Explanation: Nesting Lists A great feature of Python data structures is that they support nesting. This means we can have data structures within data structures. For example: A list inside a list. Let's see how this works! End of explanation """ # Grab first item in matrix object matrix[0] # Grab first item of the first item in the matrix object matrix[0][0] """ Explanation: Now we can again use indexing to grab elements, but now there are two levels for the index. The items in the matrix object, and then the items inside that list! End of explanation """ # Build a list comprehension by deconstructing a for loop within a [] first_col = [row[0] for row in matrix] first_col """ Explanation: List Comprehensions Python has an advanced feature called list comprehensions. They allow for quick construction of lists. To fully understand list comprehensions we need to understand for loops. So don't worry if you don't completely understand this section, and feel free to just skip it since we will return to this topic later. But in case you want to know now, here are a few examples! End of explanation """
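Here are a couple more quick comprehension examples in the same spirit (purely illustrative):

# Flatten the nested matrix into a single list with a nested comprehension
[item for row in matrix for item in row]

# Square every number from 0 to 9 in one line
[x**2 for x in range(10)]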
aam-at/tensorflow
tensorflow/lite/g3doc/tutorials/model_maker_text_classification.ipynb
apache-2.0
#@title Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """ Explanation: Copyright 2019 The TensorFlow Authors. End of explanation """ !pip install tflite-model-maker """ Explanation: Text classification with TensorFlow Lite Model Maker <table class="tfo-notebook-buttons" align="left"> <td> <a target="_blank" href="https://www.tensorflow.org/lite/tutorials/model_maker_text_classification"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a> </td> <td> <a target="_blank" href="https://colab.research.google.com/github/tensorflow/tensorflow/blob/master/tensorflow/lite/g3doc/tutorials/model_maker_text_classification.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a> </td> <td> <a target="_blank" href="https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/g3doc/tutorials/model_maker_text_classification.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a> </td> <td> <a href="https://storage.googleapis.com/tensorflow_docs/tensorflow/tensorflow/lite/g3doc/tutorials/model_maker_text_classification.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a> </td> </table> The TensorFlow Lite Model Maker library simplifies the process of adapting and converting a TensorFlow model to particular input data when deploying this model for on-device ML applications. This notebook shows an end-to-end example that utilizes the Model Maker library to illustrate the adaptation and conversion of a commonly-used text classification model to classify movie reviews on a mobile device. The text classification model classifies text into predefined categories.The inputs should be preprocessed text and the outputs are the probabilities of the categories. The dataset used in this tutorial are positive and negative movie reviews. Prerequisites Install the required packages To run this example, install the required packages, including the Model Maker package from the GitHub repo. End of explanation """ import numpy as np import os import tensorflow as tf assert tf.__version__.startswith('2') from tflite_model_maker import configs from tflite_model_maker import ExportFormat from tflite_model_maker import model_spec from tflite_model_maker import text_classifier from tflite_model_maker import TextClassifierDataLoader """ Explanation: Import the required packages. End of explanation """ data_dir = tf.keras.utils.get_file( fname='SST-2.zip', origin='https://firebasestorage.googleapis.com/v0/b/mtl-sentence-representations.appspot.com/o/data%2FSST-2.zip?alt=media&token=aabc5f6b-e466-44a2-b9b4-cf6337f84ac8', extract=True) data_dir = os.path.join(os.path.dirname(data_dir), 'SST-2') """ Explanation: Get the data path Download the dataset for this tutorial. End of explanation """ spec = model_spec.get('mobilebert_classifier') """ Explanation: You can also upload your own dataset to work through this tutorial. Upload your dataset by using the left sidebar in Colab. 
<img src="https://storage.googleapis.com/download.tensorflow.org/models/tflite/screenshots/model_maker_text_classification.png" alt="Upload File" width="800" hspace="100"> If you prefer not to upload your dataset to the cloud, you can also locally run the library by following the guide. End-to-End Workflow This workflow consists of five steps as outlined below: Step 1. Choose a model specification that represents a text classification model. This tutorial uses MobileBERT as an example. End of explanation """ train_data = TextClassifierDataLoader.from_csv( filename=os.path.join(os.path.join(data_dir, 'train.tsv')), text_column='sentence', label_column='label', model_spec=spec, delimiter='\t', is_training=True) test_data = TextClassifierDataLoader.from_csv( filename=os.path.join(os.path.join(data_dir, 'dev.tsv')), text_column='sentence', label_column='label', model_spec=spec, delimiter='\t', is_training=False) """ Explanation: Step 2. Load train and test data specific to an on-device ML app and preprocess the data according to a specific model_spec. End of explanation """ model = text_classifier.create(train_data, model_spec=spec) """ Explanation: Step 3. Customize the TensorFlow model. End of explanation """ loss, acc = model.evaluate(test_data) """ Explanation: Step 4. Evaluate the model. End of explanation """ config = configs.QuantizationConfig.create_dynamic_range_quantization(optimizations=[tf.lite.Optimize.OPTIMIZE_FOR_LATENCY]) config._experimental_new_quantizer = True model.export(export_dir='mobilebert/', quantization_config=config) """ Explanation: Step 5. Export as a TensorFlow Lite model with metadata. Since MobileBERT is too big for on-device applications, use dynamic range quantization on the model to compress it by almost 4x with minimal performance degradation. End of explanation """ spec = model_spec.get('average_word_vec') """ Explanation: You can also download the model using the left sidebar in Colab. After executing the 5 steps above, you can further use the TensorFlow Lite model file in on-device applications using BertNLClassifier API in TensorFlow Lite Task Library. The following sections walk through the example step by step to show more detail. Choose a model_spec that Represents a Model for Text Classifier Each model_spec object represents a specific model for the text classifier. TensorFlow Lite Model Maker currently supports MobileBERT, averaging word embeddings and BERT-Base models. Supported Model | Name of model_spec | Model Description --- | --- | --- MobileBERT | 'mobilebert_classifier' | 4.3x smaller and 5.5x faster than BERT-Base while achieving competitive results, suitable for on-device applications. BERT-Base | 'bert_classifier' | Standard BERT model that is widely used in NLP tasks. averaging word embedding | 'average_word_vec' | Averaging text word embeddings with RELU activation. This tutorial uses a smaller model, average_word_vec that you can retrain multiple times to demonstrate the process. End of explanation """ data_dir = tf.keras.utils.get_file( fname='SST-2.zip', origin='https://firebasestorage.googleapis.com/v0/b/mtl-sentence-representations.appspot.com/o/data%2FSST-2.zip?alt=media&token=aabc5f6b-e466-44a2-b9b4-cf6337f84ac8', extract=True) data_dir = os.path.join(os.path.dirname(data_dir), 'SST-2') """ Explanation: Load Input Data Specific to an On-device ML App The SST-2 (Stanford Sentiment Treebank) is one of the tasks in the GLUE benchmark. It contains 67,349 movie reviews for training and 872 movie reviews for validation. 
The dataset has two classes: positive and negative movie reviews. Download the archived version of the dataset and extract it. End of explanation """ train_data = TextClassifierDataLoader.from_csv( filename=os.path.join(os.path.join(data_dir, 'train.tsv')), text_column='sentence', label_column='label', model_spec=spec, delimiter='\t', is_training=True) test_data = TextClassifierDataLoader.from_csv( filename=os.path.join(os.path.join(data_dir, 'dev.tsv')), text_column='sentence', label_column='label', model_spec=spec, delimiter='\t', is_training=False) """ Explanation: The SST-2 dataset has train.tsv for training and dev.tsv for validation. The files have the following format: sentence | label --- | --- it 's a charming and often affecting journey . | 1 unflinchingly bleak and desperate | 0 A positive review is labeled 1 and a negative review is labeled 0. Use the TestClassifierDataLoader.from_csv method to load the data. End of explanation """ model = text_classifier.create(train_data, model_spec=spec, epochs=10) """ Explanation: The Model Maker library also supports the from_folder() method to load data. It assumes that the text data of the same class are in the same subdirectory and that the subfolder name is the class name. Each text file contains one movie review sample. The class_labels parameter is used to specify which the subfolders. Customize the TensorFlow Model Create a custom text classifier model based on the loaded data. End of explanation """ model.summary() """ Explanation: Examine the detailed model structure. End of explanation """ loss, acc = model.evaluate(test_data) """ Explanation: Evaluate the Customized Model Evaluate the model with the test data and get its loss and accuracy. End of explanation """ model.export(export_dir='average_word_vec/') """ Explanation: Export as a TensorFlow Lite Model Convert the existing model to TensorFlow Lite model format with metadata that you can later use in an on-device ML application. The label file and the vocab file are embedded in metadata. The default TFLite filename is model.tflite. End of explanation """ model.export(export_dir='average_word_vec/', export_format=[ExportFormat.LABEL, ExportFormat.VOCAB]) """ Explanation: The TensorFlow Lite model file can be used in the text classification reference app using NLClassifier API in TensorFlow Lite Task Library. The allowed export formats can be one or a list of the following: ExportFormat.TFLITE ExportFormat.LABEL ExportFormat.VOCAB ExportFormat.SAVED_MODEL By default, it just exports TensorFlow Lite model with metadata. You can also selectively export different files. For instance, exporting only the label file and vocab file as follows: End of explanation """ accuracy = model.evaluate_tflite('average_word_vec/model.tflite', test_data) """ Explanation: You can evalute the tflite model with evaluate_tflite method to get its accuracy. End of explanation """ new_model_spec = model_spec.AverageWordVecModelSpec(wordvec_dim=32) """ Explanation: Advanced Usage The create function is the driver function that the Model Maker library uses to create models. The model_spec parameter defines the model specification. The AverageWordVecModelSpec and BertClassifierModelSpec classes are currently supported. The create function comprises of the following steps: Creates the model for the text classifier according to model_spec. Trains the classifier model. The default epochs and the default batch size are set by the default_training_epochs and default_batch_size variables in the model_spec object. 
This section covers advanced usage topics like adjusting the model and the training hyperparameters. Adjust the model You can adjust the model infrastructure like the wordvec_dim and the seq_len variables in the AverageWordVecModelSpec class. For example, you can train the model with a larger value of wordvec_dim. Note that you must construct a new model_spec if you modify the model. End of explanation """ new_train_data = TextClassifierDataLoader.from_csv( filename=os.path.join(os.path.join(data_dir, 'train.tsv')), text_column='sentence', label_column='label', model_spec=new_model_spec, delimiter='\t', is_training=True) """ Explanation: Get the preprocessed data. End of explanation """ model = text_classifier.create(new_train_data, model_spec=new_model_spec) """ Explanation: Train the new model. End of explanation """ new_model_spec = model_spec.get('mobilebert_classifier') new_model_spec.seq_len = 256 """ Explanation: You can also adjust the MobileBERT model. The model parameters you can adjust are: seq_len: Length of the sequence to feed into the model. initializer_range: The standard deviation of the truncated_normal_initializer for initializing all weight matrices. trainable: Boolean that specifies whether the pre-trained layer is trainable. The training pipeline parameters you can adjust are: model_dir: The location of the model checkpoint files. If not set, a temporary directory will be used. dropout_rate: The dropout rate. learning_rate: The initial learning rate for the Adam optimizer. tpu: TPU address to connect to. For instance, you can set the seq_len=256 (default is 128). This allows the model to classify longer text. End of explanation """ model = text_classifier.create(train_data, model_spec=spec, epochs=20) """ Explanation: Tune the training hyperparameters You can also tune the training hyperparameters like epochs and batch_size that affect the model accuracy. For instance, epochs: more epochs could achieve better accuracy, but may lead to overfitting. batch_size: the number of samples to use in one training step. For example, you can train with more epochs. End of explanation """ loss, accuracy = model.evaluate(test_data) """ Explanation: Evaluate the newly retrained model with 20 training epochs. End of explanation """ spec = model_spec.get('bert_classifier') """ Explanation: Change the Model Architecture You can change the model by changing the model_spec. The following shows how to change to BERT-Base model. Change the model_spec to BERT-Base model for the text classifier. End of explanation """
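From here the remaining steps mirror the earlier workflow; a sketch of retraining with the BERT-Base spec (note that the data must be reloaded with the new spec, and BERT-Base training is noticeably slower and more memory hungry) would be:

# Reload the data with the BERT-Base spec, then train, evaluate and export as before
train_data = TextClassifierDataLoader.from_csv(
    filename=os.path.join(os.path.join(data_dir, 'train.tsv')),
    text_column='sentence',
    label_column='label',
    model_spec=spec,
    delimiter='\t',
    is_training=True)
test_data = TextClassifierDataLoader.from_csv(
    filename=os.path.join(os.path.join(data_dir, 'dev.tsv')),
    text_column='sentence',
    label_column='label',
    model_spec=spec,
    delimiter='\t',
    is_training=False)

model = text_classifier.create(train_data, model_spec=spec)
loss, acc = model.evaluate(test_data)
model.export(export_dir='bert_classifier/')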
skorokithakis/pythess-files
006 - Frank Underwood/schema-presentation/Schema presentation.ipynb
mit
data = { "operation": "upload", # "upload" or "delete" "timeout": 3600, # Optional, how long the sig should be valid for. "md5": "deadbeefetc", # Optional "files": { "5gbCtxlvljhx5-al": { "size": 65536, "shred_date": "2015-05-02T00:00:00Z" # Must be a date from now up to 4 months in the future. }, } } """ Explanation: The inimitable schema library Part 2 By @stavros Structured data is everywhere End of explanation """ if data.get("operation") not in ["upload", "delete"]: raise SomeError("Operation not valid.") try: timeout = int(data.get("timeout")) except ValueError: raise SomeError("Timeout not a number.") if not 0 < timeout <= 3600: raise SomeError("Timeout not up to one hour in the future.") if data.get("md5") and not isinstance(data["md5"], str): raise SomeError("md5 is not a valid MD5 hash.") if not isinstance(data.get("files"), dict): raise SomeError("files must be a dictionary.") # etc """ Explanation: How do we validate it? End of explanation """ from schema import Schema, And, Or, Optional, Use, SchemaError schema = Schema({ "foo": int, Optional("hello"): "hi!", }) data = { "hello": "hi!", "foo": 3, } schema.validate(data) schema = Schema({ "foo": int, Optional("hello"): "hi!", }) data = { "foo": 3, } schema.validate(data) schema = Schema({ "foo": int, Optional("hello"): "hi!", }) data = { "hello": "yo", "foo": 3, } try: schema.validate(data) except SchemaError as e: print e """ Explanation: <div style="text-align: center"><img src="http://i.giphy.com/NsyUZQ6OJDVdu.gif" width="960" /></div> Is there a better way? No. Just kidding, of course there is. Who asks "is there a better way?" if there's no better way? No one, that's who. Presenting the schema library. End of explanation """ schema = Schema(And(int, fetch_user_by_id)) data = "/tmp/pythess" try: print schema.validate(data) except SchemaError as e: print(e) schema = Schema(range(10)) data = [2, 4, 6, 2, 2, 2, 20] try: print schema.validate(data) except SchemaError as e: print(e) schema = Schema({ "shred_date": And( basestring, Use(ciso8601.parse_datetime_unaware), datetime.datetime, Use(lambda d: (d - datetime.datetime.now()).days), lambda d: 0 < d < 120, error="shred_date must be a valid future date string up to 120 days from now.") }) data = { "shred_date": "2016-10-10T00:00:00Z", } try: print schema.validate(data) except SchemaError as e: print(e) operations = {"upload": "PUT", "delete": "DELETE", "replace": "POST"} schema = Schema(And(Use(json.loads, error="Invalid JSON"), { "operation": And(lambda s: s in operations.keys(), Use(operations.get), error="Valid operations are: %s" % ", ".join(operations.keys())), "files": {And(basestring, lambda s: len(s) > 5, error="Filename must be a string longer than 5 characters."): { Optional("size"): And(int, lambda i: i > 0, error="Size must be a positive integer."), Optional("shred_date"): And( basestring, # Make sure it's a string. Use(ciso8601.parse_datetime_unaware), # Parse it into a date. datetime.datetime, # Make sure it's a date now. lambda d: 0 < (d - datetime.datetime.now()).days < 120, # Make sure it's in the future, up to 120 days. error="shred_date must be a valid future date string up to 120 days from now.") }}})) data = """{ "operation": "repklace", "files": { "file.nam": { "size": 100, "shred_date": "2016-01-01T00:00:00Z" }}}}""" try: print schema.validate(data) except SchemaError as e: print e """ Explanation: Tricks End of explanation """
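One more illustrative snippet (not from the original slides; the schema and variable names are made up): because Use both validates and converts, a single validate call can turn raw string input into typed data.
settings_schema = Schema({
    "page": And(Use(int), lambda n: n > 0, error="page must be a positive integer"),
    Optional("sort"): Or("asc", "desc", error="sort must be 'asc' or 'desc'"),
})

raw = {"page": "3", "sort": "desc"}
try:
    print settings_schema.validate(raw)  # -> {'page': 3, 'sort': 'desc'}
except SchemaError as e:
    print e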
michaelgat/Udacity_DL
intro-to-tflearn/TFLearn_Sentiment_Analysis-MG.ipynb
mit
import pandas as pd
import numpy as np
import tensorflow as tf
import tflearn
from tflearn.data_utils import to_categorical
from tensorflow.python.client import device_lib
print(device_lib.list_local_devices())
"""
Explanation: Sentiment analysis with TFLearn
In this notebook, we'll continue Andrew Trask's work by building a network for sentiment analysis on the movie review data. Instead of a network written with Numpy, we'll be using TFLearn, a high-level library built on top of TensorFlow. TFLearn makes it simpler to build networks just by defining the layers. It takes care of most of the details for you.
We'll start off by importing all the modules we'll need, then load and prepare the data.
End of explanation
"""
reviews = pd.read_csv('reviews.txt', header=None)
labels = pd.read_csv('labels.txt', header=None)
"""
Explanation: Preparing the data
Following along with Andrew, our goal here is to convert our reviews into word vectors. The word vectors will have elements representing words in the total vocabulary. If the second position represents the word 'the', for each review we'll count up the number of times 'the' appears in the text and set the second position to that count. I'll show you examples as we build the input data from the reviews data. Check out Andrew's notebook and video for more about this.
Read the data
Use the pandas library to read the reviews and positive/negative labels from comma-separated files. The data we're using has already been preprocessed a bit and we know it uses only lower case characters. If we were working from raw data, where we didn't know it was all lower case, we would want to add a step here to convert it. That's so we treat different variations of the same word, like The, the, and THE, all the same way.
End of explanation
"""
from collections import Counter
total_counts = Counter()
for _, row in reviews.iterrows():
    total_counts.update(row[0].split(' '))
print("Total words in data set: ", len(total_counts))
"""
Explanation: Counting word frequency
To start off we'll need to count how often each word appears in the data. We'll use this count to create a vocabulary we'll use to encode the review data. This resulting count is known as a bag of words. We'll use it to select our vocabulary and build the word vectors. You should have seen how to do this in Andrew's lesson. Try to implement it here using the Counter class.
Exercise: Create the bag of words from the reviews data and assign it to total_counts. The reviews are stored in the reviews Pandas DataFrame. If you want the reviews as a Numpy array, use reviews.values. You can iterate through the rows in the DataFrame with for idx, row in reviews.iterrows(): (documentation). When you break up the reviews into words, use .split(' ') instead of .split() so your results match ours.
End of explanation
"""
vocab = sorted(total_counts, key=total_counts.get, reverse=True)[:10000]
print(vocab[:60])
"""
Explanation: Let's keep the first 10000 most frequent words. As Andrew noted, most of the words in the vocabulary are rarely used so they will have little effect on our predictions. Below, we'll sort vocab by the count value and keep the 10000 most frequent words.
End of explanation
"""
print(vocab[-1], ': ', total_counts[vocab[-1]])
"""
Explanation: What's the last word in our vocabulary? We can use this to judge if 10000 is too few. If the last word is pretty common, we probably need to keep more words. 
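As an optional sanity check (this snippet is not in the original notebook), you could also measure how much of the corpus the kept words cover:
```
total_occurrences = sum(total_counts.values())
covered = sum(total_counts[word] for word in vocab)
print('Vocabulary covers {:.1%} of all word occurrences'.format(covered / total_occurrences))
```
A coverage well above 90% suggests the 10000-word cut-off loses very little information.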
End of explanation """ word2idx = {word: i for i, word in enumerate(vocab)} """ Explanation: The last word in our vocabulary shows up in 30 reviews out of 25000. I think it's fair to say this is a tiny proportion of reviews. We are probably fine with this number of words. Note: When you run, you may see a different word from the one shown above, but it will also have the value 30. That's because there are many words tied for that number of counts, and the Counter class does not guarantee which one will be returned in the case of a tie. Now for each review in the data, we'll make a word vector. First we need to make a mapping of word to index, pretty easy to do with a dictionary comprehension. Exercise: Create a dictionary called word2idx that maps each word in the vocabulary to an index. The first word in vocab has index 0, the second word has index 1, and so on. End of explanation """ def text_to_vector(text): word_vector = np.zeros(len(vocab), dtype=np.int_) for word in text.split(' '): idx = word2idx.get(word, None) if idx is None: continue else: # Originally += 1, turns out the existence of a word in the review is sufficient, # counting the number of times it appears just introduces lots of noise. word_vector[idx] = 1 return np.array(word_vector) """ Explanation: Text to vector function Now we can write a function that converts a some text to a word vector. The function will take a string of words as input and return a vector with the words counted up. Here's the general algorithm to do this: Initialize the word vector with np.zeros, it should be the length of the vocabulary. Split the input string of text into a list of words with .split(' '). Again, if you call .split() instead, you'll get slightly different results than what we show here. For each word in that list, increment the element in the index associated with that word, which you get from word2idx. Note: Since all words aren't in the vocab dictionary, you'll get a key error if you run into one of those words. You can use the .get method of the word2idx dictionary to specify a default returned value when you make a key error. For example, word2idx.get(word, None) returns None if word doesn't exist in the dictionary. End of explanation """ text_to_vector('The tea is for a party to celebrate ' 'the movie so she has no time for a cake')[:65] """ Explanation: If you do this right, the following code should return ``` text_to_vector('The tea is for a party to celebrate ' 'the movie so she has no time for a cake')[:65] array([0, 1, 0, 0, 2, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 2, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0]) ``` End of explanation """ word_vectors = np.zeros((len(reviews), len(vocab)), dtype=np.int_) for ii, (_, text) in enumerate(reviews.iterrows()): word_vectors[ii] = text_to_vector(text[0]) # Printing out the first 5 word vectors word_vectors[:5, :35] """ Explanation: Now, run through our entire review data set and convert each review to a word vector. 
End of explanation """ Y = (labels=='positive').astype(np.int_) records = len(labels) shuffle = np.arange(records) np.random.shuffle(shuffle) test_fraction = 0.9 train_split, test_split = shuffle[:int(records*test_fraction)], shuffle[int(records*test_fraction):] trainX, trainY = word_vectors[train_split,:], to_categorical(Y.values[train_split, 0], 2) testX, testY = word_vectors[test_split,:], to_categorical(Y.values[test_split, 0], 2) trainY """ Explanation: Train, Validation, Test sets Now that we have the word_vectors, we're ready to split our data into train, validation, and test sets. Remember that we train on the train data, use the validation data to set the hyperparameters, and at the very end measure the network performance on the test data. Here we're using the function to_categorical from TFLearn to reshape the target data so that we'll have two output units and can classify with a softmax activation function. We actually won't be creating the validation set here, TFLearn will do that for us later. End of explanation """ # Network building def build_model(): with tf.device("/gpu:0"): # This resets all parameters and variables, leave this here tf.reset_default_graph() net = tflearn.input_data([None, 10000]) net = tflearn.fully_connected(net, 200, activation='ReLU') net = tflearn.fully_connected(net, 25, activation='ReLU') net = tflearn.fully_connected(net, 2, activation='softmax') net = tflearn.regression(net, optimizer='sgd', learning_rate=0.1, loss='categorical_crossentropy') model = tflearn.DNN(net) return model """ Explanation: Building the network TFLearn lets you build the network by defining the layers. Input layer For the input layer, you just need to tell it how many units you have. For example, net = tflearn.input_data([None, 100]) would create a network with 100 input units. The first element in the list, None in this case, sets the batch size. Setting it to None here leaves it at the default batch size. The number of inputs to your network needs to match the size of your data. For this example, we're using 10000 element long vectors to encode our input data, so we need 10000 input units. Adding layers To add new hidden layers, you use net = tflearn.fully_connected(net, n_units, activation='ReLU') This adds a fully connected layer where every unit in the previous layer is connected to every unit in this layer. The first argument net is the network you created in the tflearn.input_data call. It's telling the network to use the output of the previous layer as the input to this layer. You can set the number of units in the layer with n_units, and set the activation function with the activation keyword. You can keep adding layers to your network by repeated calling net = tflearn.fully_connected(net, n_units). Output layer The last layer you add is used as the output layer. Therefore, you need to set the number of units to match the target data. In this case we are predicting two classes, positive or negative sentiment. You also need to set the activation function so it's appropriate for your model. Again, we're trying to predict if some input data belongs to one of two classes, so we should use softmax. net = tflearn.fully_connected(net, 2, activation='softmax') Training To set how you train the network, use net = tflearn.regression(net, optimizer='sgd', learning_rate=0.1, loss='categorical_crossentropy') Again, this is passing in the network you've been building. 
The keywords:
optimizer sets the training method, here stochastic gradient descent
learning_rate is the learning rate
loss determines how the network error is calculated. In this example, with the categorical cross-entropy.
Finally you put all this together to create the model with tflearn.DNN(net). So it ends up looking something like
net = tflearn.input_data([None, 10])                          # Input
net = tflearn.fully_connected(net, 5, activation='ReLU')      # Hidden
net = tflearn.fully_connected(net, 2, activation='softmax')   # Output
net = tflearn.regression(net, optimizer='sgd', learning_rate=0.1, loss='categorical_crossentropy')
model = tflearn.DNN(net)
Exercise: Below in the build_model() function, you'll put together the network using TFLearn. You get to choose how many layers to use, how many hidden units, etc.
End of explanation
"""
model = build_model()
"""
Explanation: Initializing the model
Next we need to call the build_model() function to actually build the model. In my solution I haven't included any arguments to the function, but you can add arguments so you can change parameters in the model if you want.
Note: You might get a bunch of warnings here. TFLearn uses a lot of deprecated code in TensorFlow. Hopefully it gets updated to the new TensorFlow version soon.
End of explanation
"""
# Training
model.fit(trainX, trainY, validation_set=0.1, show_metric=True, batch_size=128, n_epoch=100)
"""
Explanation: Training the network
Now that we've constructed the network, saved as the variable model, we can fit it to the data. Here we use the model.fit method. You pass in the training features trainX and the training targets trainY. Below I set validation_set=0.1 which reserves 10% of the data set as the validation set. You can also set the batch size and number of epochs with the batch_size and n_epoch keywords, respectively. Below is the code to fit the network to our word vectors.
You can rerun model.fit to train the network further if you think you can increase the validation accuracy. Remember, all hyperparameter adjustments must be done using the validation set. Only use the test set after you're completely done training the network.
End of explanation
"""
predictions = (np.array(model.predict(testX))[:,0] >= 0.5).astype(np.int_)
test_accuracy = np.mean(predictions == testY[:,0], axis=0)
print("Test accuracy: ", test_accuracy)
"""
Explanation: Testing
After you're satisfied with your hyperparameters, you can run the network on the test set to measure its performance. Remember, only do this after finalizing the hyperparameters.
End of explanation
"""
# Helper function that uses your model to predict sentiment
def test_sentence(sentence):
    positive_prob = model.predict([text_to_vector(sentence.lower())])[0][1]
    print('Sentence: {}'.format(sentence))
    print('P(positive) = {:.3f} :'.format(positive_prob),
          'Positive' if positive_prob > 0.5 else 'Negative')
sentence = "Mediocre film, but I really enjoyed the laughs and comedy, as low-brow as they were."
test_sentence(sentence)
sentence = "It's amazing anyone could be talented enough to make something this spectacularly awful"
test_sentence(sentence)
"""
Explanation: Try out your own text!
End of explanation
"""
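If you want to keep the trained network around, a possible extra step (not part of the original notebook; the filename is arbitrary) is to save the weights with TFLearn and restore them later into a freshly built graph:
# Save the trained weights, then restore them into the same graph structure.
model.save('sentiment_model.tfl')
model = build_model()
model.load('sentiment_model.tfl')
test_sentence("The plot was thin, but the performances carried it beautifully.")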
eyadsibai/rep
howto/01-howto-Classifiers.ipynb
apache-2.0
!cd toy_datasets; wget -O MiniBooNE_PID.txt -nc MiniBooNE_PID.txt https://archive.ics.uci.edu/ml/machine-learning-databases/00199/MiniBooNE_PID.txt import numpy, pandas from rep.utils import train_test_split from sklearn.metrics import roc_auc_score data = pandas.read_csv('toy_datasets/MiniBooNE_PID.txt', sep='\s*', skiprows=[0], header=None, engine='python') labels = pandas.read_csv('toy_datasets/MiniBooNE_PID.txt', sep=' ', nrows=1, header=None) labels = [1] * labels[1].values[0] + [0] * labels[2].values[0] data.columns = ['feature_{}'.format(key) for key in data.columns] """ Explanation: About This notebook demonstrates classifiers, which are provided by Reproducible experiment platform (REP) package. <br /> REP contains following classifiers * scikit-learn * TMVA * XGBoost Also classifiers from hep_ml (as any other sklearn-compatible classifiers may be used) In this notebook we show the most simple way to train classifier build predictions measure quality combine metaclassifiers Loading data download particle identification Data Set from UCI End of explanation """ data[:5] """ Explanation: First rows of our data End of explanation """ # Get train and test data train_data, test_data, train_labels, test_labels = train_test_split(data, labels, train_size=0.5) """ Explanation: Splitting into train and test End of explanation """ variables = list(data.columns[:26]) """ Explanation: Classifiers All classifiers inherit from sklearn.BaseEstimator and have the following methods: classifier.fit(X, y, sample_weight=None) - train classifier classifier.predict_proba(X) - return probabilities vector for all classes classifier.predict(X) - return predicted labels classifier.staged_predict_proba(X) - return probabilities after each iteration (not supported by TMVA) classifier.get_feature_importances() Here we use X to denote matrix with data of shape [n_samples, n_features], y is vector with labels (0 or 1) of shape [n_samples], <br /> sample_weight is vector with weights. Difference from default scikit-learn interface X should be* pandas.DataFrame, not numpy.array. <br /> Provided this, you'll be able to choose features used in training by setting e.g. features=['FlightTime', 'p'] in constructor. * it works fine with numpy.array as well, but in this case all the features will be used. Variables used in training End of explanation """ from rep.estimators import SklearnClassifier from sklearn.ensemble import GradientBoostingClassifier # Using gradient boosting with default settings sk = SklearnClassifier(GradientBoostingClassifier(), features=variables) # Training classifier sk.fit(train_data, train_labels) print('training complete') """ Explanation: Sklearn wrapper for scikit-learn classifiers. 
In this example we use GradientBoosting with default settings End of explanation """ # predict probabilities for each class prob = sk.predict_proba(test_data) print prob print 'ROC AUC', roc_auc_score(test_labels, prob[:, 1]) """ Explanation: Predicting probabilities, measuring the quality End of explanation """ sk.predict(test_data) sk.get_feature_importances() """ Explanation: Predictions of classes End of explanation """ from rep.estimators import TMVAClassifier print TMVAClassifier.__doc__ tmva = TMVAClassifier(method='kBDT', NTrees=50, Shrinkage=0.05, features=variables) tmva.fit(train_data, train_labels) print('training complete') """ Explanation: TMVA End of explanation """ # predict probabilities for each class prob = tmva.predict_proba(test_data) print prob print 'ROC AUC', roc_auc_score(test_labels, prob[:, 1]) # predict labels tmva.predict(test_data) """ Explanation: Predict probabilities and estimate quality End of explanation """ from rep.estimators import XGBoostClassifier print XGBoostClassifier.__doc__ # XGBoost with default parameters xgb = XGBoostClassifier(features=variables) xgb.fit(train_data, train_labels, sample_weight=numpy.ones(len(train_labels))) print('training complete') """ Explanation: XGBoost End of explanation """ prob = xgb.predict_proba(test_data) print 'ROC AUC:', roc_auc_score(test_labels, prob[:, 1]) """ Explanation: Predict probabilities and estimate quality End of explanation """ xgb.predict(test_data) xgb.get_feature_importances() """ Explanation: Predict labels End of explanation """ from sklearn.ensemble import AdaBoostClassifier # Construct AdaBoost with TMVA as base estimator base_tmva = TMVAClassifier(method='kBDT', NTrees=15, Shrinkage=0.05) ada_tmva = SklearnClassifier(AdaBoostClassifier(base_estimator=base_tmva, n_estimators=5), features=variables) ada_tmva.fit(train_data, train_labels) print('training complete') prob = ada_tmva.predict_proba(test_data) print 'AUC', roc_auc_score(test_labels, prob[:, 1]) """ Explanation: Advantages of common interface As one can see above, all the classifiers implement the same interface, this simplifies work, simplifies comparison of different classifiers, but this is not the only profit. Sklearn provides different tools to combine different classifiers and transformers. One of this tools is AdaBoost, which is abstract metaclassifier built on the top of some other classifier (usually, decision dree) Let's show that now you can run AdaBoost over classifiers from other libraries! <br /> (isn't boosting over neural network what you were dreaming of all your life?) AdaBoost over TMVA classifier End of explanation """ # Construct AdaBoost with xgboost base estimator base_xgb = XGBoostClassifier(n_estimators=50) # ada_xgb = SklearnClassifier(AdaBoostClassifier(base_estimator=base_xgb, n_estimators=1), features=variables) ada_xgb = AdaBoostClassifier(base_estimator=base_xgb, n_estimators=1) ada_xgb.fit(train_data[variables], train_labels) print('training complete!') # predict probabilities for each class prob = ada_xgb.predict_proba(test_data[variables]) print 'AUC', roc_auc_score(test_labels, prob[:, 1]) # predict probabilities for each class prob = ada_xgb.predict_proba(train_data[variables]) print 'AUC', roc_auc_score(train_labels, prob[:, 1]) """ Explanation: AdaBoost over XGBoost End of explanation """
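As a short wrap-up (not in the original notebook), the classifiers trained above can be compared side by side on the held-out data, since they all expose the same predict_proba interface:
for name, clf in [('sklearn GBDT', sk), ('TMVA BDT', tmva), ('XGBoost', xgb)]:
    p = clf.predict_proba(test_data)
    print name, 'AUC:', roc_auc_score(test_labels, p[:, 1])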
Danghor/Algorithms
Python/Chapter-04/Radix-Sort.ipynb
gpl-2.0
%run Counting-Sort.ipynb """ Explanation: Radix Sort As <em style="color:blue">radix sort</em> is based on <em style="color:blue">counting sort</em>, we have to start our implementation of radix sort by defining the function countingSort that we have already discussed previously. The easiest way to do this is by using the %run magic that Juypter notebooks provide. End of explanation """ def extractByte(n, k): return n >> (8 * (k-1)) & 0b1111_1111 n = 123456789 B = [extractByte(n, k) for k in [1, 2, 3, 4]] print(B) assert n == sum([B[k] * 256 ** k for k in [0, 1, 2, 3]]) """ Explanation: The function $\texttt{extractByte}(n, k)$ takes a natural number $n < 2^{32}$ and a number $k\in {1,2,3,4}$ as arguments. It returns the $k$-th byte of $n$. End of explanation """ def radixSort(L): L = [(n, 0) for n in L] for k in range(1, 4+1): L = [(n, extractByte(n, k)) for (n, _) in L] L = countingSort(L) return [n for (n, _) in L] """ Explanation: The function $\texttt{radixSort}(L)$ sorts a list $L$ of unsigned 32 bit integers and returns the sorted list. The idea is to sort these numbers by first sorting them with respect to their last byte, then to sort the list with respect to the second byte, then with respect to the third byte, and finally with respect to the most important byte. These four sorts are done using <em style="color:blue">counting sort</em>. The fact that <em style="color:blue">counting sort</em> is <em style="color:blue">stable</em> guarantees that when we sort with respect to the second byte, numbers that have the same second byte will still be sorted with respect to the first byte. End of explanation """ import random as rnd def demo(): L = [ rnd.randrange(1, 1000) for n in range(1, 16) ] print("L = ", L) S = radixSort(L) print("S = ", S) demo() def isOrdered(L): for i in range(len(L) - 1): assert L[i] <= L[i+1], f'L = {L}, i = {i}' from collections import Counter def sameElements(L, S): assert Counter(L) == Counter(S) """ Explanation: Testing End of explanation """ def testSort(n, k): for i in range(n): L = [ rnd.randrange(2**31) for x in range(k) ] oldL = L[:] L = radixSort(L) isOrdered(L) sameElements(oldL, L) print('.', end='') print() print("All tests successful!") %%time testSort(100, 20000) %%timeit k = 1_000_000 L = [ rnd.randrange(2*k) for x in range(k) ] S = radixSort(L) """ Explanation: The function $\texttt{testSort}(n, k)$ generates $n$ random lists of length $k$, sorts them, and checks whether the output is sorted and contains the same elements as the input. End of explanation """
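For a rough comparison (not part of the original notebook), the same benchmark can be run against Python's built-in sorted, which is implemented in C and serves as a useful baseline:
%%timeit k = 1_000_000
L = [ rnd.randrange(2*k) for x in range(k) ]
S = sorted(L)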
erdewit/ib_insync
notebooks/ordering.ipynb
bsd-2-clause
from ib_insync import *
util.startLoop()
ib = IB()
ib.connect('127.0.0.1', 7497, clientId=13)
# util.logToConsole()
"""
Explanation: Ordering
Warning: This notebook will place live orders
Use a paper trading account (during market hours).
End of explanation
"""
contract = Forex('EURUSD')
ib.qualifyContracts(contract)
order = LimitOrder('SELL', 20000, 1.11)
"""
Explanation: Create a contract and a limit order:
End of explanation
"""
trade = ib.placeOrder(contract, order)
"""
Explanation: placeOrder will place the order and return a Trade object right away (non-blocking):
End of explanation
"""
ib.sleep(1)
trade.log
"""
Explanation: trade contains the order and everything related to it, such as order status, fills and a log. It will be live updated with every status change or fill of the order.
End of explanation
"""
assert trade in ib.trades()
"""
Explanation: trade will also be available from ib.trades():
End of explanation
"""
assert order in ib.orders()
"""
Explanation: Likewise for order:
End of explanation
"""
limitOrder = LimitOrder('BUY', 20000, 0.05)
limitTrade = ib.placeOrder(contract, limitOrder)
limitTrade
"""
Explanation: Now let's create a limit order with an unrealistic limit:
End of explanation
"""
ib.sleep(1)
assert limitTrade.orderStatus.status == 'Submitted'
assert limitTrade in ib.openTrades()
"""
Explanation: status will change from "PendingSubmit" to "Submitted":
End of explanation
"""
limitOrder.lmtPrice = 0.10
ib.placeOrder(contract, limitOrder)
"""
Explanation: Let's modify the limit price and resubmit:
End of explanation
"""
ib.cancelOrder(limitOrder)
limitTrade.log
"""
Explanation: And now cancel it:
End of explanation
"""
%%time
order = MarketOrder('BUY', 100)
trade = ib.placeOrder(contract, order)
while not trade.isDone():
    ib.waitOnUpdate()
"""
Explanation: placeOrder is not blocking and will not wait on what happens with the order. To make the order placement blocking, that is to wait until the order is either filled or canceled, consider the following:
End of explanation
"""
ib.positions()
"""
Explanation: What are our positions?
End of explanation
"""
sum(fill.commissionReport.commission for fill in ib.fills())
"""
Explanation: What's the total of commissions paid today?
End of explanation
"""
order = MarketOrder('SELL', 20000)
ib.whatIfOrder(contract, order)
ib.disconnect()
"""
Explanation: whatIfOrder can be used to see the commission and the margin impact of an order without actually sending the order:
End of explanation
"""
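A possible final check (a sketch, not part of the original notebook, to be run while still connected, i.e. before ib.disconnect()) is to list today's trades with their status in one loop:
for t in ib.trades():
    print(t.contract.symbol, t.order.action, t.order.totalQuantity, t.orderStatus.status)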
zipeiyang/liupengyuan.github.io
chapter4/python爬虫入门.ipynb
mit
import requests
from bs4 import BeautifulSoup
import re
"""
Explanation: By liupengyuan[at]pku.edu.cn
Project: https://github.com/liupengyuan/
1. What is a crawler
In short, a crawler is a program/tool that can retrieve information (data) from the internet.
It usually does so by fetching web pages.
A web page is itself just a text file, except that this text file is marked up with a particular set of rules and symbols (HTML, the HyperText Markup Language); it is called a hypertext file, or simply the page source code.
Once a browser parses this text (loading images, videos and other resources from outside the page along the way), it becomes the web page we see when browsing.
A crawler therefore first has to obtain this text file and then parse it.
If the data we need is contained directly in the page text, we can extract it right away; if it lives outside the page, we obtain its address from the parsing result and download the data from there.
2. A minimal crawler
Here we use the excellent third-party Python library requests as an example, which describes itself as "the only Non-GMO HTTP library for Python, safe for human consumption".
HTTP (HyperText Transfer Protocol) is the most widely used network protocol on the internet; hypertext files must follow this protocol when they are transferred.
"Non-GMO" is the author's joke, meaning the library faithfully follows Python's design philosophy and is easy to use and understand.
End of explanation
"""
r = requests.get('https://github.com/liupengyuan/python_tutorial/blob/master/chapter1/0.md')
print(r.text[:1000])
"""
Explanation: Import the required modules
End of explanation
"""
r = requests.get('http://www.163.com/')
print(r.text[:1000])
"""
Explanation: import requests first imports the requests library.
r = requests.get('https://github.com/liupengyuan/python_tutorial/blob/master/chapter1/0.md') calls the get method of requests, sending a GET request to the URL https://github.com/liupengyuan/python_tutorial/blob/master/chapter1/0.md. The method returns a Response object r that contains all the information of that page.
print(r.text[:1000]) uses the text attribute (a string) of r to print the page content; since it is long, only the first 1000 characters are shown for now.
Open https://github.com/liupengyuan/python_tutorial/blob/master/chapter1/0.md in a browser, right-click in the main text area and choose "View page source"; you will see that the code example above has indeed fetched/crawled the text of this page.
With that, a minimal crawler is complete.
3. Basics of static, targeted crawlers
The crawlers in this section are given the addresses of the pages to fetch up front (targeted), and the content to extract is present directly in the page source (static), hence the name static targeted crawler.
Because we will use the Inspect feature of the Chrome browser, please download and install Chrome.
3.1 Scraping all news items from the NetEase homepage (www.163.com)
Open the NetEase homepage, right-click anywhere on the page and choose Inspect in Chrome to enter developer mode.
In developer mode, the developer tools panel appears on the right; the Elements tab at the top shows the page's HTML markup (nicely indented).
Recommended tutorial: http://www.w3school.com.cn/html/index.asp — skim it to pick up some HTML basics.
End of explanation
"""
p = r'<li>(.+)?</li>'
contents = re.findall(p, r.text)
print('\n'.join(contents[:5]))
"""
Explanation: Right-click on a news headline and choose Inspect to locate, in the Elements tab, where that headline and its link sit in the HTML. The markup looks roughly like this:
```
<li class="cm_fb">
    <a href="http://news.163.com/xxxxxx.html">yyyyyyyyy</a> ::after
</li>
```
- xxxxxx and yyyyyyyyy differ from headline to headline. The former is the news link and the latter is the news title, so these two items are what we are interested in and want to extract.
- Browsing further headlines in the Elements tab, you can see that every news item sits between a <li> and </li> tag pair, with no line breaks in between.
End of explanation
"""
soup = BeautifulSoup(r.text, 'html.parser')
contents = soup.find_all('ul', attrs='cm_ul_round')
print('type is:', type(contents))
contents[:1]
"""
Explanation: p = r'<li>(.+)?</li>' builds a regular expression that extracts the content between <li> and </li> (excluding any line breaks in between).
re.findall(p, r.text) collects everything in r.text that matches this regular expression into a list.
Printing the first matches shows that some of them are not news headlines with links; not everything between <li> and </li> is the content we are after.
For regular expressions, see the quick regular-expression tutorial in https://github.com/liupengyuan/python_tutorial/blob/master/chapter3. One could craft further regular expressions to parse the page and extract the target content.
Here we instead introduce an excellent third-party HTML parsing package: BeautifulSoup. It ships with Anaconda under the name bs4, and we can use it (combined with regular expressions when necessary) to parse web pages.
The package was already imported at the top via from bs4 import BeautifulSoup.
End of explanation
"""
for line in contents:
    url_titles = line.find_all('a')
    for url_title in url_titles:
        url = url_title.get('href')
        title = url_title.string
        print(type(url_title), url, title)
        if input()=='b':
            break
"""
Explanation: The function BeautifulSoup(r.text, 'html.parser') returns a BeautifulSoup object. The first argument is the page text to parse; the second selects the parser — for now we use Python's built-in 'html.parser'.
soup.find_all('ul', attrs='cm_ul_round') is a method of the BeautifulSoup object that returns a ResultSet object with everything between the specified tags. The first argument is the tag, the second the tag's attribute. A little inspection of the headlines in the Elements tab shows that all news titles and links sit inside <ul class="cm_ul_round"> ... </ul> tag pairs, and that content other than news headlines does not appear inside <ul> tags with this class.
End of explanation
"""
url_title_dict = {}
for line in contents:
    url_titles = line.find_all('a')
    for url_title in url_titles:
        url = url_title.get('href')
        title = url_title.string
        if url and url.endswith(r'.html'):
            try:
                url_title_dict[url] = title
            except KeyError:
                continue
"""
Explanation: For now we only need the titles on the homepage and the corresponding news-page links, so we keep only the entries whose link ends in .html.
All results are stored in the dictionary url_title_dict.
We still have to fetch the news text behind each link.
End of explanation
"""
title_text_dict = {}
for url, title in url_title_dict.items():
    r = requests.get(url)
    soup = BeautifulSoup(r.text, 'html.parser')
    html = soup.find('div', attrs = 'post_text')
    if html:
        passages = html.find_all('p')
        text = ''
        for passage in passages:
            if passage.string:
                text += passage.string
        title_text_dict[url] = title, text
"""
Explanation: Clicking through to a news page, you can see that the whole article body sits inside <div class='post_text'>...</div>.
The find() method of the soup object returns the content of that tag, which we store in the variable html.
The body is split across <p>...</p> tag pairs (p for passage), so we use find_all() to extract all the paragraphs into passages.
We then iterate over passages; each passage is a Tag object, and its string attribute gives the text of that tag.
This leaves us with a dictionary whose keys are the news URLs and whose values are the corresponding title and body text. We can now put the pieces together into one program and save the scraped content to a file.
End of explanation
"""
import requests
from bs4 import BeautifulSoup
import re
def get_163_home_news(filename):
    try:
        r = requests.get(r'http://www.163.com')
    except:
        print('Can not get the page.\n')
        return False
    soup = BeautifulSoup(r.text, 'html.parser')
    contents = soup.find_all('ul', attrs='cm_ul_round')
    error_url = []
    f_err = open('error_urls', 'w', encoding = 'utf-8')
    fh = open(filename, 'w', encoding = 'utf-8')
    for line in contents:
        url_titles = line.find_all('a')
        for url_title in url_titles:
            url = url_title.get('href')
            title = url_title.string
            if url and url.endswith(r'.html'):
                try:
                    r = requests.get(url)
                except:
                    print('Error in getting:', url)
                    error_url.append(url)
                    continue
                soup = BeautifulSoup(r.text, 'html.parser')
                html = soup.find('div', attrs = 'post_text')
                if html:
                    passages = html.find_all('p')
                    text = ''
                    for passage in passages:
                        if passage.string:
                            text += passage.string
                    fh.write('{}\n{}\n{}\n'.format(url, title, text))
    f_err.write('\n'.join(error_url))
    fh.close()
    f_err.close()
    return True
filename = r'news_163_home.text'
get_163_home_news(filename)
"""
Explanation: 3.2 Scraping the bilingual example sentences for a query word from Youdao Dictionary
Analysis:
Querying a word in Youdao Dictionary (dict.youdao.com) shows that, for any word xxxxx, the page with its translation information has the URL http://dict.youdao.com/w/xxxxx.
Since we want all bilingual example sentences for the word xxxxx, we have to click "more bilingual examples" below the last example shown at the bottom of the page, which changes the page URL to http://dict.youdao.com/example/blng/eng/xxxxx/#keyfrom=dict.main.moreblng. All bilingual examples are on that page, so that is the only URL we need to crawl and analyse.
Query a Chinese word, select the first bilingual example, right-click and choose Inspect to enter Chrome developer mode; you will find that all examples sit between <ul class='ol'>...</ul> tag pairs.
Each example sentence sits between a <p>...</p> tag pair.
End of explanation
"""
import requests
from bs4 import BeautifulSoup
def get_word_sents(word):
    url = r'http://dict.youdao.com/example/blng/eng/{}/#keyfrom=dict.main.moreblng'.format(word)
    try:
        r = requests.get(url)
    except:
        print('Can not get the page.\n')
        return False
    word_sents = []
    soup = BeautifulSoup(r.text, 'html.parser')
    contents = soup.find('ul', attrs = 'ol')
    sents = contents.find_all('p')
    for sent in sents:
        word_sents.append(sent.text.replace('\n','')+'\n')
    return word_sents
"""
Explanation: The whole procedure is similar to the NetEase news scraping above.
word_sents.append(sent.text.replace('\n','')+'\n') is there so that the file we write out is tidier.
End of explanation
"""
import os
test_word = '爬虫'
sents = get_word_sents(test_word)
with open(test_word+'.txt', 'w', encoding = 'utf-8') as f:
    f.writelines(sents)
"""
Explanation: End of explanation
"""
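A possible extension (not part of the original tutorial; the word list below is just an illustration) is to look up several words in one batch and write each word's bilingual example sentences to its own file:
words = ['爬虫', '数据', '网络']
for w in words:
    sents = get_word_sents(w)
    if sents:
        with open(w + '.txt', 'w', encoding='utf-8') as f:
            f.writelines(sents)
        print('Saved {} example sentences for {}'.format(len(sents), w))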
gdsfactory/gdsfactory
docs/notebooks/01_references.ipynb
mit
import numpy as np import gdsfactory as gf gf.config.set_plot_options(show_subports=False) # Create a blank Component p = gf.Component("component_with_polygon") # Add a polygon xpts = [0, 0, 5, 6, 9, 12] ypts = [0, 1, 1, 2, 2, 0] p.add_polygon([xpts, ypts], layer=(2, 0)) # plot the Component with the polygon in it p """ Explanation: References and ports GDS allows defining the component once in memory and reference to that structure in other components. As you build complex components you can include references to other simpler components. Adding a reference is like having a pointer to a component. The GDSII specification allows the use of references, and similarly gdsfactory uses them (with the add_ref() function). So what is a reference? Simply put: A reference does not contain any geometry. It only points to an existing geometry. Say you have a ridiculously large polygon with 100 billion vertices that you call BigPolygon. It's huge, and you need to use it in your design 250 times. Well, a single copy of BigPolygon takes up 1MB of memory, so you don't want to make 250 copies of it. You can instead references the polygon 250 times. Each reference only uses a few bytes of memory -- it only needs to know the memory address of BigPolygon and a few other things. This way, you can keep one copy of BigPolygon and use it again and again. Let's start by making a blank geometry (Component) then adding a single polygon to it. End of explanation """ c = gf.Component("Component_with_references") # Create a new blank Component poly_ref = c.add_ref(p) # Reference the Component "p" that has the polygon in it c """ Explanation: Now, you want to reuse this polygon repeatedly without creating multiple copies of it. To do so, you need to make a second blank Component, this time called c. In this new Component you reference our Component p which contains our polygon. End of explanation """ poly_ref2 = c.add_ref(p) # Reference the Component "p" that has the polygon in it poly_ref3 = c.add_ref(p) # Reference the Component "p" that has the polygon in it c """ Explanation: you just made a copy of your polygon -- but remember, you didn't actually make a second polygon, you just made a reference (aka pointer) to the original polygon. Let's add two more references to c: End of explanation """ poly_ref2.rotate(15) # Rotate the 2nd reference we made 15 degrees poly_ref3.rotate(30) # Rotate the 3rd reference we made 30 degrees c """ Explanation: Now you have 3x polygons all on top of each other. Again, this would appear useless, except that you can manipulate each reference indepedently. Notice that when you called c.add_ref(p) above, we saved the result to a new variable each time (poly_ref, poly_ref2, and poly_ref3)? You can use those variables to reposition the references. End of explanation """ # Add a 2nd polygon to "p" xpts = [14, 14, 16, 16] ypts = [0, 2, 2, 0] p.add_polygon([xpts, ypts], layer=(1, 0)) p """ Explanation: Now you're getting somewhere! You've only had to make the polygon once, but you're able to reuse it as many times as you want. Modifying the referenced geometry What happens when you change the original geometry that the reference points to? In your case, your references in c all point to the Component p that with the original polygon. Let's try adding a second polygon to p. First you add the second polygon and make sure P looks like you expect: End of explanation """ c """ Explanation: That looks good. Now let's find out what happened to c that contains the three references. 
Keep in mind that you have not modified c or executed any functions/operations on c -- all you have done is modify p. End of explanation """ c2 = gf.Component() # Create a new blank Component d_ref1 = c2.add_ref(c) # Reference the Component "c" that 3 references in it d_ref2 = c2 << c # Use the "<<" operator to create a 2nd reference to c d_ref3 = c2 << c # Use the "<<" operator to create a 3rd reference to c d_ref1.move([20, 0]) d_ref2.move([40, 0]) c2 """ Explanation: When you modify the original geometry, all of the references automatically reflect the modifications. This is very powerful, because you can use this to make very complicated designs from relatively simple elements in a computation- and memory-efficienct way. Let's try making references a level deeper by referencing c. Note here we use the &lt;&lt; operator to add the references -- this is just shorthand, and is exactly equivalent to using add_ref() End of explanation """ c = gf.Component("reference_sample") w = gf.components.straight(width=0.6) wr = w.ref() c.add(wr) c """ Explanation: As you've seen you have two ways to add a reference to our component: create the reference and add it to the component End of explanation """ c = gf.Component("reference_sample_shorter_syntax") wr = c << gf.components.straight(width=0.6) c """ Explanation: or do it in a single line End of explanation """ import gdsfactory as gf c = gf.Component("two_references") wr1 = c << gf.components.straight(width=0.6) wr2 = c << gf.components.straight(width=0.6) wr2.movey(10) c.add_ports(wr1.get_ports_list(), prefix="top_") c.add_ports(wr2.get_ports_list(), prefix="bot_") c.ports """ Explanation: in both cases you can move the reference wr after created End of explanation """ c.auto_rename_ports() c.ports c """ Explanation: You can also auto_rename ports using gdsfactory default convention, where ports are numbered clockwise starting from the bottom left End of explanation """ c3 = gf.Component() # Create a new blank Component aref = c3.add_array( c, columns=6, rows=3, spacing=[20, 15] ) # Reference the Component "c" 3 references in it with a 3 rows, 6 columns array c3 """ Explanation: Arrays of references In GDS, there's a type of structure called a "CellArray" which takes a cell and repeats it NxM times on a fixed grid spacing. For convenience, Component includes this functionality with the add_array() function. Note that CellArrays are not compatible with ports (since there is no way to access/modify individual elements in a GDS cellarray) gdsfactory also provides with more flexible arrangement options if desired, see for example grid() and packer(). As well as gf.components.array Let's make a new Component and put a big array of our Component c in it: End of explanation """ c4 = gf.Component() # Create a new blank Component aref = c4 << gf.components.array(component=c, columns=3, rows=2) c4.add_ports(aref.get_ports_list()) c4 gf.components.array? """ Explanation: CellArrays don't have ports and there is no way to access/modify individual elements in a GDS cellarray. 
gdsfactory provides you with similar functions in gf.components.array and gf.components.array_2d End of explanation """ import gdsfactory as gf @gf.cell def dbr_period(w1=0.5, w2=0.6, l1=0.2, l2=0.4, straight=gf.components.straight): """Return one DBR period.""" c = gf.Component() r1 = c << straight(length=l1, width=w1) r2 = c << straight(length=l2, width=w2) r2.connect(port="o1", destination=r1.ports["o2"]) c.add_port("o1", port=r1.ports["o1"]) c.add_port("o2", port=r2.ports["o2"]) return c l1 = 0.2 l2 = 0.4 n = 3 period = dbr_period(l1=l1, l2=l2) period dbr = gf.Component("DBR") dbr.add_array(period, columns=n, rows=1, spacing=(l1 + l2, 100)) dbr """ Explanation: You can also create an array of references for periodic structures. Lets create a Distributed Bragg Reflector End of explanation """ p0 = dbr.add_port("o1", port=period.ports["o1"]) p1 = dbr.add_port("o2", port=period.ports["o2"]) p1.midpoint = [(l1 + l2) * n, 0] dbr """ Explanation: Finally we need to add ports to the new component End of explanation """ bend = gf.components.bend_circular() bend c = gf.Component("sample_reference_connect") mmi = c << gf.components.mmi1x2() b = c << gf.components.bend_circular() b.connect("o1", destination=mmi.ports["o2"]) c.add_port("o1", port=mmi.ports["o1"]) c.add_port("o2", port=b.ports["o2"]) c.add_port("o3", port=mmi.ports["o3"]) c """ Explanation: Connect references We have seen that once you create a reference you can manipulate the reference to move it to a location. Here we are going to connect that reference to a port. Remeber that we follow that a certain reference source connects to a destination port End of explanation """ import gdsfactory as gf size = 4 c = gf.components.nxn(west=2, south=2, north=2, east=2, xsize=size, ysize=size) c c = gf.components.straight_heater_metal(length=30) c c.ports """ Explanation: Port naming You have the freedom to name the ports as you want, and you can use gf.port.auto_rename_ports(prefix='o') to rename them later on. Here is the default naming convention. Ports are numbered clock-wise starting from the bottom left corner Optical ports have o prefix and Electrical ports e prefix The port naming comes in most cases from the gdsfactory.cross_section. 
For example gdsfactory.cross_section.strip has ports o1 for input and o2 for output gdsfactory.cross_section.metal1 has ports e1 for input and e2 for output End of explanation """ c.get_ports_dict(layer=(1, 0)) """ Explanation: You can get the optical ports by layer End of explanation """ c.get_ports_dict(width=0.5) c0 = gf.components.straight_heater_metal() c0.ports c1 = c0.copy() c1.auto_rename_ports_layer_orientation() c1.ports c2 = c0.copy() c2.auto_rename_ports() c2.ports """ Explanation: or by width End of explanation """ import gdsfactory as gf c = gf.Component("demo_ports") nxn = gf.components.nxn(west=2, north=2, east=2, south=2, xsize=4, ysize=4) ref = c.add_ref(nxn) c.add_ports(ref.ports) c ref.get_ports_list() # by default returns ports clockwise starting from bottom left west facing port c.auto_rename_ports() c """ Explanation: You can also rename them with a different port naming convention prefix: add e for electrical o for optical clockwise counter-clockwise orientation E East, W West, N North, S South Here is the default one we use (clockwise starting from bottom left west facing port) ``` 3 4 || 2 -| |- 5 | | 1 -|____|- 6 | | 8 7 ``` End of explanation """ c.auto_rename_ports_counter_clockwise() c c.get_ports_list(clockwise=False) c.ports_layer c.port_by_orientation_cw("W0") c.port_by_orientation_ccw("W1") """ Explanation: You can also get the ports counter-clockwise ``` 4 3 || 5 -| |- 2 | | 6 -|____|- 1 | | 7 8 ``` End of explanation """ import gdsfactory as gf nxn = gf.components.nxn( west=2, north=2, east=2, south=2, cross_section=gf.cross_section.strip, xsize=4, ysize=4, ) c = gf.components.extension.extend_ports(component=nxn, orientation=0) c c.ports """ Explanation: Lets extend the East facing ports (orientation = 0 deg) End of explanation """ gf.components.mmi1x2(decorator=gf.add_pins.add_pins) gf.components.mmi1x2(decorator=gf.add_pins.add_pins_triangle) """ Explanation: pins You can add pins (port markers) to each port. Each foundry PDK does this differently, so gdsfactory supports all of them. square with port inside the component square centered (half inside, half outside component) triangular path (SiEPIC) by default Component.show() will add triangular pins, so you can see the direction of the port in Klayout. End of explanation """ import gdsfactory as gf bend180 = gf.components.bend_circular180() wg_pin = gf.components.straight_pin(length=40) wg = gf.components.straight() # Define a map between symbols and (component, input port, output port) symbol_to_component = { "D": (bend180, "o1", "o2"), "C": (bend180, "o2", "o1"), "P": (wg_pin, "o1", "o2"), "-": (wg, "o1", "o2"), } # Generate a sequence # This is simply a chain of characters. Each of them represents a component # with a given input and and a given output sequence = "DC-P-P-P-P-CD" component = gf.components.component_sequence( sequence=sequence, symbol_to_component=symbol_to_component ) component.name = "component_sequence" component """ Explanation: component_sequence When you have repetitive connections you can describe the connectivity as an ASCII map End of explanation """
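As a brief recap (this example is not in the original notebook; the component and variable names are made up), the reference and connect machinery introduced above can be combined to chain components through their named ports:
c5 = gf.Component("mmi_with_bend_and_straight")
mmi = c5 << gf.components.mmi1x2()
bend = c5 << gf.components.bend_circular()
wg = c5 << gf.components.straight()
bend.connect("o1", destination=mmi.ports["o2"])
wg.connect("o1", destination=mmi.ports["o3"])
c5.add_port("o1", port=mmi.ports["o1"])
c5.add_port("o2", port=bend.ports["o2"])
c5.add_port("o3", port=wg.ports["o2"])
c5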
eaton-lab/toytree
docs/6-treenodes.ipynb
bsd-3-clause
import toytree import toyplot import numpy as np # generate a random tree tre = toytree.rtree.unittree(ntips=10, seed=12345) """ Explanation: TreeNode objects The .treenode attribute of ToyTrees allows users to access the underlying TreeNode structure directly. This is where you can traverse the tree and query the parent/child relationships of nodes. While this is used extensively within the code of toytree, most users will likely not need to interact with TreeNodes in order do most things they want toytree for (i.e., drawing). However, for power users, the TreeNode structure of toytrees provides a lot of additional functionality especially for doing scientific computation and research on trees. The TreeNode object in toytree is a modified fork of the TreeNode in ete3. Thus, you can read the very detailed ete documentation if you want a detailed understanding of the object. End of explanation """ # the .treenode attribute of the ToyTree returns its root TreeNode tre.treenode # the .idx_dict of a toytree makes TreeNodes accessible by index tre.idx_dict """ Explanation: TreeNode objects are always nested inside of ToyTree objects, and accessed from ToyTrees. When you use .treenode to access a TreeNode from a ToyTree you are actually accessing the top level node of the tree structure, the root. The root TreeNode is connected to every other TreeNode in the tree, and together they describe the tree structure. End of explanation """ print('levelorder:', [node.idx for node in tre.treenode.traverse("levelorder")]) print('preorder: ', [node.idx for node in tre.treenode.traverse("preorder")]) print('postorder: ', [node.idx for node in tre.treenode.traverse("postorder")]) tre.draw(node_labels=True, node_sizes=16); """ Explanation: Traversing TreeNodes To traverse a tree means to move from node to node to visit every node of the tree. In this case, we move from TreeNode to TreeNode. Depending on your reason for traversing the tree, the order in which nodes are visited may be arbitrary, or, it may actually be very important. For example, if you wish to calculate some new value on a node that depends on the values of its children, then you will want to visit the child nodes before you visit their parents. TreeNodes can be traversed in three ways. Below I print the order that nodes are visited for each. You can see the node index labels plotted on the tree which toytree uses to order nodes for plotting. End of explanation """ # traverse the tree and access node attributes for node in tre.treenode.traverse(strategy="levelorder"): print("{:<5} {:<5} {:<5} {:<5}".format( node.idx, node.name, node.is_leaf(), node.is_root() ) ) """ Explanation: TreeNodes have a large number of attributes and functions available to them which you can explore using tab-completion in a notebook and from the ete3 tutorial. In general, only advanced users will need to access attributes of the TreeNodes directly. For example, it is easier to access node idx and name labels from ToyTrees than from TreeNodes, since ToyTrees will return the values in the order they will be plotted. End of explanation """ # see available features on a ToyTree tre.features """ Explanation: Adding features to TreeNodes For the purposes of plotting, there are cases where accessing TreeNode attributes can be particularly powerful. For example, when you want to build a list of values for plotting that are based on the tree structure itself (number of children, edge length, is_root, etc.). 
You can traverse through the tree and calculate these attributes for each node. When doing so, I have a recommended best practice that once again is intended to help users avoid accidentally plotting values in an incorrect order. This recommended practice is to add new features to the TreeNodes by traversing the tree, but then to retrieve and plot the features from the TreeNodes using ToyTree, since ToyTrees are the objects that organize the coordinates for plotting. End of explanation """ # set a feature a few nodes with a new name tre = tre.set_node_values( feature="name", values={0: 'tip-0', 1: 'tip-1', 2: 'tip-2'}, ) # set a feature to every node of a random integer in 1-5 tre = tre.set_node_values( feature="randomint", values={idx: np.random.randint(1, 5) for idx in tre.idx_dict}, ) """ Explanation: Let's say we wanted to plot a value on each node of a toytree. You can use the toytree function .set_node_values() to set a value to each node. This takes the feature name, a dictionary mapping values to idx labels, and optionally a default value that is assigned to all other nodes. You can modify existing features or set new features. End of explanation """ # set a feature to every node for the number of descendants tre = tre.set_node_values( feature="ndesc", values={ idx: len(node.get_leaves()) for (idx, node) in tre.idx_dict.items() } ) """ Explanation: Another potentially useful 'feature' to access includes statistics about the tree. For example, we may want to measure the number of extant descendants of each node on a tree. Such things can be measured directly from TreeNode objects. Below I use get_leaves() as an example. You can see the ete3 docs for more info on TreeNode functions and attributes. End of explanation """ # add a new feature to every node for node in tre.treenode.traverse(): node.add_feature("ndesc", len(node.get_leaves())) """ Explanation: The set_node_values() function of toytrees operates similarly to the loop below which visits each TreeNode of the tree and adds a feature. The .traverse() function of treenodes is convenient for accessing all nodes. End of explanation """ # ndesc is now an available feature alongside the defaults tre.features # it can be accessed from the ToyTree object using .get_node_values() tre.get_node_values('ndesc', True, True) # and can be accessed by shortcut using just the feature name to 'node_labels' tre.draw(node_labels=("ndesc", 1, 0), node_sizes=15); """ Explanation: Modifying features of TreeNodes Note: Use caution when modifying features of TreeNode objects because you can easily mess up the data that toytree needs in order to correctly plot trees and orient nodes, and tips, etc. This is why interacting with TreeNode objects directly should be considered an advanced method for toytree users. In contrast to ToyTree functions, which do not modify the tree structure in place, but instead return a copy, modification to TreeNodes do occur in place and therefore effect the current tree. Be aware that if you modify the parent/child relationships in the TreeNode it will change the tree. Similarly, if you change the .dist or .idx values of nodes it will effect the edge lengths and the order in which nodes are plotted. Accessing features from ToyTrees The recommended workflow for adding features to TreeNodes and including them in toytree drawings is to use ToyTrees to retrieve the features, since ToyTree ensure the correct order. 
When you add a new feature to TreeNodes it can then be accessed by ToyTrees just like other default features: "height", "idx", "name", etc. You can use .get_node_values() to retrive them in the proper order, and to censor values for the root or tips if wanted. This also allows you to further build color mappings based on these values, calculate further statistics, etc. End of explanation """ # traverse the tree and modify nodes (add new 'color' feature) for node in tre.treenode.traverse(): if node.is_leaf(): node.add_feature('color', toytree.colors[1]) else: node.add_feature('color', toytree.colors[2]) # store color list with values for tips and root colors = tre.get_node_values('color', show_root=1, show_tips=1) # draw tree with node colors tre.draw(node_labels=False, node_colors=colors, node_sizes=15); """ Explanation: Here is another example where color values are stored on TreeNodes and then retrieved from the ToyTree, and then used as draw argument to color nodes based on their TreeNode attribute. The nodes are colored based on whether the TreeNode was True or False for the .is_leaf(). We use the default color palette of toytree accessed from toytree.colors. End of explanation """
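One further illustration (not in the original notebook; the size mapping is an arbitrary choice) is to reuse the 'ndesc' feature added earlier to scale the node markers, again retrieving the values through the ToyTree so that the plotting order is correct:
ndesc = tre.get_node_values('ndesc', show_root=True, show_tips=True)
sizes = [8 + 2 * int(n) for n in ndesc]
tre.draw(node_labels=("ndesc", 1, 1), node_sizes=sizes);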
fweik/espresso
doc/tutorials/ferrofluid/ferrofluid_part1.ipynb
gpl-3.0
import espressomd
espressomd.assert_features('DIPOLES', 'LENNARD_JONES')
from espressomd.magnetostatics import DipolarP3M
from espressomd.magnetostatic_extensions import DLC
from espressomd.cluster_analysis import ClusterStructure
from espressomd.pair_criteria import DistanceCriterion
import numpy as np
"""
Explanation: Ferrofluid - Part 1
Table of Contents
Introduction
The Model
Structure of this tutorial
Compiling ESPResSo for this Tutorial
A Monolayer-Ferrofluid System in ESPResSo
Setup
Sampling
Sampling with animation
Sampling without animation
Cluster distribution
Introduction
Ferrofluids are colloidal suspensions of ferromagnetic single-domain particles in a liquid carrier. As the single particles contain only one magnetic domain, they can be seen as small permanent magnets. To prevent agglomeration of the particles, due to van-der-Waals or magnetic attraction, they are usually sterically or electrostatically stabilized (see <a href='#fig_1'>figure 1</a>). The former is achieved by adsorption of long chain molecules onto the particle surface, the latter by adsorption of charged coating particles. The size of the ferromagnetic particles is in the region of 10 nm. With the surfactant layer added they can reach a size of a few hundred nanometers. Keep in mind that if we refer to the particle diameter $\sigma$ we mean the diameter of the magnetic core plus two times the thickness of the surfactant layer. Some of the liquid properties, like the viscosity, the phase behavior or the optical birefringence, can be altered via an external magnetic field, or the fluid can simply be guided by such a field. Thus ferrofluids possess a wide range of biomedical applications like magnetic drug targeting or magnetic thermoablation and technical applications like fine positioning systems or adaptive bearings and dampers. In <a href='#fig_2'>figure 2</a> a picture of a ferrofluid exposed to the magnetic field of a permanent magnet is shown. The famous energy-minimizing thorn-like surface is clearly visible.
<a id='fig_1'></a><figure>
<img src="figures/Electro-Steric_Stabilization.jpg" style="float: center; width: 49%">
<center>
<figcaption>Figure 1: Schematic representation of electrostatic stabilization (picture top) and steric stabilization (picture bottom) <a href='#[3]'>[3]</a></figcaption>
</center>
</figure>
<a id='fig_2'></a><figure>
<img src='figures/Ferrofluid_Magnet_under_glass_edit.jpg' alt='ferrofluid on glass plate under which a strong magnet is placed' style='width: 600px;'/>
<center>
<figcaption>Figure 2: Real Ferrofluid exposed to an external magnetic field (neodymium magnet) <a href='#[4]'>[4]</a></figcaption>
</center>
</figure>
The Model
For simplicity in this tutorial we simulate spherical particles in a monodisperse ferrofluid system, which means all particles have the same diameter $\sigma$ and dipole moment $\mu$. The point dipole moment is placed at the center of the particles and is constant both in magnitude and direction (in the coordinate system of the particle). This can be justified as the Néel relaxation times are usually negligible for the usual sizes of ferrofluid particles. 
Thus the magnetic interaction potential between two single particles is the dipole-dipole interaction potential which reads \begin{equation} u_{\text{DD}}(\vec{r}_{ij}, \vec{\mu}_i, \vec{\mu}_j) = \gamma \left(\frac{\vec{\mu}_i \cdot \vec{\mu}_j}{r_{ij}^3} - 3\frac{(\vec{\mu}_i \cdot \vec{r}_{ij}) \cdot (\vec{\mu}_j \cdot \vec{r}_{ij})}{r_{ij}^5}\right) \end{equation} with $\gamma = \frac{\mu_0}{4 \pi}$ and $\mu_0$ the vacuum permeability. For the steric interaction in this tutorial we use the purely repulsive Weeks-Chandler-Andersen (WCA) potential which is a Lennard-Jones potential with cut-off radius $r_{\text{cut}}$ at the minimum of the potential $r_{\text{cut}} = r_{\text{min}} = 2^{\frac{1}{6}}\cdot \sigma$ and shifted by $\varepsilon_{ij}$ such that the potential is continuous at the cut-off radius. Thus the potential has the shape \begin{equation} u_{\text{sr}}^{\text{WCA}}(r_{ij}) = \left\{ \begin{array}{ll} 4\varepsilon_{ij}\left[ \left( \frac{\sigma}{r_{ij}} \right)^{12} - \left( \frac{\sigma}{r_{ij}} \right)^6 \right] + \varepsilon_{ij} & r_{ij} < r_{\text{cut}} \\ 0 & r_{ij} \geq r_{\text{cut}} \\ \end{array} \right. \end{equation} where $r_{ij}$ are the distances between two particles. The purely repulsive character of the potential can be justified by the fact that the particles in real ferrofluids are sterically or electrostatically stabilized against agglomeration. The whole interaction potential reads \begin{equation} u(\vec{r}_{ij}, \vec{\mu}_i, \vec{\mu}_j) = u_{\text{sr}}(\vec{r}_{ij}) + u_{\text{DD}}(\vec{r}_{ij}, \vec{\mu}_i, \vec{\mu}_j) \end{equation} The liquid carrier of the system is simulated through a Langevin thermostat. For ferrofluid systems there are three important parameters. The first is the volume fraction in three dimensions or the area fraction in two dimensions or quasi two dimensions. The second is the dipolar interaction parameter $\lambda$ \begin{equation} \lambda = \frac{\tilde{u}_{\text{DD}}}{u_\mathrm{T}} = \gamma \frac{\mu^2}{k_{\text{B}}T\sigma^3} \end{equation} where $u_\mathrm{T} = k_{\text{B}}T$ is the thermal energy and $\tilde{u}_{\text{DD}}$ is the absolute value of the dipole-dipole interaction energy at close contact (cc) and head-to-tail configuration (htt) (see <a href='#fig_4'>figure 4</a>) per particle, i.e. in formulas it reads \begin{equation} \tilde{u}_{\text{DD}} = \frac{ \left| u_{\text{DD}}^{\text{htt, cc}} \right| }{2} \end{equation} The third parameter takes a possible external magnetic field into account and is called Langevin parameter $\alpha$. It is the ratio between the energy of a dipole moment in the external magnetic field $B$ and the thermal energy \begin{equation} \alpha = \frac{\mu_0 \mu}{k_{\text{B}} T}B \end{equation} <a id='fig_4'></a><figure> <img src='figures/headtotailconf.png' alt='schematic representation of head to tail configuration' style='width: 200px;'/> <center> <figcaption>Figure 4: Schematic representation of the head-to-tail configuration of two magnetic particles at close contact.</figcaption> </center> </figure> Structure of this tutorial The aim of this tutorial is to introduce the basic features of ESPResSo for ferrofluids or dipolar fluids in general. In part I and part II we will do this for a monolayer-ferrofluid, in part III for a three dimensional system. In part I we will examine the clusters which are present in all interesting ferrofluid systems. In part II we will examine the influence of the dipole-dipole-interaction on the magnetization curve of a ferrofluid.
In part III we calculate estimators for the initial susceptibility using fluctuation formulas and sample the magnetization curve. We assume the reader is familiar with the basic concepts of Python and MD simulations. Remark: The equilibration and sampling times used in this tutorial would be not sufficient for scientific purposes, but they are long enough to get at least a qualitative insight of the behaviour of ferrofluids. They have been shortened so we achieve reasonable computation times for the purpose of a tutorial. Compiling ESPResSo for this Tutorial For this tutorial the following features of ESPResSo are needed ```c++ define EXTERNAL_FORCES define ROTATION define DIPOLES define LENNARD_JONES ``` Please uncomment them in the <tt>myconfig.hpp</tt> and compile ESPResSo using this <tt>myconfig.hpp</tt>. A Monolayer-Ferrofluid System in ESPResSo For interesting ferrofluid systems, where the fraction of ferromagnetic particles in the liquid carrier and their dipole moment are not vanishingly small, the ferromagnetic particles form clusters of different shapes and sizes. If the fraction and/or dipole moments are big enough the clusters can interconnect with each other and form a whole space occupying network. In this part we want to investigate the number of clusters as well as their shape and size in our simulated monolayer ferrofluid system. It should be noted that a monolayer is a quasi three dimensional system (q2D), i.e. two dimensional for the positions and three dimensional for the orientation of the dipole moments. Setup We start with checking for the presence of ESPResSo features and importing all necessary packages. End of explanation """ # Lennard-Jones parameters LJ_SIGMA = 1 LJ_EPSILON = 1 LJ_CUT = 2**(1. / 6.) * LJ_SIGMA # Particles N_PART = 1200 # Area fraction of the mono-layer PHI = 0.1 # Dipolar interaction parameter lambda = mu_0 m^2 /(4 pi sigma^3 KT) DIP_LAMBDA = 4 # Temperature KT = 1.0 # Friction coefficient GAMMA = 1.0 # Time step TIME_STEP = 0.01 """ Explanation: Now we set up all simulation parameters. End of explanation """ # System setup # BOX_SIZE = ... print("Box size", BOX_SIZE) # Note that the dipolar P3M and dipolar layer correction need a cubic # simulation box for technical reasons. system = espressomd.System(box_l=(BOX_SIZE, BOX_SIZE, BOX_SIZE)) system.time_step = TIME_STEP """ Explanation: Note that we declared a <tt>lj_cut</tt>. This will be used as the cut-off radius of the Lennard-Jones potential to obtain a purely repulsive WCA potential. Now we set up the system. The length of the simulation box is calculated using the desired area fraction and the area all particles occupy. Then we create the ESPResSo system and pass the simulation step. For the Verlet list skin parameter we use the built-in tuning algorithm of ESPResSo. Exercise: How large does BOX_SIZE have to be for a system of N_PART particles with a volume (area) fraction PHI? Define BOX_SIZE. $$ L_{\text{box}} = \sqrt{\frac{N A_{\text{sphere}}}{\varphi}} $$ python BOX_SIZE = (N_PART * np.pi * (LJ_SIGMA / 2.)**2 / PHI)**0.5 End of explanation """ # Lennard-Jones interaction system.non_bonded_inter[0, 0].lennard_jones.set_params(epsilon=LJ_EPSILON, sigma=LJ_SIGMA, cutoff=LJ_CUT, shift="auto") """ Explanation: Now we set up the interaction between the particles as a non-bonded interaction and use the Lennard-Jones potential as the interaction potential. Here we use the above mentioned cut-off radius to get a purely repulsive interaction. End of explanation """ # Random dipole moments # ... 
# dip = ... # Random positions in the monolayer pos = BOX_SIZE * np.hstack((np.random.random((N_PART, 2)), np.zeros((N_PART, 1)))) """ Explanation: Now we generate random positions and orientations of the particles and their dipole moments. Hint: It should be noted that we seed the random number generator of numpy. Thus the initial configuration of our system is the same every time this script will be executed. You can change it to another one to simulate with a different initial configuration. Exercise: How does one set up randomly oriented dipole moments? Hint: Think of the way that different methods could introduce a bias in the distribution of the orientations. Create a variable dip as a N_PART x 3 numpy array, which contains the randomly distributed dipole moments. ```python Random dipole moments np.random.seed(seed=1) dip_phi = 2. * np.pi * np.random.random((N_PART, 1)) dip_cos_theta = 2 * np.random.random((N_PART, 1)) - 1 dip_sin_theta = np.sin(np.arccos(dip_cos_theta)) dip = np.hstack(( dip_sin_theta * np.sin(dip_phi), dip_sin_theta * np.cos(dip_phi), dip_cos_theta)) ``` End of explanation """ # Add particles system.part.add(pos=pos, rotation=N_PART * [(1, 1, 1)], dip=dip, fix=N_PART * [(0, 0, 1)]) """ Explanation: Now we add the particles with their positions and orientations to our system. Thereby we activate all degrees of freedom for the orientation of the dipole moments. As we want a two dimensional system we only allow the particles to translate in $x$- and $y$-direction and not in $z$-direction by using the <tt>fix</tt> argument. End of explanation """ # Set integrator to steepest descent method system.integrator.set_steepest_descent( f_max=0, gamma=0.1, max_displacement=0.05) """ Explanation: Be aware that we do not set the magnitude of the magnetic dipole moments to the particles. As in our case all particles have the same dipole moment it is possible to rewrite the dipole-dipole interaction potential to \begin{equation} u_{\text{DD}}(\vec{r}{ij}, \vec{\mu}_i, \vec{\mu}_j) = \gamma \mu^2 \left(\frac{\vec{\hat{\mu}}_i \cdot \vec{\hat{\mu}}_j}{r{ij}^3} - 3\frac{(\vec{\hat{\mu}}i \cdot \vec{r}{ij}) \cdot (\vec{\hat{\mu}}j \cdot \vec{r}{ij})}{r_{ij}^5}\right) \end{equation} where $\vec{\hat{\mu}}_i$ is the unit vector of the dipole moment $i$ and $\mu$ is the magnitude of the dipole moments. Thus we can only prescribe the initial orientation of the dipole moment to the particles and take the magnitude of the moments into account when calculating the dipole-dipole interaction with Dipolar P3M, by modifying the original Dipolar P3M prefactor $\gamma$ such that \begin{equation} \tilde{\gamma} = \gamma \mu^2 = \frac{\mu_0}{4\pi}\mu^2 = \lambda \sigma^3 k_{\text{B}}T \end{equation} Of course it would also be possible to prescribe the whole dipole moment vectors to every particle and leave the prefactor of Dipolar P3M unchanged ($\gamma$). In fact we have to do this if we want to simulate polydisperse systems. Now we choose the steepest descent integrator to remove possible overlaps of the particles. End of explanation """ # Switch to velocity Verlet integrator system.integrator.set_vv() system.thermostat.set_langevin(kT=KT, gamma=GAMMA, seed=1) """ Explanation: Exercise: Perform a steepest descent energy minimization. Track the relative energy change $E_{\text{rel}}$ per minimization loop (where the integrator is run for 10 steps) and terminate once $E_{\text{rel}} \le 0.05$, i.e. when there is less than a 5% difference in the relative energy change in between iterations. 
```python import sys energy = system.analysis.energy()['total'] relative_energy_change = 1.0 while relative_energy_change > 0.05: system.integrator.run(10) energy_new = system.analysis.energy()['total'] # Prevent division by zero errors: if energy < sys.float_info.epsilon: break relative_energy_change = (energy - energy_new) / energy print(f"Minimization, relative change in energy: {relative_energy_change}") energy = energy_new ``` For the simulation of our system we choose the velocity Verlet integrator. After that we set up the thermostat which is, in our case, a Langevin thermostat to simulate in an NVT ensemble. Hint: It should be noted that we seed the Langevin thermostat, thus the time evolution of the system is partly predefined. Partly because of the numeric accuracy and the automatic tuning algorithms of Dipolar P3M and DLC where the resulting parameters are slightly different every time. You can change the seed to get a guaranteed different time evolution. End of explanation """ # Setup dipolar P3M and dipolar layer correction dp3m = DipolarP3M(accuracy=5E-4, prefactor=DIP_LAMBDA * LJ_SIGMA**3 * KT) dlc = DLC(maxPWerror=1E-4, gap_size=BOX_SIZE - LJ_SIGMA) system.actors.add(dp3m) system.actors.add(dlc) # tune verlet list skin system.cell_system.tune_skin(min_skin=0.4, max_skin=2., tol=0.2, int_steps=100) # print skin value print('tuned skin = {}'.format(system.cell_system.skin)) """ Explanation: To calculate the dipole-dipole interaction we use the Dipolar P3M method (see Ref. <a href='#[1]'>[1]</a>) which is based on the Ewald summation. By default the boundary conditions of the system are set to conducting which means the dielectric constant is set to infinity for the surrounding medium. As we want to simulate a two dimensional system we additionally use the dipolar layer correction (DLC) (see Ref. <a href='#[2]'>[2]</a>). As we add <tt>DipolarP3M</tt> to our system as an actor, a tuning function is started automatically which tries to find the optimal parameters for Dipolar P3M and prints them to the screen. The last line of the output is the value of the tuned skin. End of explanation """ # Equilibrate print("Equilibration...") EQUIL_ROUNDS = 20 EQUIL_STEPS = 1000 for i in range(EQUIL_ROUNDS): system.integrator.run(EQUIL_STEPS) print( f"progress: {(i + 1) * 100. / EQUIL_ROUNDS}%, dipolar energy: {system.analysis.energy()['dipolar']}", end="\r") print("\nEquilibration done") """ Explanation: Now we equilibrate the dipole-dipole interaction for some time End of explanation """ LOOPS = 100 """ Explanation: Sampling The system will be sampled over 100 loops. End of explanation """ import matplotlib.pyplot as plt import matplotlib.animation as animation from tempfile import NamedTemporaryFile import base64 VIDEO_TAG = """<video controls> <source src="data:video/x-m4v;base64,{0}" type="video/mp4"> Your browser does not support the video tag. 
</video>""" def anim_to_html(anim): if not hasattr(anim, '_encoded_video'): with NamedTemporaryFile(suffix='.mp4') as f: anim.save(f.name, fps=20, extra_args=['-vcodec', 'libx264']) with open(f.name, "rb") as g: video = g.read() anim._encoded_video = base64.b64encode(video).decode('ascii') plt.close(anim._fig) return VIDEO_TAG.format(anim._encoded_video) animation.Animation._repr_html_ = anim_to_html def init(): # Set x and y range ax.set_ylim(0, BOX_SIZE) ax.set_xlim(0, BOX_SIZE) x_data, y_data = [], [] part.set_data(x_data, y_data) return part, """ Explanation: As the system is two dimensional, we can simply do a scatter plot to get a visual representation of a system state. To get a better insight of how a ferrofluid system develops during time we will create a video of the development of our system during the sampling. If you only want to sample the system simply go to Sampling without animation Sampling with animation To get an animation of the system development we have to create a function which will save the video and embed it in an html string. End of explanation """ fig, ax = plt.subplots(figsize=(10, 10)) part, = ax.plot([], [], 'o') animation.FuncAnimation(fig, run, frames=LOOPS, blit=True, interval=0, repeat=False, init_func=init) """ Explanation: Exercise: In the following an animation loop is defined, however it is incomplete. Extend the code such that in every loop the system is integrated for 100 steps. Afterwards x_data and y_data have to be populated by the folded $x$- and $y$- positions of the particles. (You may copy and paste the incomplete code template to the empty cell below.) ```python def run(i): # < excercise > # Save current system state as a plot x_data, y_data = # &lt; excercise &gt; ax.figure.canvas.draw() part.set_data(x_data, y_data) print("progress: {:3.0f}%".format((i + 1) * 100. / LOOPS), end="\r") return part, ``` ```python def run(i): system.integrator.run(100) # Save current system state as a plot x_data, y_data = system.part[:].pos_folded[:, 0], system.part[:].pos_folded[:, 1] ax.figure.canvas.draw() part.set_data(x_data, y_data) print("progress: {:3.0f}%".format((i + 1) * 100. / LOOPS), end="\r") return part, ``` Now we use the <tt>animation</tt> class of <tt>matplotlib</tt> to save snapshots of the system as frames of a video which is then displayed after the sampling is finished. Between two frames are 100 integration steps. In the video chain-like and ring-like clusters should be visible, as well as some isolated monomers. End of explanation """ n_clusters = [] cluster_sizes = [] """ Explanation: Cluster analysis To quantify the number of clusters and their respective sizes, we now want to perform a cluster analysis. For that we can use ESPREsSo's cluster analysis class. Exercise: Setup a cluster analysis object (ClusterStructure class) and assign its instance to the variable cluster_structure. As criterion for the cluster analysis use a distance criterion where particles are assumed to be part of a cluster if the neaest neighbors are closer than $1.3\sigma_{\text{LJ}}$. ```python Setup cluster analysis cluster_structure = ClusterStructure(pair_criterion=DistanceCriterion(cut_off=1.3 * LJ_SIGMA)) ``` Now we sample our system for some time and do a cluster analysis in order to get an estimator of the cluster observables. For the cluster analysis we create two empty lists. The first for the number of clusters and the second for their respective sizes. 
End of explanation """ import matplotlib.pyplot as plt plt.figure(figsize=(10, 10)) plt.xlim(0, BOX_SIZE) plt.ylim(0, BOX_SIZE) plt.xlabel('x-position', fontsize=20) plt.ylabel('y-position', fontsize=20) plt.plot(system.part[:].pos_folded[:, 0], system.part[:].pos_folded[:, 1], 'o') plt.show() """ Explanation: Sampling without animation The following code just samples the system and does a cluster analysis every <tt>loops</tt> (100 by default) simulation steps. Exercise: Write an integration loop which runs a cluster analysis on the system, saving the number of clusters n_clusters and the size distribution cluster_sizes. Take the following as a starting point: ```python for i in range(LOOPS): # Run cluster analysis cluster_structure.run_for_all_pairs() # Gather statistics: n_clusters.append(# &lt; excercise &gt;) for c in cluster_structure.clusters: cluster_sizes.append(# &lt; excercise &gt;) system.integrator.run(100) print("progress: {:3.0f}%".format((float(i)+1)/LOOPS * 100), end="\r") ``` ```python for i in range(LOOPS): # Run cluster analysis cluster_structure.run_for_all_pairs() # Gather statistics: n_clusters.append(len(cluster_structure.clusters)) for c in cluster_structure.clusters: cluster_sizes.append(c[1].size()) system.integrator.run(100) print("progress: {:3.0f}%".format((float(i) + 1) / LOOPS * 100), end="\r") ``` You may want to get a visualization of the current state of the system. For that we plot the particle positions folded to the simulation box using <tt>matplotlib</tt>. End of explanation """ plt.figure(figsize=(10, 10)) plt.grid() plt.xticks(range(0, 20)) plt.plot(size_dist[1][:-2], size_dist[0][:-1] / float(LOOPS)) plt.xlabel('size of clusters', fontsize=20) plt.ylabel('distribution', fontsize=20) plt.show() """ Explanation: In the plot chain-like and ring-like clusters should be visible. Some of them are connected via Y- or X-links to each other. Also some monomers should be present. Cluster distribution After having sampled our system we now can calculate estimators for the expectation value of the cluster sizes and their distribution. Exercise: Use numpy to calculate a histogram of the cluster sizes and assign it to the variable size_dist. Take only clusters up to a size of 19 particles into account. Hint: In order not to count clusters with size 20 or more, one may include an additional bin containing these. The reason for that is that numpy defines the histogram bins as half-open intervals with the open border at the upper bin edge. Consequently clusters of larger sizes are attributed to the last bin. By not using the last bin in the plot below, these clusters can effectively be neglected. python size_dist = np.histogram(cluster_sizes, range=(2, 21), bins=19) Now we can plot this histogram and should see an exponential decrease in the number of particles in a cluster along the size of a cluster, i.e. the number of monomers in it End of explanation """
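Beyond the histogram, a few scalar estimators can be read off directly from the sampled lists. A minimal sketch reusing the n_clusters and cluster_sizes lists filled above (the quoted error bar treats the sampling loops as independent, which they are not, so it is only a rough indication):

```python
n_clusters_arr = np.asarray(n_clusters, dtype=float)
cluster_sizes_arr = np.asarray(cluster_sizes, dtype=float)

print(f"Mean number of clusters: {n_clusters_arr.mean():.2f} "
      f"+/- {n_clusters_arr.std() / np.sqrt(len(n_clusters_arr)):.2f} (naive error bar)")
print(f"Mean cluster size: {cluster_sizes_arr.mean():.2f} particles")
print(f"Largest observed cluster: {int(cluster_sizes_arr.max())} particles")
```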
tarashor/vibrations
py/notebooks/MatricesForPlaneCorrugatedShells1.ipynb
mit
from sympy import * from geom_util import * from sympy.vector import CoordSys3D import matplotlib.pyplot as plt import sys sys.path.append("../") %matplotlib inline %reload_ext autoreload %autoreload 2 %aimport geom_util # Any tweaks that normally go in .matplotlibrc, etc., should explicitly go here %config InlineBackend.figure_format='retina' plt.rcParams['figure.figsize'] = (12, 12) plt.rc('text', usetex=True) plt.rc('font', family='serif') # SMALL_SIZE = 42 # MEDIUM_SIZE = 42 # BIGGER_SIZE = 42 # plt.rc('font', size=SMALL_SIZE) # controls default text sizes # plt.rc('axes', titlesize=SMALL_SIZE) # fontsize of the axes title # plt.rc('axes', labelsize=MEDIUM_SIZE) # fontsize of the x and y labels # plt.rc('xtick', labelsize=SMALL_SIZE) # fontsize of the tick labels # plt.rc('ytick', labelsize=SMALL_SIZE) # fontsize of the tick labels # plt.rc('legend', fontsize=SMALL_SIZE) # legend fontsize # plt.rc('figure', titlesize=BIGGER_SIZE) # fontsize of the figure title init_printing() N = CoordSys3D('N') alpha1, alpha2, alpha3 = symbols("alpha_1 alpha_2 alpha_3", real = True, positive=True) """ Explanation: Matrix generation Init symbols for sympy End of explanation """ R, L, ga, gv = symbols("R L g_a g_v", real = True, positive=True) a1 = pi / 2 + (L / 2 - alpha1)/R a2 = 2 * pi * alpha1 / L x1 = (R + ga * cos(gv * a1)) * cos(a1) x2 = alpha2 x3 = (R + ga * cos(gv * a1)) * sin(a1) r = x1*N.i + x2*N.j + x3*N.k z = ga/R*gv*sin(gv*a1) w = 1 + ga/R*cos(gv*a1) dr1x=(z*cos(a1) + w*sin(a1)) dr1z=(z*sin(a1) - w*cos(a1)) r1 = dr1x*N.i + dr1z*N.k r2 =N.j mag=sqrt((w)**2+(z)**2) nx = -dr1z/mag nz = dr1x/mag n = nx*N.i+nz*N.k dnx=nx.diff(alpha1) dnz=nz.diff(alpha1) dn= dnx*N.i+dnz*N.k Ralpha = r+alpha3*n R1=r1+alpha3*dn R2=Ralpha.diff(alpha2) R3=n r1 R1a3x=-1/(mag**3)*(w*cos(a1) - z*sin(a1))*(-1/R*w+ga*gv*gv/(R*R)*cos(gv*a1))*z+(1/mag)*(1/R*w*sin(a1)+ga*gv*gv/(R*R)*cos(gv*a1)*sin(a1)+2/R*z*cos(a1)) R1a3x ddr=r1.diff(alpha1) cp=r1.cross(ddr) k=cp.magnitude()/(mag**3) k """ Explanation: Cylindrical coordinates End of explanation """ k=simplify(k) k q=(1/R*w+ga*gv*gv/(R*R)*cos(gv*a1)) f=q**2+4/(R*R)*z*z f=trigsimp(f) f f=expand(f) f trigsimp(f) q=(1/R*w+ga*gv*gv/(R*R)*cos(gv*a1)) f1=q*w+2/R*z*z f1=trigsimp(f1) f1 f1=expand(f1) f1 f1=trigsimp(f1) f1 R1a3x = trigsimp(R1a3x) R1a3x R1 R2 R3 """ Explanation: k=trigsimp(k) k End of explanation """ import plot %aimport plot x1 = Ralpha.dot(N.i) x3 = Ralpha.dot(N.k) alpha1_x = lambdify([R, L, ga, gv, alpha1, alpha3], x1, "numpy") alpha3_z = lambdify([R, L, ga, gv, alpha1, alpha3], x3, "numpy") R_num = 1/0.8 L_num = 2 h_num = 0.1 ga_num = h_num/3 gv_num = 20 x1_start = 0 x1_end = L_num x3_start = -h_num/2 x3_end = h_num/2 def alpha_to_x(a1, a2, a3): x=alpha1_x(R_num, L_num, ga_num, gv_num, a1, a3) z=alpha3_z(R_num, L_num, ga_num, gv_num, a1, a3) return x, 0, z plot.plot_init_geometry_2(x1_start, x1_end, x3_start, x3_end, alpha_to_x) %aimport plot R3_1=R3.dot(N.i) R3_3=R3.dot(N.k) R3_1_x = lambdify([R, L, ga, gv, alpha1, alpha3], R3_1, "numpy") R3_3_z = lambdify([R, L, ga, gv, alpha1, alpha3], R3_3, "numpy") def R3_to_x(a1, a2, a3): x=R3_1_x(R_num, L_num, ga_num, gv_num, a1, a3) z=R3_3_z(R_num, L_num, ga_num, gv_num, a1, a3) return x, 0, z plot.plot_vectors(x1_start, x1_end, 0, alpha_to_x, R3_to_x) %aimport plot R1_1=R1.dot(N.i) R1_3=R1.dot(N.k) R1_1_x = lambdify([R, L, ga, gv, alpha1, alpha3], R1_1, "numpy") R1_3_z = lambdify([R, L, ga, gv, alpha1, alpha3], R1_3, "numpy") def R1_to_x(a1, a2, a3): x=R1_1_x(R_num, L_num, ga_num, gv_num, a1, a3) z=R1_3_z(R_num, L_num, 
ga_num, gv_num, a1, a3) return x, 0, z plot.plot_vectors(x1_start, x1_end, h_num/2, alpha_to_x, R1_to_x) """ Explanation: Draw End of explanation """ H1 = sqrt((alpha3*((-(1 + ga*cos(gv*(pi/2 + (L/2 - alpha1)/R))/R)*sin((L/2 - alpha1)/R) - ga*gv*sin(gv*(pi/2 + (L/2 - alpha1)/R))*cos((L/2 - alpha1)/R)/R)*(-ga*gv*(1 + ga*cos(gv*(pi/2 + (L/2 - alpha1)/R))/R)*sin(gv*(pi/2 + (L/2 - alpha1)/R))/R**2 + ga**2*gv**3*sin(gv*(pi/2 + (L/2 - alpha1)/R))*cos(gv*(pi/2 + (L/2 - alpha1)/R))/R**3)/((1 + ga*cos(gv*(pi/2 + (L/2 - alpha1)/R))/R)**2 + ga**2*gv**2*sin(gv*(pi/2 + (L/2 - alpha1)/R))**2/R**2)**(3/2) + ((1 + ga*cos(gv*(pi/2 + (L/2 - alpha1)/R))/R)*cos((L/2 - alpha1)/R)/R + ga*gv**2*cos((L/2 - alpha1)/R)*cos(gv*(pi/2 + (L/2 - alpha1)/R))/R**2 - 2*ga*gv*sin((L/2 - alpha1)/R)*sin(gv*(pi/2 + (L/2 - alpha1)/R))/R**2)/sqrt((1 + ga*cos(gv*(pi/2 + (L/2 - alpha1)/R))/R)**2 + ga**2*gv**2*sin(gv*(pi/2 + (L/2 - alpha1)/R))**2/R**2)) + (1 + ga*cos(gv*(pi/2 + (L/2 - alpha1)/R))/R)*cos((L/2 - alpha1)/R) - ga*gv*sin((L/2 - alpha1)/R)*sin(gv*(pi/2 + (L/2 - alpha1)/R))/R)**2 + (alpha3*(((1 + ga*cos(gv*(pi/2 + (L/2 - alpha1)/R))/R)*cos((L/2 - alpha1)/R) - ga*gv*sin((L/2 - alpha1)/R)*sin(gv*(pi/2 + (L/2 - alpha1)/R))/R)*(-ga*gv*(1 + ga*cos(gv*(pi/2 + (L/2 - alpha1)/R))/R)*sin(gv*(pi/2 + (L/2 - alpha1)/R))/R**2 + ga**2*gv**3*sin(gv*(pi/2 + (L/2 - alpha1)/R))*cos(gv*(pi/2 + (L/2 - alpha1)/R))/R**3)/((1 + ga*cos(gv*(pi/2 + (L/2 - alpha1)/R))/R)**2 + ga**2*gv**2*sin(gv*(pi/2 + (L/2 - alpha1)/R))**2/R**2)**(3/2) + ((1 + ga*cos(gv*(pi/2 + (L/2 - alpha1)/R))/R)*sin((L/2 - alpha1)/R)/R + ga*gv**2*sin((L/2 - alpha1)/R)*cos(gv*(pi/2 + (L/2 - alpha1)/R))/R**2 + 2*ga*gv*sin(gv*(pi/2 + (L/2 - alpha1)/R))*cos((L/2 - alpha1)/R)/R**2)/sqrt((1 + ga*cos(gv*(pi/2 + (L/2 - alpha1)/R))/R)**2 + ga**2*gv**2*sin(gv*(pi/2 + (L/2 - alpha1)/R))**2/R**2)) + (1 + ga*cos(gv*(pi/2 + (L/2 - alpha1)/R))/R)*sin((L/2 - alpha1)/R) + ga*gv*sin(gv*(pi/2 + (L/2 - alpha1)/R))*cos((L/2 - alpha1)/R)/R)**2) H2=S(1) H3=S(1) H=[H1, H2, H3] DIM=3 dH = zeros(DIM,DIM) for i in range(DIM): dH[i,0]=H[i].diff(alpha1) dH[i,1]=H[i].diff(alpha2) dH[i,2]=H[i].diff(alpha3) trigsimp(H1) """ Explanation: Lame params End of explanation """ %aimport geom_util G_up = getMetricTensorUpLame(H1, H2, H3) """ Explanation: Metric tensor ${\displaystyle \hat{G}=\sum_{i,j} g^{ij}\vec{R}_i\vec{R}_j}$ End of explanation """ G_down = getMetricTensorDownLame(H1, H2, H3) """ Explanation: ${\displaystyle \hat{G}=\sum_{i,j} g_{ij}\vec{R}^i\vec{R}^j}$ End of explanation """ DIM=3 G_down_diff = MutableDenseNDimArray.zeros(DIM, DIM, DIM) for i in range(DIM): for j in range(DIM): for k in range(DIM): G_down_diff[i,i,k]=2*H[i]*dH[i,k] GK = getChristoffelSymbols2(G_up, G_down_diff, (alpha1, alpha2, alpha3)) """ Explanation: Christoffel symbols End of explanation """ def row_index_to_i_j_grad(i_row): return i_row // 3, i_row % 3 B = zeros(9, 12) B[0,1] = S(1) B[1,2] = S(1) B[2,3] = S(1) B[3,5] = S(1) B[4,6] = S(1) B[5,7] = S(1) B[6,9] = S(1) B[7,10] = S(1) B[8,11] = S(1) for row_index in range(9): i,j=row_index_to_i_j_grad(row_index) B[row_index, 0] = -GK[i,j,0] B[row_index, 4] = -GK[i,j,1] B[row_index, 8] = -GK[i,j,2] """ Explanation: Gradient of vector $ \left( \begin{array}{c} \nabla_1 u_1 \ \nabla_2 u_1 \ \nabla_3 u_1 \ \nabla_1 u_2 \ \nabla_2 u_2 \ \nabla_3 u_2 \ \nabla_1 u_3 \ \nabla_2 u_3 \ \nabla_3 u_3 \ \end{array} \right) = B \cdot \left( \begin{array}{c} u_1 \ \frac { \partial u_1 } { \partial \alpha_1} \ \frac { \partial u_1 } { \partial \alpha_2} \ \frac { \partial u_1 } { \partial 
\alpha_3} \ u_2 \ \frac { \partial u_2 } { \partial \alpha_1} \ \frac { \partial u_2 } { \partial \alpha_2} \ \frac { \partial u_2 } { \partial \alpha_3} \ u_3 \ \frac { \partial u_3 } { \partial \alpha_1} \ \frac { \partial u_3 } { \partial \alpha_2} \ \frac { \partial u_3 } { \partial \alpha_3} \ \end{array} \right) = B \cdot D \cdot \left( \begin{array}{c} u^1 \ \frac { \partial u^1 } { \partial \alpha_1} \ \frac { \partial u^1 } { \partial \alpha_2} \ \frac { \partial u^1 } { \partial \alpha_3} \ u^2 \ \frac { \partial u^2 } { \partial \alpha_1} \ \frac { \partial u^2 } { \partial \alpha_2} \ \frac { \partial u^2 } { \partial \alpha_3} \ u^3 \ \frac { \partial u^3 } { \partial \alpha_1} \ \frac { \partial u^3 } { \partial \alpha_2} \ \frac { \partial u^3 } { \partial \alpha_3} \ \end{array} \right) $ End of explanation """ E=zeros(6,9) E[0,0]=1 E[1,4]=1 E[2,8]=1 E[3,1]=1 E[3,3]=1 E[4,2]=1 E[4,6]=1 E[5,5]=1 E[5,7]=1 E def E_NonLinear(grad_u): N = 3 du = zeros(N, N) # print("===Deformations===") for i in range(N): for j in range(N): index = i*N+j du[j,i] = grad_u[index] # print("========") I = eye(3) a_values = S(1)/S(2) * du * G_up E_NL = zeros(6,9) E_NL[0,0] = a_values[0,0] E_NL[0,3] = a_values[0,1] E_NL[0,6] = a_values[0,2] E_NL[1,1] = a_values[1,0] E_NL[1,4] = a_values[1,1] E_NL[1,7] = a_values[1,2] E_NL[2,2] = a_values[2,0] E_NL[2,5] = a_values[2,1] E_NL[2,8] = a_values[2,2] E_NL[3,1] = 2*a_values[0,0] E_NL[3,4] = 2*a_values[0,1] E_NL[3,7] = 2*a_values[0,2] E_NL[4,0] = 2*a_values[2,0] E_NL[4,3] = 2*a_values[2,1] E_NL[4,6] = 2*a_values[2,2] E_NL[5,2] = 2*a_values[1,0] E_NL[5,5] = 2*a_values[1,1] E_NL[5,8] = 2*a_values[1,2] return E_NL %aimport geom_util u=getUHat3DPlane(alpha1, alpha2, alpha3) # u=getUHatU3Main(alpha1, alpha2, alpha3) gradu=B*u E_NL = E_NonLinear(gradu)*B """ Explanation: Strain tensor $ \left( \begin{array}{c} \varepsilon_{11} \ \varepsilon_{22} \ \varepsilon_{33} \ 2\varepsilon_{12} \ 2\varepsilon_{13} \ 2\varepsilon_{23} \ \end{array} \right) = \left(E + E_{NL} \left( \nabla \vec{u} \right) \right) \cdot \left( \begin{array}{c} \nabla_1 u_1 \ \nabla_2 u_1 \ \nabla_3 u_1 \ \nabla_1 u_2 \ \nabla_2 u_2 \ \nabla_3 u_2 \ \nabla_1 u_3 \ \nabla_2 u_3 \ \nabla_3 u_3 \ \end{array} \right)$ End of explanation """ P=zeros(12,12) P[0,0]=H[0] P[1,0]=dH[0,0] P[1,1]=H[0] P[2,0]=dH[0,1] P[2,2]=H[0] P[3,0]=dH[0,2] P[3,3]=H[0] P[4,4]=H[1] P[5,4]=dH[1,0] P[5,5]=H[1] P[6,4]=dH[1,1] P[6,6]=H[1] P[7,4]=dH[1,2] P[7,7]=H[1] P[8,8]=H[2] P[9,8]=dH[2,0] P[9,9]=H[2] P[10,8]=dH[2,1] P[10,10]=H[2] P[11,8]=dH[2,2] P[11,11]=H[2] P=simplify(P) P B_P = zeros(9,9) for i in range(3): for j in range(3): row_index = i*3+j B_P[row_index, row_index] = 1/(H[i]*H[j]) Grad_U_P = simplify(B_P*B*P) Grad_U_P StrainL=simplify(E*Grad_U_P) StrainL %aimport geom_util u=getUHatU3Main(alpha1, alpha2, alpha3) gradup=Grad_U_P*u E_NLp = E_NonLinear(gradup)*Grad_U_P simplify(E_NLp) """ Explanation: Physical coordinates $u_i=u_{[i]} H_i$ End of explanation """ T=zeros(12,6) T[0,0]=1 T[0,2]=alpha3 T[1,1]=1 T[1,3]=alpha3 T[3,2]=1 T[8,4]=1 T[9,5]=1 T D_p_T = StrainL*T simplify(D_p_T) u = Function("u") t = Function("theta") w = Function("w") u1=u(alpha1)+alpha3*t(alpha1) u3=w(alpha1) gu = zeros(12,1) gu[0] = u1 gu[1] = u1.diff(alpha1) gu[3] = u1.diff(alpha3) gu[8] = u3 gu[9] = u3.diff(alpha1) gradup=Grad_U_P*gu # o20=(K*u(alpha1)-w(alpha1).diff(alpha1)+t(alpha1))/2 # o21=K*t(alpha1) # O=1/2*o20*o20+alpha3*o20*o21-alpha3*K/2*o20*o20 # O=expand(O) # O=collect(O,alpha3) # simplify(O) StrainNL = E_NonLinear(gradup)*gradup 
simplify(StrainNL) """ Explanation: Tymoshenko theory $u_1 \left( \alpha_1, \alpha_2, \alpha_3 \right)=u\left( \alpha_1 \right)+\alpha_3\gamma \left( \alpha_1 \right) $ $u_2 \left( \alpha_1, \alpha_2, \alpha_3 \right)=0 $ $u_3 \left( \alpha_1, \alpha_2, \alpha_3 \right)=w\left( \alpha_1 \right) $ $ \left( \begin{array}{c} u_1 \ \frac { \partial u_1 } { \partial \alpha_1} \ \frac { \partial u_1 } { \partial \alpha_2} \ \frac { \partial u_1 } { \partial \alpha_3} \ u_2 \ \frac { \partial u_2 } { \partial \alpha_1} \ \frac { \partial u_2 } { \partial \alpha_2} \ \frac { \partial u_2 } { \partial \alpha_3} \ u_3 \ \frac { \partial u_3 } { \partial \alpha_1} \ \frac { \partial u_3 } { \partial \alpha_2} \ \frac { \partial u_3 } { \partial \alpha_3} \ \end{array} \right) = T \cdot \left( \begin{array}{c} u \ \frac { \partial u } { \partial \alpha_1} \ \gamma \ \frac { \partial \gamma } { \partial \alpha_1} \ w \ \frac { \partial w } { \partial \alpha_1} \ \end{array} \right) $ End of explanation """ L=zeros(12,12) h=Symbol('h') p0=1/2-alpha3/h p1=1/2+alpha3/h p2=1-(2*alpha3/h)**2 L[0,0]=p0 L[0,2]=p1 L[0,4]=p2 L[1,1]=p0 L[1,3]=p1 L[1,5]=p2 L[3,0]=p0.diff(alpha3) L[3,2]=p1.diff(alpha3) L[3,4]=p2.diff(alpha3) L[8,6]=p0 L[8,8]=p1 L[8,10]=p2 L[9,7]=p0 L[9,9]=p1 L[9,11]=p2 L[11,6]=p0.diff(alpha3) L[11,8]=p1.diff(alpha3) L[11,10]=p2.diff(alpha3) L D_p_L = StrainL*L simplify(D_p_L) h = 0.5 exp=(0.5-alpha3/h)*(1-(2*alpha3/h)**2)#/(1+alpha3*0.8) p02=integrate(exp, (alpha3, -h/2, h/2)) integral = expand(simplify(p02)) integral """ Explanation: Square theory $u^1 \left( \alpha_1, \alpha_2, \alpha_3 \right)=u_{10}\left( \alpha_1 \right)p_0\left( \alpha_3 \right)+u_{11}\left( \alpha_1 \right)p_1\left( \alpha_3 \right)+u_{12}\left( \alpha_1 \right)p_2\left( \alpha_3 \right) $ $u^2 \left( \alpha_1, \alpha_2, \alpha_3 \right)=0 $ $u^3 \left( \alpha_1, \alpha_2, \alpha_3 \right)=u_{30}\left( \alpha_1 \right)p_0\left( \alpha_3 \right)+u_{31}\left( \alpha_1 \right)p_1\left( \alpha_3 \right)+u_{32}\left( \alpha_1 \right)p_2\left( \alpha_3 \right) $ $ \left( \begin{array}{c} u^1 \ \frac { \partial u^1 } { \partial \alpha_1} \ \frac { \partial u^1 } { \partial \alpha_2} \ \frac { \partial u^1 } { \partial \alpha_3} \ u^2 \ \frac { \partial u^2 } { \partial \alpha_1} \ \frac { \partial u^2 } { \partial \alpha_2} \ \frac { \partial u^2 } { \partial \alpha_3} \ u^3 \ \frac { \partial u^3 } { \partial \alpha_1} \ \frac { \partial u^3 } { \partial \alpha_2} \ \frac { \partial u^3 } { \partial \alpha_3} \ \end{array} \right) = L \cdot \left( \begin{array}{c} u_{10} \ \frac { \partial u_{10} } { \partial \alpha_1} \ u_{11} \ \frac { \partial u_{11} } { \partial \alpha_1} \ u_{12} \ \frac { \partial u_{12} } { \partial \alpha_1} \ u_{30} \ \frac { \partial u_{30} } { \partial \alpha_1} \ u_{31} \ \frac { \partial u_{31} } { \partial \alpha_1} \ u_{32} \ \frac { \partial u_{32} } { \partial \alpha_1} \ \end{array} \right) $ End of explanation """ rho=Symbol('rho') B_h=zeros(3,12) B_h[0,0]=1 B_h[1,4]=1 B_h[2,8]=1 M=simplify(rho*P.T*B_h.T*G_up*B_h*P) M M_p = L.T*M*L*(1+alpha3/R) mass_matr = simplify(integrate(M_p, (alpha3, -h/2, h/2))) mass_matr """ Explanation: Mass matrix End of explanation """
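To go from the symbolic mass matrix to numbers, one option is to compile it with lambdify. The sketch below does not assume a particular set of remaining parameters (that depends on which substitutions were made in the cells above); it simply introspects the free symbols and evaluates the matrix at example values:

```python
# inspect which parameters the symbolic mass matrix still depends on
free_syms = sorted(mass_matr.free_symbols, key=lambda s: s.name)
print("free symbols:", free_syms)

# compile the matrix into a fast numerical function of those symbols
mass_matr_num = lambdify(free_syms, mass_matr, "numpy")

# example call: evaluate at sample values, passed in the printed order
sample_values = [1.0] * len(free_syms)
print(mass_matr_num(*sample_values))
```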
mne-tools/mne-tools.github.io
0.19/_downloads/f760cc2f1a5d6c625b1e14a0b05176dd/plot_ecog.ipynb
bsd-3-clause
# Authors: Eric Larson <larson.eric.d@gmail.com> # Chris Holdgraf <choldgraf@gmail.com> # # License: BSD (3-clause) import numpy as np import matplotlib.pyplot as plt from scipy.io import loadmat import mne from mne.viz import plot_alignment, snapshot_brain_montage print(__doc__) """ Explanation: Working with ECoG data MNE supports working with more than just MEG and EEG data. Here we show some of the functions that can be used to facilitate working with electrocorticography (ECoG) data. End of explanation """ mat = loadmat(mne.datasets.misc.data_path() + '/ecog/sample_ecog.mat') ch_names = mat['ch_names'].tolist() elec = mat['elec'] # electrode positions given in meters # Now we make a montage stating that the sEEG contacts are in head # coordinate system (although they are in MRI). This is compensated # by the fact that below we do not specicty a trans file so the Head<->MRI # transform is the identity. montage = mne.channels.make_dig_montage(ch_pos=dict(zip(ch_names, elec)), coord_frame='head') print('Created %s channel positions' % len(ch_names)) """ Explanation: Let's load some ECoG electrode locations and names, and turn them into a :class:mne.channels.DigMontage class. End of explanation """ info = mne.create_info(ch_names, 1000., 'ecog', montage=montage) """ Explanation: Now that we have our electrode positions in MRI coordinates, we can create our measurement info structure. End of explanation """ subjects_dir = mne.datasets.sample.data_path() + '/subjects' fig = plot_alignment(info, subject='sample', subjects_dir=subjects_dir, surfaces=['pial']) mne.viz.set_3d_view(fig, 200, 70) """ Explanation: We can then plot the locations of our electrodes on our subject's brain. <div class="alert alert-info"><h4>Note</h4><p>These are not real electrodes for this subject, so they do not align to the cortical surface perfectly.</p></div> End of explanation """ # We'll once again plot the surface, then take a snapshot. fig_scatter = plot_alignment(info, subject='sample', subjects_dir=subjects_dir, surfaces='pial') mne.viz.set_3d_view(fig_scatter, 200, 70) xy, im = snapshot_brain_montage(fig_scatter, montage) # Convert from a dictionary to array to plot xy_pts = np.vstack([xy[ch] for ch in info['ch_names']]) # Define an arbitrary "activity" pattern for viz activity = np.linspace(100, 200, xy_pts.shape[0]) # This allows us to use matplotlib to create arbitrary 2d scatterplots _, ax = plt.subplots(figsize=(10, 10)) ax.imshow(im) ax.scatter(*xy_pts.T, c=activity, s=200, cmap='coolwarm') ax.set_axis_off() plt.show() """ Explanation: Sometimes it is useful to make a scatterplot for the current figure view. This is best accomplished with matplotlib. We can capture an image of the current mayavi view, along with the xy position of each electrode, with the snapshot_brain_montage function. End of explanation """
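If a legend for the color scale is desired, a small variation of the last cell keeps the scatter handle and attaches a standard matplotlib colorbar (the label is a placeholder, since the activity values here are arbitrary):

```python
fig, ax = plt.subplots(figsize=(10, 10))
ax.imshow(im)
sc = ax.scatter(*xy_pts.T, c=activity, s=200, cmap='coolwarm')
fig.colorbar(sc, ax=ax, shrink=0.7, label='activity (arbitrary units)')
ax.set_axis_off()
plt.show()
```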
rustychris/stompy
examples/filtering.ipynb
mit
from stompy import filters, utils import matplotlib.pyplot as plt import numpy as np %matplotlib notebook # Sample data -- all times in hours dt=0.1 x=np.arange(0,100,dt) y=np.random.random(len(x)) target_cutoff=36.0 y_fir=filters.lowpass_fir(y,int(target_cutoff/dt)) y_iir=filters.lowpass(y,dt=dt,cutoff=target_cutoff/2.) y_godin=filters.lowpass_godin(y,in_t_days=x/24.) """ Explanation: Comparison of Filtering Methods This notebook tests/demonstrates three lowpass filtering methods. The typical use is for time series filtering, but there is no reason these methods cannot be used to filter coordinate series, or the lowpass result subtracted from the original signal to get a highpass series. The text here is high level, and belies a dim recollection of proper signal processing. DSP experts will be offended, but they are not the intended audience. FIR Finite impulse response, which is a glorified moving average. Compared to a standard "boxcar" moving average, we choose a Hanning window for a smoother response. This method deals well with NaNs (NaNs should be left in so that the input has an evenly spaced timebase). The cutoff period is where the frequency response falls to 0.5 of the DC response. A tidal filter should probably have a cutoff of about 72 hours to be sure that very little diurnal signal gets through. IIR Infinite impulse response, where each output point is calculated as a weighted sum of recent output points and recent input points. This method allows for fast filtering and sharply defined frequency responses. The "order" of the method defines how many "recent" inputs and outputs are considered. Higher order allows for sharper cutoffs between pass frequencies and stop frequencies, at the expense of possible numerical stability issues. Note that the cutoff period for the IIR method here is not the same as for FIR. The response falls to 0.5 at twice the cutoff period. A tidal filter can reasonably have a cutoff of 36 hours, which means that very little energy gets through at 36 hours, and only half of the energy at 72 hours get through. For example, an FIR filter with a cutoff at 36 hours will still pass half of the input signal at a 36-hour period. The IIR code would require a "cutoff" of 18 hours to get the same half-pass effect at 36 hours. This may change in the future, but will require a new function since there is code that depends on the current implementation. Godin This is an old-school moving average filter for removing tides from time series. It is intended to be applied to hourly data, though the implementation here will approximate a Godin filter on time series with arbitrary (but constant!) time steps. All of the methods preserve the length of the input data, but generally produce unusable results near the start and end of the output. End of explanation """ fig,ax=plt.subplots() ax.plot(x,y,label='Original',lw=0.2) ax.plot(x,y_fir,label='FIR') ax.plot(x,y_iir,label='IIR') ax.plot(x,y_godin,label='Godin') ax.legend(loc='upper right') """ Explanation: Construct a noise signal and plot the result of applying each method. End of explanation """ periods=10**(np.linspace(np.log10(1),np.log10(400),150)) freqs=1./periods # A single time base that's good enough for the full range x=np.arange(0,4*periods[-1],periods[0]/4.) 
dt=np.median(np.diff(x)) target_cutoff=36.0 freq=freqs[0] y=np.cos(2*np.pi*freq*x) win=np.hanning(len(y)) def fir36hour(y): return filters.lowpass_fir(y,int(2*target_cutoff/dt)) def iir36hour(y): return filters.lowpass(y,dt=dt,cutoff=target_cutoff,order=4) def godin(y): return filters.lowpass_godin(y,in_t_days=x/24.) def scan(f): gains=[] for freq in freqs: y=np.cos(2*np.pi*freq*x) y_filt=f(y) mag_in=utils.rms( win*y ) mag_out=utils.rms( win*y_filt) gains.append( (freq,mag_out/mag_in) ) return np.array(gains) fir_gains=scan(fir36hour) iir_gains=scan(iir36hour) godin_gains=scan(godin) fig,ax=plt.subplots() ax.loglog(iir_gains[:,0],iir_gains[:,1],label='IIR 4th order') ax.loglog(fir_gains[:,0],fir_gains[:,1],label='FIR') ax.loglog(godin_gains[:,0],godin_gains[:,1],label='Godin') ax.axvline(1./target_cutoff,label='36h',color='k',lw=0.8,zorder=-1) ax.axvline(1./24,label='__nolabel__',color='0.6',lw=0.8,zorder=-1) ax.axvline(1./24.84,label='__nolabel__',color='0.6',lw=0.8,zorder=-1) ax.axvline(1./12,label='__nolabel__',color='0.6',lw=0.8,zorder=-1) ax.axvline(1./12.42,label='__nolabel__',color='0.6',lw=0.8,zorder=-1) ax.axhline(0.5,label='__nolabel__',color='k',lw=0.8,zorder=-1) ax.set_xlabel('Freq (1/h)') ax.set_ylabel('Gain') ax.legend(loc='lower left') """ Explanation: Frequency Response This is a brute-force approach to frequency response to demonstrate the details of what each method does to incoming frequencies. Each filter is applied to a collection of sine-curve inputs of varying frequencies. For each frequency, the gain is computed by comparing the RMS magnitude of the input and output waveforms. End of explanation """
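The gain curves can also be condensed into a single number per filter: the period at which the gain crosses 0.5. A rough estimate by interpolation of the arrays computed above (it assumes the gain curve is monotonic through the crossing, which holds here away from the FIR sidelobes):

```python
def half_gain_period(gains):
    # interpolate the period (hours) at which the gain reaches 0.5
    periods = 1.0 / gains[:, 0]
    gain = gains[:, 1]
    order = np.argsort(gain)  # np.interp needs increasing x values
    return np.interp(0.5, gain[order], periods[order])

for name, gains in [('FIR', fir_gains), ('IIR', iir_gains), ('Godin', godin_gains)]:
    print("%-6s gain of 0.5 at ~%.1f hours" % (name, half_gain_period(gains)))
```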
espressomd/espresso
doc/tutorials/error_analysis/error_analysis_part2.ipynb
gpl-3.0
import numpy as np %matplotlib inline import matplotlib.pyplot as plt plt.rcParams.update({'font.size': 18}) import sys import logging logging.basicConfig(level=logging.INFO, stream=sys.stdout) np.random.seed(43) def ar_1_process(n_samples, c, phi, eps): ''' Generate a correlated random sequence with the AR(1) process. Parameters ---------- n_samples: :obj:`int` Sample size. c: :obj:`float` Constant term. phi: :obj:`float` Correlation magnitude. eps: :obj:`float` Shock magnitude. ''' ys = np.zeros(n_samples) if abs(phi) >= 1: raise ValueError("abs(phi) must be smaller than 1.") # draw initial value from normal distribution with known mean and variance ys[0] = np.random.normal(loc=c / (1 - phi), scale=np.sqrt(eps**2 / (1 - phi**2))) for i in range(1, n_samples): ys[i] = c + phi * ys[i - 1] + np.random.normal(loc=0., scale=eps) return ys # generate simulation data using the AR(1) process logging.info("Generating data sets for the tutorial ...") N_SAMPLES = 100000 C_1 = 2.0 PHI_1 = 0.85 EPS_1 = 2.0 time_series_1 = ar_1_process(N_SAMPLES, C_1, PHI_1, EPS_1) C_2 = 0.05 PHI_2 = 0.999 EPS_2 = 1.0 time_series_2 = ar_1_process(N_SAMPLES, C_2, PHI_2, EPS_2) logging.info("Done") fig = plt.figure(figsize=(10, 6)) plt.title("The first 1000 samples of both time series") plt.plot(time_series_1[0:1000], label="time series 1") plt.plot(time_series_2[0:1000], label="time series 2") plt.xlabel("$i$") plt.ylabel("$X_i$") plt.legend() plt.show() """ Explanation: Tutorial: Error Estimation - Part 2 (Autocorrelation Analysis) Table of contents Data generation Introduction Computing the auto-covariance function Autocorrelation time References Data generation This first code cell will provide us with the same two data sets as in the previous part of this tutorial. We will use them to get familiar with the autocorrelation analysis method of error estimation. End of explanation """ # Numpy solution time_series_1_centered = time_series_1 - np.average(time_series_1) autocov = np.empty(1000) for j in range(1000): autocov[j] = np.dot(time_series_1_centered[:N_SAMPLES - j], time_series_1_centered[j:]) autocov /= N_SAMPLES fig = plt.figure(figsize=(10, 6)) plt.gca().axhline(0, color="gray", linewidth=1) plt.plot(autocov) plt.xlabel("lag time $j$") plt.ylabel("$\hat{R}^{XX}_j$") plt.show() """ Explanation: Introduction In the first part of the error analysis tutorial we have introduced the binning analysis, an easy and common tool for error estimation. However, we have seen that it failed to deliver an estimate for our second data set. In this tutorial, we will get to know a different method: the autocorrelation analysis, sometimes also called auto covariance analysis. It not only delivers an estimate for the standard error of the mean (SEM), but also information on the correlations and the optimal sampling rate. Before we start computing anything, we will give a brief overview over the relevant quantities and how they relate to each other. This outlines how one would go about computing these quantities. The end goal of this process is to define an estimator for the standard error of the mean $\sigma_\overline{X}$. And if the data allows for it, it can be calculated. If it fails, autocorrelation analysis provides more insight into the causes of the failure than the binning analysis from the first part of this tutorial. Albeit being more involved, it can provide a valuable tool for systems with difficult statistics. 
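Because the AR(1) parameters are known, the exact mean, standard deviation and exponential autocorrelation time of both series can be written down directly (the same formulas are already used above to draw the initial value). They serve as convenient reference values for the estimators computed below; a short sketch:

```python
# exact AR(1) properties: mean = c/(1-phi), variance = eps^2/(1-phi^2),
# tau_exp = -1/ln(phi)
for name, (c, phi, eps) in (("time series 1", (C_1, PHI_1, EPS_1)),
                            ("time series 2", (C_2, PHI_2, EPS_2))):
    print(f"{name}: mean = {c / (1 - phi):.2f}, "
          f"std = {np.sqrt(eps**2 / (1 - phi**2)):.2f}, "
          f"tau_exp = {-1. / np.log(phi):.1f} sampling intervals")
```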
Let us begin the theory by defining the auto-covariance function $R^{XX}(\tau)$ of an observable $X$, at lag time $\tau$: $ \begin{align} R^{XX}(\tau) &\equiv \langle (X(t)-\langle X \rangle)(X(t+\tau)-\langle X \rangle) \rangle \ &= \langle X(t) X(t+\tau) \rangle - \langle X \rangle^2, \tag{1} \end{align} $ where $\langle \dots \rangle$ denotes the ensemble average of the expression inside the angled brackets — e.g. $\langle X \rangle$ is the true mean value of the observable $X$. In the previous part we have established an understanding of correlations as being the "similarity" of successive samples. This is an intuitive but inaccurate understanding. The auto-covariance function provides a means to measure and quantify correlation. Computing the auto-covariance for $\tau=0$ yields the variance $\sigma=\langle X^2 \rangle - \langle X \rangle^2$. Normalizing the auto-covariance function by the variance yields the autocorrelation function (ACF) $ \begin{align} A^{XX}(\tau) = \frac{R^{XX}(\tau)}{R^{XX}(0)} = \frac{\langle X(t) X(t+\tau) \rangle - \langle X \rangle^2}{\langle X^2 \rangle - \langle X \rangle^2}. \tag{2} \end{align} $ The ACF can be used to estimate the correlation time $\tau_X$. Often, this can be simply done by fitting an exponential function to $A^{XX}$, from which we extract $\tau_{X, \mathrm{exp}}$ as the inverse decay rate. However, the ACF doesn't necessarily assume the shape of an exponential. That is when another quantity, called the integrated autocorrelation time $ \begin{align} \tau_{X, \mathrm{int}} \equiv \int_0^\infty A^{XX}(\tau) \mathrm{d}\tau \tag{3} \end{align} $ comes into play. Those two correlation times $\tau_{X, \mathrm{int}}$ and $\tau_{X, \mathrm{exp}}$ are identical for exponential ACFs, but if the ACF isn't exponential, $\tau_{X, \mathrm{int}}$ is the only meaningful quantity. It is related to the effective number of samples $ \begin{align} N_\mathrm{eff} = \frac{N}{2 \tau_{X, \mathrm{int}}} \tag{4} \end{align} $ and also to the standard error of the mean (SEM) $ \begin{align} \sigma_\overline{X} = \sqrt{\frac{2 \sigma_X^2 \tau_{X, \mathrm{int}}}{N}} = \sqrt{\frac{\sigma_X^2}{N_\mathrm{eff}}}. \tag{5} \end{align} $ where $\sigma_X^2 = \langle X^2 \rangle-\langle X \rangle ^2$ is the variance of the observable $X$. Computing the auto-covariance function Equations (1) and (2) involve an infinite, continuous time series $X(t)$. In the simulation world however, we work with finite, discrete time series. These limitations dictate how we can estimate the true (unknown) autocorrelation function. For a finite, time-discrete set of samples $X_i$, a commonly used estimator is the following expression $ \begin{align} \hat{R}^{XX}j = \frac{1}{N} \sum^{N-|j|}{i=1}(X_i-\overline{X})(X_{i+|j|}-\overline{X}), \tag{6} \end{align} $ where $N$ is the total number of samples, and $\overline{X}=\frac{1}{N}\sum_{i=1}^N X_i$ is the average of all samples. This estimates the auto-covariance function at lag time $\tau=j\Delta t$ where $\Delta t$ is the time separation between samples. Before we continue, we want to notify the reader about a few subtleties regarding this estimator: * Ideally, we would use $\langle X \rangle$ instead of $\overline{X}$, since the latter is only an estimate of the former. In most cases we don't know $\langle X \rangle$, thus we introduce a small unknown bias by using the estimated mean $\overline{X}$ instead. * Actually, the sum does not contain $N$ terms, but $N-|j|$ terms. Consequently, we should divide the whole sum by $N-|j|$ and not by $N$. 
In fact, this approach yields a different estimator to the auto-covariance function (the so-called unbiased estimator). However, for large $N$ and small $j$, both estimators yield similar results. This is why the simpler $N$ is commonly used anyway. Exercise Compute the auto-covariance function of the data in time_series_1 using the estimator in equation (6) and store it into a numpy array called autocov. Compute it for all $j$ from 0 up to 999. Plot it against $j$. ```python naive Python solution autocov = np.zeros(300) avg = np.average(time_series_1) for j in range(300): temp = 0. for i in range(N_SAMPLES - j): temp += (time_series_1[i] - avg) * (time_series_1[i + j] - avg) autocov[j] = temp / N_SAMPLES fig = plt.figure(figsize=(10, 6)) plt.plot(autocov) plt.xlabel("lag time $j$") plt.ylabel("$\hat{R}^{XX}_j$") plt.show() ``` Depending on your implementation, this computation might have taken a significant amount of time (up to a couple tens of seconds). When doing a lot of these computations, using highly optimized routines for numerics can save a lot of time. The following example shows how to utilize the common Numpy package to do the job quicker. End of explanation """ from scipy.optimize import curve_fit def exp_fnc(x, a, b): return a * np.exp(-x / b) N_MAX = 1000 j = np.arange(1, N_MAX) j_log = np.logspace(0, 3, 100) popt, pcov = curve_fit(exp_fnc, j, autocov[1:N_MAX], p0=[15, 10]) # compute analytical ACF of AR(1) process AN_SIGMA_1 = np.sqrt(EPS_1 ** 2 / (1 - PHI_1 ** 2)) AN_TAU_EXP_1 = -1 / np.log(PHI_1) an_acf_1 = AN_SIGMA_1**2 * np.exp(-j / AN_TAU_EXP_1) fig = plt.figure(figsize=(10, 6)) plt.plot(j, autocov[1:N_MAX], "x", label="numerical ACF") plt.plot(j, an_acf_1, "-.", linewidth=3, label="analytical ACF") plt.plot(j_log, exp_fnc(j_log, popt[0], popt[1]), label="exponential fit") plt.xlim((1, N_MAX)) plt.xscale("log") plt.xlabel("lag time $j$") plt.ylabel("$\hat{R}^{XX}_j$") plt.legend() plt.show() print(f"Exponential autocorrelation time: {popt[1]:.2f} sampling intervals") """ Explanation: We can see that the auto-covariance function starts at a high value and decreases quickly into a long noisy tail which fluctuates around zero. The high values at short lag times indicate that there are strong correlations at short time scales, as expected. However, even though the tail looks uninteresting, it can bear important information about the statistics of your data. Small systematic deviations from 0 in the tail can be a hint that long-term correlations exist in your system. On the other hand, if there is no sign of a systematic deviation from 0 in the tail, this usually means that the correlation is decaying well within the simulation time, and that the statistics are good enough to estimate an error. In the above example, the correlation quickly decays to zero. Despite the noise in the tail, the statistics seem very reasonable. Autocorrelation time Continuing our example, we can zoom into the first part of the auto-covariance function (using a log scale). We see that it indeed does have similarities with an exponential decay curve. In general, it isn't an exponential, but often can be approximated using one. If it matches reasonably well, the inverted prefactor in the exponential can be directly used as the correlation time, which is a measure for how many sampling intervals it takes for correlations to decay. Execute the following code cell for an illustration. 
End of explanation """ # compute the ACF acf = autocov / autocov[0] # integrate the ACF (suffix _v for vectors) j_max_v = np.arange(1000) tau_int_v = np.zeros(1000) for j_max in j_max_v: tau_int_v[j_max] = 0.5 + np.sum(acf[1:j_max + 1]) # plot fig = plt.figure(figsize=(10, 6)) plt.plot(j_max_v[1:], tau_int_v[1:], label="numerical summing") plt.plot(j_max_v[(1, -1),], np.repeat(AN_TAU_EXP_1, 2), "-.", label="analytical") plt.xscale("log") plt.xlabel(r"sum length $j_\mathrm{max}$") plt.ylabel(r"$\hat{\tau}_{X, \mathrm{int}}$") plt.legend() plt.show() """ Explanation: Since the auto-covariance function is very well matched with an exponential, this analysis already gives us a reasonable estimate of the autocorrelation time. Here we have the luxury to have an analytical ACF at hand which describes the statistics of the simple AR(1) process, which generated our simulation data. It is in fact exponential and agrees very well with the numerical ACF. In practice, however, you will neither know an analytical ACF, nor know if the ACF is exponential, at all. In many systems, the ACF is more or less exponential, but this is not necessarily the case. For the sake of completeness, we also want to compute the integrated correlation time. This technique must be applied when the ACF is not exponential. For that purpose, we first need to normalize the auto-covariance function in order to get the autocorrelation function (as opposed to auto-covariance function), and then integrate over it. The integration in equation (3) is again approximated as a discrete sum over the first $j_\mathrm{max}$ values of the ACF (except $\hat{A}^{XX}_0$, which is always 1): $ \begin{align} \hat{\tau}{X, \mathrm{int}} = \frac{1}{2} + \sum{j=1}^{j_\mathrm{max}} \hat{A}^{XX}_j \tag{7} \end{align} $ where $\hat{A}^{XX}j = \hat{R}^{XX}_j / \hat{R}^{XX}_0$ is the estimated ACF. The sum is evaluated up to a maximum number of terms $j\mathrm{max}$. This maximum number of terms is a crucial parameter. In the following code cell, $\hat{\tau}{X, \mathrm{int}}$ is plotted over $j\mathrm{max}$. End of explanation """ C = 5.0 # determine j_max j_max = 0 while j_max < C * tau_int_v[j_max]: j_max += 1 # plot fig = plt.figure(figsize=(10, 6)) plt.plot(j_max_v[1:], C * tau_int_v[1:]) plt.plot(j_max_v[1:], j_max_v[1:]) plt.plot([j_max], [C * tau_int_v[j_max]], "ro") plt.xscale("log") plt.ylim((0, 50)) plt.xlabel(r"sum length $j_\mathrm{max}$") plt.ylabel(r"$C \times \hat{\tau}_{X, \mathrm{int}}$") plt.show() print(f"j_max = {j_max}") """ Explanation: In this plot, we have the analytical solution at hand, which is a luxury not present in real applications. For the analysis, we therefore need to act as if there was no analytic solution: We see that the integrated autocorrelation time seems to quickly reach a plateau at a $j_\mathrm{max}$ of around 20. Further summation over the noisy tail of the ACF results in a random-walky behaviour. And for even larger $j_\mathrm{max}$, the small unknown bias of the ACF starts to accumulate, which is clearly unwanted. Thus, we have to find a good point to cut off the sum. There are several ways to determine a reasonable value for $j_\mathrm{max}$. Here, we demonstrate the one by A. Sokal <a href='#[1]'>[1]</a>, who states that it performs well if there are at least 1000 samples in the time series. 
We take the smallest $j_\mathrm{max}$, for which the following inequality holds: $ j_\mathrm{max} \geq C \times \hat{\tau}{X, \mathrm{int}}(j\mathrm{max}) \tag{8} $ where $C$ is a constant of about 5, or higher if convergence of $\hat{\tau}{X, \mathrm{int}}$ is slower than an exponential (up to $C=10$). In the following code cell, we plot the left side against the right side, and determine $j\mathrm{max}$. End of explanation """ tau_int = tau_int_v[j_max] print(f"Integrated autocorrelation time: {tau_int:.2f} time steps\n") N_eff = N_SAMPLES / (2 * tau_int) print(f"Original number of samples: {N_SAMPLES}") print(f"Effective number of samples: {N_eff:.1f}") print(f"Ratio: {N_eff / N_SAMPLES:.3f}\n") sem = np.sqrt(autocov[0] / N_eff) print(f"Standard error of the mean: {sem:.4f}") """ Explanation: Using this value of $j_\mathrm{max}$, we can calculate the integrated autocorrelation time $\hat{\tau}_{X, \mathrm{int}}$ and estimate the SEM with equation (5). End of explanation """
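For perspective, it is instructive to compare this with the naive SEM obtained by ignoring correlations altogether; a short sketch using the quantities computed above:

```python
# naive SEM, pretending all N_SAMPLES values were independent
sem_naive = np.sqrt(autocov[0] / N_SAMPLES)
print(f"Naive SEM (uncorrelated assumption): {sem_naive:.4f}")
print(f"Autocorrelation-corrected SEM:       {sem:.4f}")
print(f"Ratio = sqrt(2 * tau_int) = {np.sqrt(2 * tau_int):.2f}")
```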
ctralie/TUMTopoTimeSeries2016
3DShapes.ipynb
apache-2.0
import numpy as np %matplotlib notebook import scipy.io as sio from scipy import sparse import matplotlib.pyplot as plt from mpl_toolkits.mplot3d import Axes3D import sys sys.path.append("pyhks") from HKS import * from GeomUtils import * from ripser import ripser from persim import plot_diagrams, wasserstein from sklearn.manifold import MDS from sklearn.decomposition import PCA import warnings warnings.filterwarnings('ignore') """ Explanation: 3D Shape Classification with Sublevelset Filtrations In this module, we will explore how TDA can be used to classify 3D shapes. We will begine by clustering triangle meshes of humans in different poses by pose. We will then explore how to cluster a collection of shapes which are undergoing nonrigid transformations, or "articulations." As always, let's first import all of the necessary libraries. End of explanation """ def do0DSublevelsetFiltrationMesh(VPos, ITris, fn): x = fn(VPos, ITris) N = VPos.shape[0] # Add edges between adjacent points in the mesh I, J = getEdges(VPos, ITris) V = np.maximum(x[I], x[J]) # Add vertex birth times along the diagonal of the distance matrix I = np.concatenate((I, np.arange(N))) J = np.concatenate((J, np.arange(N))) V = np.concatenate((V, x)) #Create the sparse distance matrix D = sparse.coo_matrix((V, (I, J)), shape=(N, N)).tocsr() return ripser(D, distance_matrix=True, maxdim=0)['dgms'][0] """ Explanation: Now, let's include some code that performs a sublevelset filtration by some scalar function on the vertices of a triangle mesh. End of explanation """ def plotPCfn(VPos, fn, cmap = 'afmhot'): """ plot an XY slice of a mesh with the scalar function used in a sublevelset filtration """ x = fn - np.min(fn) x = x/np.max(x) c = plt.get_cmap(cmap) C = c(np.array(np.round(x*255.0), dtype=np.int64)) plt.scatter(VPos[:, 0], VPos[:, 1], 10, c=C) plt.axis('equal') ax = plt.gca() ax.set_facecolor((0.3, 0.3, 0.3)) """ Explanation: Let's also define a function which will plot a particular scalar function on XY and XZ slices of the mesh End of explanation """ subjectNum = 1 poseNum = 9 i = subjectNum*10 + poseNum fn = lambda VPos, ITris: VPos[:, 1] #Return the y coordinate as a function (VPos, _, ITris) = loadOffFile("shapes/tr_reg_%.03d.off"%i) x = fn(VPos, ITris) I = do0DSublevelsetFiltrationMesh(VPos, ITris, fn) plt.figure(figsize=(10, 4)) plt.subplot(131) plotPCfn(VPos, x, cmap = 'afmhot') plt.title("Subject %i Pose %i"%(subjectNum, poseNum)) plt.subplot(132) plotPCfn(VPos[:, [2, 1, 0]], x, cmap = 'afmhot') plt.subplot(133) plot_diagrams([I]) plt.show() """ Explanation: Experiment 1: Clustering of Human Poses In the first experiment, we will load surfaces of 10 different people, each performing one of 10 different poses, for 100 total. To classify by pose, we will use the height function as our sublevelset function. Let's load a few examples to see what they look like. The code below loads in all of the triangle meshes in the "shapes" directory Questions After looking at some examples, why would filtering by height be a good idea for picking up on these poses? End of explanation """ meshes = [] for poseNum in range(10): for subjectNum in range(10): i = subjectNum*10 + poseNum VPos, _, ITris = loadOffFile("shapes/tr_reg_%.03d.off"%i) meshes.append((VPos, ITris)) """ Explanation: Now let's load in all of the meshes and sort them so that contiguous groups of 10 meshes are the same pose (by default they are sorted by subject). 
End of explanation """ dgms = [] N = len(meshes) print("Computing persistence diagrams...") for i, (VPos, ITris) in enumerate(meshes): x = fn(VPos, ITris) I = do0DSublevelsetFiltrationMesh(VPos, ITris, fn) I = I[np.isfinite(I[:, 1]), :] dgms.append(I) # Compute Wasserstein distances in order of pose DWass = np.zeros((N, N)) for i in range(N): if i%10 == 0: print("Comparing pose %i..."%(i/10)) for j in range(i+1, N): DWass[i, j] = wasserstein(dgms[i], dgms[j]) DWass = DWass + DWass.T # Re-sort by class # Now do MDS and PCA, respectively mds = MDS(n_components=3, dissimilarity='precomputed') mds.fit_transform(DWass) XWass = mds.embedding_ plt.figure(figsize=(8, 4)) plt.subplot(121) plt.imshow(DWass, cmap = 'afmhot', interpolation = 'none') plt.title("Wasserstein") ax1 = plt.gca() ax2 = plt.subplot(122, projection='3d') ax2.set_title("Wasserstein By Pose") for i in range(10): X = XWass[i*10:(i+1)*10, :] ax2.scatter(X[:, 0], X[:, 1], X[:, 2]) Is = (i*10 + np.arange(10)).tolist() + (-2*np.ones(10)).tolist() Js = (-2*np.ones(10)).tolist() + (i*10 + np.arange(10)).tolist() ax1.scatter(Is, Js, 10) plt.show() """ Explanation: Finally, we compute the 0D sublevelset filtration on all of the shapes, followed by a Wasserstein distance computation between all pairs to examine how different shapes cluster together. We also display the result of 3D multidimensional scaling using the matrix of all pairs of Wasserstein distances. Questions Look at the pairwise Wasserstein distances and the corresponding 3D MDS plot. Which pose classes are similar to each other by our metric? Can you go back above and pull out example poses from different subjects that show why this might be the case? End of explanation """ classNum = 0 articulationNum = 1 classes = ['ant', 'hand', 'human', 'octopus', 'pliers', 'snake', 'shark', 'bear', 'chair'] i = classNum*10 + articulationNum fn = lambda VPos, ITris: -getHKS(VPos, ITris, 20, t = 30) (VPos, _, ITris) = loadOffFile("shapes_nonrigid/%.3d.off"%i) x = fn(VPos, ITris) I = do0DSublevelsetFiltrationMesh(VPos, ITris, fn) plt.figure(figsize=(8, 8)) plt.subplot(221) plotPCfn(VPos, x, cmap = 'afmhot') plt.title("Class %i Articulation %i"%(classNum, articulationNum)) plt.subplot(222) plotPCfn(VPos[:, [2, 1, 0]], x, cmap = 'afmhot') plt.subplot(223) plotPCfn(VPos[:, [0, 2, 1]], x, cmap = 'afmhot') plt.subplot(224) plot_diagrams([I]) plt.show() """ Explanation: Experiment 2: Clustering of Nonrigid Shapes In this experiment, we will use a different sublevelset which is blind to <i>intrinsic isometries</i>. This can be used to cluster shapes in a way which is invariant to articulated poses, which is complementary to the previous approach. As our scalar function will use the "heat kernel signature," which is a numerically stable way to compute curvature at multiple scales. We will actually negate this signature, since we care more about local maxes than local mins in the scalar function. So sublevelsets will start at regions of high curvature. Let's explore a few examples below in a dataset which is a subset of the McGill 3D Shape Benchmark with 10 shapes in 10 different articulations. In particular, we will load all of the shapes from the "shapes_nonrigid" folder within the TDALabs folder. Run the code and change the "classNum" and "articulationNum" variables to explore different shapes Questions Does it seem like the persistence diagrams stay mostly the same within each class? If so, why? 
End of explanation """ N = 90 meshesNonrigid = [] for i in range(N): (VPos, _, ITris) = loadOffFile("shapes_nonrigid/%.3d.off"%i) meshesNonrigid.append((VPos, ITris)) dgmsNonrigid = [] N = len(meshesNonrigid) print("Computing persistence diagrams...") for i, (VPos, ITris) in enumerate(meshesNonrigid): if i%10 == 0: print("Finished first %i meshes"%i) x = fn(VPos, ITris) I = do0DSublevelsetFiltrationMesh(VPos, ITris, lambda VPos, ITris: -getHKS(VPos, ITris, 20, t = 30)) I = I[np.isfinite(I[:, 1]), :] dgmsNonrigid.append(I) # Compute Wasserstein distances print("Computing Wasserstein distances...") DWassNonrigid = np.zeros((N, N)) for i in range(N): if i%10 == 0: print("Finished first %i distances"%i) for j in range(i+1, N): DWassNonrigid[i, j] = wasserstein(dgmsNonrigid[i], dgmsNonrigid[j]) DWassNonrigid = DWassNonrigid + DWassNonrigid.T # Now do MDS and PCA, respectively mds = MDS(n_components=3, dissimilarity='precomputed') mds.fit_transform(DWassNonrigid) XWassNonrigid = mds.embedding_ """ Explanation: Let's now load in a few of the nonrigid meshes and compute the sublevelset function of their heat kernel signatures End of explanation """ plt.figure(figsize=(8, 4)) plt.subplot(121) plt.imshow(DWassNonrigid, cmap = 'afmhot', interpolation = 'none') ax1 = plt.gca() plt.xticks(5+10*np.arange(10), classes, rotation='vertical') plt.yticks(5+10*np.arange(10), classes) plt.title("Wasserstein Distances") ax2 = plt.subplot(122, projection='3d') ax2.set_title("3D MDS") for i in range(9): X = XWassNonrigid[i*10:(i+1)*10, :] ax2.scatter(X[:, 0], X[:, 1], X[:, 2]) Is = (i*10 + np.arange(10)).tolist() + (-2*np.ones(10)).tolist() Js = (91*np.ones(10)).tolist() + (i*10 + np.arange(10)).tolist() ax1.scatter(Is, Js, 10) plt.show() """ Explanation: Finally, we plot the results End of explanation """
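As a self-contained illustration (an addition, not part of the original notebook), the same lower-star construction used in do0DSublevelsetFiltrationMesh can be applied to a plain 1D signal, which makes it easy to experiment without any .off mesh files. The signal below is made up:

```python
import numpy as np
from scipy import sparse
from ripser import ripser

# A noisy cosine as a stand-in scalar function; any 1D array works here.
x = np.cos(np.linspace(0, 4 * np.pi, 200)) + 0.1 * np.random.randn(200)
N = len(x)
# Edges of the path graph connect consecutive samples; an edge is born when
# the later of its two endpoints enters the sublevelset.
I = np.arange(N - 1)
J = np.arange(1, N)
V = np.maximum(x[I], x[J])
# Vertex birth times go on the diagonal, exactly as in the mesh version above.
I = np.concatenate((I, np.arange(N)))
J = np.concatenate((J, np.arange(N)))
V = np.concatenate((V, x))
D = sparse.coo_matrix((V, (I, J)), shape=(N, N)).tocsr()
dgm0 = ripser(D, distance_matrix=True, maxdim=0)['dgms'][0]
print(dgm0)
```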
UWSEDS/short-course
LectureNotes/ExceptionsDebugging.ipynb
mit
X = [1, 2, 3) y = 4x + 3 """ Explanation: When Things Go Wrong: Errors, Exceptions, and Debugging Today we'll cover perhaps one of the most important aspects of using Python: dealing with errors and bugs in code. Three Classes of Errors Types of bugs/errors in code, from the easiest to the most difficult to diagnose: Syntax Errors: Errors where the code is not valid Python (generally easy to fix) Runtime Errors: Errors where syntactically valid code fails to execute (sometimes easy to fix) Semantic Errors: Errors in logic (often very difficult to fix) Syntax Errors Syntax errors are when you write code which is not valid Python. For example: End of explanation """ a = 4 something ==== is wrong print(a) """ Explanation: Note that if your code contains even a single syntax error, none of it will run: End of explanation """ print(Q) x = 1 + 'abc' X = 1 / 0 import numpy as np np.add(1, 2, 3, 4) x = [1, 2, 3] print(x[100]) """ Explanation: Even though the syntax error appears below the (valid) variable definition, the valid code is not executed. Runtime Errors Runtime errors occur when the code is valid python code, but are errors within the context of the program execution. For example: End of explanation """ spam = "my all-time favorite" eggs = 1 / 0 print(spam) """ Explanation: Unlike Syntax errors, RunTime errors occur during code execution, which means that valid code occuring before the runtime error will execute: End of explanation """ from math import sqrt def approx_pi(nterms=100): kvals = np.arange(nterms) return sqrt(12) * np.sum([-3.0 ** -k / (2 * k + 1) for k in kvals]) """ Explanation: Semantic Errors Semantic errors are perhaps the most insidious errors, and are by far the ones that will take most of your time. Semantic errors occur when the code is syntactically correct, but produces the wrong result. By way of example, imagine you want to write a simple script to approximate the value of $\pi$ according to the following formula: $$ \pi = \sqrt{12} \sum_{k = 0}^{\infty} \frac{(-3)^{-k}}{2k + 1} $$ You might write a function something like this, using numpy's vectorized syntax: End of explanation """ approx_pi(100) """ Explanation: Looks OK, yes? Let's try it out: End of explanation """ approx_pi(1000) """ Explanation: Huh. That doesn't look like $\pi$. Maybe we need more terms? End of explanation """ try: print("this block gets executed first") except: print("this block gets executed if there's an error") try: print("this block gets executed first") x = 1 / 0 # ZeroDivisionError print("we never get here") except: print("this block gets executed if there's an error") """ Explanation: Nope... it looks like the algorithm simply gives the wrong result. This is a classic example of a semantic error. Question: can you spot the problem? Runtime Errors and Exception Handling Now we'll talk about how to handle RunTime errors (we skip Syntax Errors because they're pretty self-explanatory). Runtime errors can be handled through "exception catching" using try...except statements. Here's a basic example: End of explanation """ def safe_divide(a, b): try: return a / b except: print("oops, dividing by zero. Returning None.") return None print(safe_divide(15, 3)) print(safe_divide(1, 0)) """ Explanation: Notice that the first block executes up until the point of the Runtime error. Once the error is hit, the except block is executed. One important note: the above clause catches any and all exceptions. It is not generally a good idea to catch-all. 
It is better to name the precise exception you expect. To see why, let's first look at a function that keeps the catch-all clause: End of explanation """ def safe_divide(a, b): try: return a / b except: print("oops, dividing by zero. Returning None.") return None print(safe_divide(15, 3)) print(safe_divide(1, 0)) """ Explanation: But there's a problem here: this is a catch-all exception, and will sometimes give us misleading information. For example: End of explanation """
Mainly, it prevents the code within the else block from being caught by the try block. Accidentally catching an exception you don't mean to catch can lead to confusing results. The last statement you might use is the finally statement, which looks like this: End of explanation """ try: print("do something") except: print("this only happens if it fails") else: print("this only happens if it succeeds") print("this happens no matter what.") """ Explanation: finally is generally used for some sort of cleanup (closing a file, etc.) It might seem a bit redundant, though. Why not write the following? End of explanation """ def divide(x, y): try: result = x / y except ZeroDivisionError: print("division by zero!") return None else: print("result is", result) return result finally: print("some sort of cleanup") divide(15, 3) divide(15, 0) """ Explanation: The main difference is when the clause is used within a function: End of explanation """ def entropy(p): p = np.asarray(p) # convert p to array if necessary items = p * np.log(p) return -np.sum(items) """ Explanation: Note that the finally clause is executed no matter what, even if the return statement has already executed! This makes it useful for cleanup tasks, such as closing an open file, restoring a state, or something along those lines. Handling Semantic Errors: Debugging Here is the most difficult piece of this lecture: handling semantic errors. This is the situation where your program runs, but doesn't produce the correct result. These errors are commonly known as bugs, and the process of correcting the bugs is debugging. There are three main methods commonly used for debugging Python code. In order of increasing sophistication, they are: Inserting print statements Injecting an IPython interpreter Using a line-by-line debugger like pdb The easiest method: print statements Say we're trying to compute the entropy of a set of probabilities. The form of the equation is $$ H = -\sum_i p_i \log(p_i) $$ We can write the function like this: End of explanation """ p = np.arange(5.) p /= p.sum() entropy(p) """ Explanation: Say these are our probabilities: End of explanation """ def entropy(p): p = np.asarray(p) # convert p to array if necessary print(p) items = p * np.log(p) print(items) return -np.sum(items) entropy(p) """ Explanation: We get nan, which stands for "Not a Number". What's going on here? Often the first thing to try is to simply print things and see what's going on. Within the file, you can add some print statements in key places: End of explanation """ %%file test_script.py import numpy as np def entropy(p): p = np.asarray(p) # convert p to array if necessary items = p * np.log(p) import IPython; IPython.embed() return -np.sum(items) p = np.arange(5.) p /= p.sum() entropy(p) """ Explanation: By printing some of the intermediate items, we see the problem: 0 * np.log(0) is resulting in a NaN. Though mathematically it's true that $\lim_{x\to 0} [x\log(x)] = 0$, the fact that we're performing the computation numerically means that we don't obtain this result. Often, inserting a few print statements can be enough to figure out what's going on. Embedding an IPython instance You can go a step further by actually embedding an IPython instance in your code. 
This doesn't work from within the notebook, so we'll create a file and run it from the command-line End of explanation """ def entropy(p): import pdb; pdb.set_trace() p = np.asarray(p) # convert p to array if necessary items = p * np.log(p) return -np.sum(items) entropy(p) """ Explanation: Now open a terminal and run this. You'll see that an IPython interpreter opens, and from there you can print p, print items, and do any manipulation you feel like doing. This can also be a nice way to debug a script. Using a Debugger Python comes with a built-in debugger called pdb. It allows you to step line-by-line through a computation and examine what's happening at each step. Note that this should probably be your last resort in tracing down a bug. I've probably used it a dozen times or so in five years of coding. But it can be a useful tool to have in your toolbelt. You can use the debugger by inserting the line python import pdb; pdb.set_trace() within your script. Let's try this out: End of explanation """ %%file numbers.dat 123 456 789 """ Explanation: This can be a more convenient way to debug programs and step through the actual execution. When you run this, you'll see the pdb prompt where you can enter one of several commands. If you type h for "help", it will list the possible commands: ``` (Pdb) h Documented commands (type help <topic>): ======================================== EOF bt cont enable jump pp run unt a c continue exit l q s until alias cl d h list quit step up args clear debug help n r tbreak w b commands disable ignore next restart u whatis break condition down j p return unalias where Miscellaneous help topics: exec pdb Undocumented commands: retval rv ``` Type h collowed by a command to see the documentation of that command: (Pdb) h n n(ext) Continue execution until the next line in the current function is reached or it returns. The most useful are probably the following: q(uit): quit the debugger and the program. c(ontinute): quit the debugger, continue in the program. n(ext): go to the next step of the program. list: show the current location in the file. &lt;enter&gt;: repeat the previous command. p(rint): print variables. s(tep into): step into a subroutine. r(eturn out): return out of a subroutine. Arbitrary Python code: writing Python code at the (Pdb) will execute it at that point in the program. We'll see more of this in the next section. IPython Debugging IPython also has some magic commands that allow you to debug scripts withing the notebook as soon as you see a failure. For example, imagine we have the following file: End of explanation """ def add_lines(filename): f = open(filename) lines = f.read().split() f.close() result = 0 for line in lines: result += line return result filename = 'numbers.dat' total = add_lines(filename) print(total) """ Explanation: And we want to execute the following function: End of explanation """ %debug """ Explanation: We get a type error. We can immediately open the debugger using IPython's %debug magic function. Remember to type q to quit! End of explanation """
AllenDowney/ThinkBayes2
workshop/workshop02soln.ipynb
mit
from __future__ import print_function, division %matplotlib inline import numpy as np from thinkbayes2 import Suite import thinkplot import warnings warnings.filterwarnings('ignore') """ Explanation: Bayesian Statistics Made Simple Code and exercises from my workshop on Bayesian statistics in Python. Copyright 2018 Allen Downey MIT License: https://opensource.org/licenses/MIT End of explanation """ class Bandit(Suite): def Likelihood(self, data, hypo): """ hypo is the prob of win (0-100) data is a string, either 'W' or 'L' """ x = hypo / 100 if data == 'W': return x else: return 1-x """ Explanation: The likelihood function Here's a definition for Bandit, which extends Suite and defines a likelihood function that computes the probability of the data (win or lose) for a given value of x (the probability of win). Note that hypo is in the range 0 to 100. End of explanation """ bandit = Bandit(range(101)) thinkplot.Pdf(bandit) thinkplot.Config(xlabel='x', ylabel='Probability') """ Explanation: We'll start with a uniform distribution from 0 to 100. End of explanation """ bandit.Update('L') thinkplot.Pdf(bandit) thinkplot.Config(xlabel='x', ylabel='Probability', legend=False) """ Explanation: Now we can update with a single loss: End of explanation """ bandit.Update('L') thinkplot.Pdf(bandit) thinkplot.Config(xlabel='x', ylabel='Probability', legend=False) """ Explanation: Another loss: End of explanation """ bandit.Update('W') thinkplot.Pdf(bandit) thinkplot.Config(xlabel='x', ylabel='Probability', legend=False) """ Explanation: And a win: End of explanation """ bandit = Bandit(range(101)) for outcome in 'WLLLLLLLLL': bandit.Update(outcome) thinkplot.Pdf(bandit) thinkplot.Config(xlabel='x', ylabel='Probability', legend=False) """ Explanation: Starting over, here's what it looks like after 1 win and 9 losses. End of explanation """ bandit.Mean() """ Explanation: The posterior mean is about 17% End of explanation """ bandit.MAP() """ Explanation: The most likely value is the observed proportion 1/10 End of explanation """ bandit.CredibleInterval(90) """ Explanation: The posterior credible interval has a 90% chance of containing the true value (provided that the prior distribution truly represents our background knowledge). End of explanation """ actual_probs = [0.10, 0.20, 0.30, 0.40] """ Explanation: Multiple bandits Now suppose we have several bandits and we want to decide which one to play. For this example, we have 4 machines with these probabilities: End of explanation """ from random import random from collections import Counter counter = Counter() def flip(p): return random() < p def play(i): counter[i] += 1 p = actual_probs[i] if flip(p): return 'W' else: return 'L' """ Explanation: The following function simulates playing one machine once. End of explanation """ for i in range(20): result = play(3) print(result, end=' ') """ Explanation: Here's a test, playing machine 3 twenty times: End of explanation """ prior = range(101) beliefs = [Bandit(prior) for i in range(4)] """ Explanation: Now I'll make 4 Bandit objects to represent our beliefs about the 4 machines. 
End of explanation """ options = dict(yticklabels='invisible') def plot(beliefs, **options): thinkplot.preplot(rows=2, cols=2) for i, b in enumerate(beliefs): thinkplot.subplot(i+1) thinkplot.Pdf(b, label=i) thinkplot.Config(**options) plot(beliefs, legend=True) """ Explanation: This function displays the four posterior distributions End of explanation """ def update(beliefs, i, outcome): beliefs[i].Update(outcome) for i in range(4): for _ in range(10): outcome = play(i) update(beliefs, i, outcome) plot(beliefs, legend=True) """ Explanation: Now suppose we play each machine 10 times. This function updates our beliefs about one of the machines based on one outcome. End of explanation """ [belief.Mean() for belief in beliefs] """ Explanation: After playing each machine 10 times, we have some information about their probabilies: End of explanation """ def choose(beliefs): ps = [b.Random() for b in beliefs] return np.argmax(ps) """ Explanation: Bayesian Bandits To get more information, we could play each machine 100 times, but while we are gathering data, we are not making good use of it. The kernel of the Bayesian Bandits algorithm is that is collects and uses data at the same time. In other words, it balances exploration and exploitation. The following function chooses among the machines so that the probability of choosing each machine is proportional to its "probability of superiority". Random chooses a value from the posterior distribution. argmax returns the index of the machine that chose the highest value. End of explanation """ choose(beliefs) """ Explanation: Here's an example. End of explanation """ def choose_play_update(beliefs, verbose=False): i = choose(beliefs) outcome = play(i) update(beliefs, i, outcome) if verbose: print(i, outcome, beliefs[i].Mean()) """ Explanation: Putting it all together, the following function chooses a machine, plays once, and updates beliefs: End of explanation """ counter = Counter() choose_play_update(beliefs, verbose=True) """ Explanation: Here's an example End of explanation """ beliefs = [Bandit(prior) for i in range(4)] """ Explanation: Trying it out Let's start again with a fresh set of machines: End of explanation """ num_plays = 100 for i in range(num_plays): choose_play_update(beliefs) plot(beliefs) """ Explanation: Now we can play a few times and see how beliefs gets updated: End of explanation """ for i, b in enumerate(beliefs): print(b.Mean(), b.CredibleInterval(90)) """ Explanation: We can summarize beliefs by printing the posterior mean and credible interval: End of explanation """ for machine, count in sorted(counter.items()): print(machine, count) """ Explanation: The credible intervals usually contain the true values (10, 20, 30, and 40). The estimates are still rough, especially for the lower-probability machines. But that's a feature, not a bug: the goal is to play the high-probability machines most often. Making the estimates more precise is a means to that end, but not an end itself. Let's see how many times each machine got played. If things go according to play, the machines with higher probabilities should get played more often. End of explanation """
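As an optional extra (not part of the original workshop), the "probability of superiority" that the choose function implicitly exploits can be estimated directly by repeatedly sampling from the posteriors; the helper name below is new:

```python
def prob_superiority(beliefs, iters=1000):
    """Estimate how often each machine yields the largest sampled value."""
    wins = np.zeros(len(beliefs))
    for _ in range(iters):
        ps = [b.Random() for b in beliefs]
        wins[np.argmax(ps)] += 1
    return wins / iters

print(prob_superiority(beliefs))
```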
AllenDowney/ModSimPy
notebooks/chap14.ipynb
mit
# Configure Jupyter so figures appear in the notebook %matplotlib inline # Configure Jupyter to display the assigned value after an assignment %config InteractiveShell.ast_node_interactivity='last_expr_or_assign' # import functions from the modsim.py module from modsim import * """ Explanation: Modeling and Simulation in Python Chapter 14 Copyright 2017 Allen Downey License: Creative Commons Attribution 4.0 International End of explanation """ def make_system(beta, gamma): """Make a system object for the SIR model. beta: contact rate in days gamma: recovery rate in days returns: System object """ init = State(S=89, I=1, R=0) init /= np.sum(init) t0 = 0 t_end = 7 * 14 return System(init=init, t0=t0, t_end=t_end, beta=beta, gamma=gamma) def update_func(state, t, system): """Update the SIR model. state: State (s, i, r) t: time system: System object returns: State (sir) """ s, i, r = state infected = system.beta * i * s recovered = system.gamma * i s -= infected i += infected - recovered r += recovered return State(S=s, I=i, R=r) def run_simulation(system, update_func): """Runs a simulation of the system. system: System object update_func: function that updates state returns: TimeFrame """ init, t0, t_end = system.init, system.t0, system.t_end frame = TimeFrame(columns=init.index) frame.row[t0] = init for t in linrange(t0, t_end): frame.row[t+1] = update_func(frame.row[t], t, system) return frame def calc_total_infected(results): """Fraction of population infected during the simulation. results: DataFrame with columns S, I, R returns: fraction of population """ return get_first_value(results.S) - get_last_value(results.S) def sweep_beta(beta_array, gamma): """Sweep a range of values for beta. beta_array: array of beta values gamma: recovery rate returns: SweepSeries that maps from beta to total infected """ sweep = SweepSeries() for beta in beta_array: system = make_system(beta, gamma) results = run_simulation(system, update_func) sweep[system.beta] = calc_total_infected(results) return sweep def sweep_parameters(beta_array, gamma_array): """Sweep a range of values for beta and gamma. beta_array: array of infection rates gamma_array: array of recovery rates returns: SweepFrame with one row for each beta and one column for each gamma """ frame = SweepFrame(columns=gamma_array) for gamma in gamma_array: frame[gamma] = sweep_beta(beta_array, gamma) return frame """ Explanation: Code from previous chapters End of explanation """ beta_array = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0 , 1.1] gamma_array = [0.2, 0.4, 0.6, 0.8] frame = sweep_parameters(beta_array, gamma_array) frame.head() frame.shape """ Explanation: Contact number Here's the SweepFrame from the previous chapter, with one row for each value of beta and one column for each value of gamma. End of explanation """ for gamma in frame.columns: column = frame[gamma] for beta in column.index: frac_infected = column[beta] print(beta, gamma, frac_infected) """ Explanation: The following loop shows how we can loop through the columns and rows of the SweepFrame. With 11 rows and 4 columns, there are 44 elements. End of explanation """ def plot_sweep_frame(frame): """Plot the values from a SweepFrame. 
For each (beta, gamma), compute the contact number, beta/gamma frame: SweepFrame with one row per beta, one column per gamma """ for gamma in frame.columns: column = frame[gamma] for beta in column.index: frac_infected = column[beta] plot(beta/gamma, frac_infected, 'ro') """ Explanation: Now we can wrap that loop in a function and plot the results. For each element of the SweepFrame, we have beta, gamma, and frac_infected, and we plot beta/gamma on the x-axis and frac_infected on the y-axis. End of explanation """ plot_sweep_frame(frame) decorate(xlabel='Contact number (beta/gamma)', ylabel='Fraction infected') savefig('figs/chap14-fig01.pdf') """ Explanation: Here's what it looks like: End of explanation """ s_inf_array = linspace(0.0001, 0.9999, 101); c_array = log(s_inf_array) / (s_inf_array - 1); """ Explanation: It turns out that the ratio beta/gamma, called the "contact number" is sufficient to predict the total number of infections; we don't have to know beta and gamma separately. We can see that in the previous plot: when we plot the fraction infected versus the contact number, the results fall close to a curve. Analysis In the book we figured out the relationship between $c$ and $s_{\infty}$ analytically. Now we can compute it for a range of values: End of explanation """ frac_infected = 1 - s_inf_array frac_infected_series = Series(frac_infected, index=c_array); """ Explanation: total_infected is the change in $s$ from the beginning to the end. End of explanation """ plot_sweep_frame(frame) plot(frac_infected_series, label='Analysis') decorate(xlabel='Contact number (c)', ylabel='Fraction infected') savefig('figs/chap14-fig02.pdf') """ Explanation: Now we can plot the analytic results and compare them to the simulations. End of explanation """ # Solution goes here # Solution goes here # Solution goes here """ Explanation: The agreement is generally good, except for values of c less than 1. Exercises Exercise: If we didn't know about contact numbers, we might have explored other possibilities, like the difference between beta and gamma, rather than their ratio. Write a version of plot_sweep_frame, called plot_sweep_frame_difference, that plots the fraction infected versus the difference beta-gamma. What do the results look like, and what does that imply? End of explanation """ # Solution goes here # Solution goes here """ Explanation: Exercise: Suppose you run a survey at the end of the semester and find that 26% of students had the Freshman Plague at some point. What is your best estimate of c? Hint: if you print frac_infected_series, you can read off the answer. End of explanation """
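Here is a sketch of one way to attempt the first exercise (an illustration, not the book's official solution): it mirrors plot_sweep_frame but puts the difference beta - gamma on the x-axis. You should find that the points no longer collapse onto a single curve, which suggests that the difference, unlike the ratio, does not determine the fraction infected.

```python
def plot_sweep_frame_difference(frame):
    """Plot fraction infected versus the difference beta - gamma
    for every (beta, gamma) pair in a SweepFrame."""
    for gamma in frame.columns:
        column = frame[gamma]
        for beta in column.index:
            frac_infected = column[beta]
            plot(beta - gamma, frac_infected, 'ro')

plot_sweep_frame_difference(frame)
decorate(xlabel='Difference (beta - gamma)',
         ylabel='Fraction infected')
```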
jepegit/cellpy
dev_utils/easyplot/EasyPlot_Demo.ipynb
mit
from cellpy.utils import easyplot """ Explanation: Easyplot user guide Easyplot is a submodule found in the utils of cellpy. It takes a list of filenames and plots these corresponding to the users input configuration. Please follow the example below to learn how to use it. 1: Import cellpy and easyplot End of explanation """ files = [ # "./data/raw/20160805_test001_45_cc_01.res", # "./data/raw/20160805_test001_45_cc_01_copy.res"# , # "./data/20210430_seam10_01_01_cc_01_Channel_48_Wb_1.xlsx.csv# ", # "./data/20210430_seam10_01_02_cc_01_Channel_49_Wb_1.xlsx.cs# v", # "20210630_seam13_03_02_cc_# 01", # "20210630_seam13_03_03_cc# _01", # "20210630_seam13_04_01_c# c_01 # "20210630_seam13_04_02_# cc_01", # "20210630_seam13_04_03_cc_01", ] """ Explanation: 2: Specify a list of datafiles You can insert different filetypes as long as they are supported by the automatic detection of cellpy. Currently, if you want to use data from an SQL database, you can only plot other data from the same database. In addition you must specify the server credentials. See step below. If the file you want is in an arbin SQL database, insert the testname. End of explanation """ easyplot.help() ezplt = easyplot.EasyPlot( files, None, cyclelife_plot=True, cyclelife_percentage=False, cyclelife_coulombic_efficiency=True, cyclelife_coulombic_efficiency_ylabel="Coulombic efficiency [%]", cyclelife_xlabel="Cycles", cyclelife_ylabel=r"Capacity $\left[\frac{mAh}{g}\right]$", cyclelife_ylabel_percent="Capacity retention [%]", cyclelife_legend_outside=True, # if True, the legend is placed outside the plot galvanostatic_plot=True, galvanostatic_potlim=(0, 1), # min and max limit on potential-axis galvanostatic_caplim=None, galvanostatic_xlabel=r"Capacity $\left[\frac{mAh}{g}\right]$", galvanostatic_ylabel="Cell potential [V]", dqdv_plot=True, dqdv_potlim=None, # min and max limit on potential-axis dqdv_dqlim=None, dqdv_xlabel="Cell potential [V]", dqdv_ylabel=r"dQ/dV $\left[\frac{mAh}{gV}\right]$", specific_cycles=None, # [] exclude_cycles=[1, 2], all_in_one=False, # only_dischg = True, only_chg=False, outpath="./ezplots/deleteme/", figsize=(6, 4), # 6 inches wide, 4 inches tall figres=100, # Dots per inch figtitle=None, # None = original filepath ) """ Explanation: 3: Spawn easyplot object with desired settings All possible settings can be printed by running easyplot.help() End of explanation """ ezplt.set_arbin_sql_credentials("localhost", "sa", "Amund1234", "SQL Server") """ Explanation: 3a: SQL settings If you want to use the Arbin SQL database reader, you must insert the necessary details. This is done by the easyplot function set_arbin_sql_credentials \ easyplot.set_arbin_sql_credentials(&lt;IP Address&gt;, &lt;Username&gt;, &lt;Password&gt;, &lt;SQL driver type&gt;) End of explanation """ ezplt.plot() """ Explanation: 4: Run easyplot! End of explanation """
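Putting the pieces together, a minimal end-to-end run might look like the sketch below. The file path is a placeholder and must point at data cellpy can read, and only options already demonstrated above are used:

```python
from cellpy.utils import easyplot

files = [
    "./data/raw/20160805_test001_45_cc_01.res",  # placeholder path
]
ezplt = easyplot.EasyPlot(
    files,
    None,
    cyclelife_plot=True,
    galvanostatic_plot=False,
    dqdv_plot=False,
    outpath="./ezplots/",
)
ezplt.plot()
```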
IanHawke/maths-with-python
03-loops-control-flow.ipynb
mit
from math import pi def degrees_to_radians(theta_d): """ Convert an angle from degrees to radians. Parameters ---------- theta_d : float The angle in degrees. Returns ------- theta_r : float The angle in radians. """ theta_r = pi / 180.0 * theta_d return theta_r """ Explanation: Loops while loops The program we wrote at the end of the last part performs a series of repetitive operations, but did it by copying and pasting the call to our function and editing the input. As noted above, this is a likely source of errors. Instead we want to write a formula, algorithm or abstraction of our repeated operation, and reproduce that in code. First we reproduce that function: End of explanation """ theta_d = 0.0 while theta_d <= 90.0: print(degrees_to_radians(theta_d)) theta_d = theta_d + 15.0 """ Explanation: We showed above how to use this code to print the angles $(n \pi)/ 12$ for $n = 1, 2, \dots, 6$. We did this by calling the degrees_to_radians function on the angles $15 n$ degrees for $n = 1, 2, \dots, 6$. So this is the formula we want to reproduce in code. To do that we write a loop. Here is a standard way to do it: End of explanation """ steps = 1, 2, 3, 4, 5, 6 for n in steps: print(degrees_to_radians(15*n)) """ Explanation: Let's examine this line by line. The first line defines the angle in degrees, theta_d. We start from $\theta_d=0$. The next line defines the loop. This has similarities to our definition of a function. We use the keyword while to say that what follows is going to be a loop. We then define a logical condition that will be either True or False. Whilst the condition is True, the statements in the loop will be executed. The colon : at the end of the line ends the logical condition and says that what follows will be the statements inside the loop. As with the function, the code block with the statements inside the loop is indented by four spaces or one tab. The code block contains two lines, both of which will be executed. The first prints the converted angle to the screen. The second increases the angle in degrees by 15. At the end of the code block, Python will check the logical condition theta_d &lt;= 90.0 again. If it is True the statements inside the loop are executed again. If it is False, the code moves on to the next line after the loop. There is another way to write a loop that we can use: for loops End of explanation """ for n in range(1,7): print(degrees_to_radians(15*n)) """ Explanation: Let's examine this code line by line. It first defines a set of numbers, steps, which contains the integers from 1 to 6 (we will make this more precise later when we discuss lists and tuples). We then define the loop using the for command. This looks at the set of numbers steps and picks an entry out one at a time, setting the variable n to be the value of that member of the set. So, the first time through the loop n=1. The next, n=2. Once it has iterated through all members of the set steps, it stops. The colon : at the end of the line defines the code block that each iteration of the loop should perform. Exactly as when we defined a function, the code block is indented by four spaces or one tab. In each iteration through the loop, the commands indented by this amount will be run. In this case, only one line (the print... line) will be run. On each iteration the value of n changes, leading to the different angle. Writing out a long list of integers is a bad idea. A better approach is the use the range function. 
This compresses the code to: End of explanation """ print(list(range(4))) print(list(range(-1,3))) print(list(range(1,10,2))) """ Explanation: The range function takes the input arguments &lt;start&gt; and &lt;end&gt;, and the optional input argument &lt;step&gt;, to produce the integers from the start up to, but not including, the end. If the &lt;start&gt; is not given it defaults to 0, and if the &lt;step&gt; is not given it defaults to 1. (Strictly, range does not return the full list in one go. It generates the results one at a time. This is much faster and more efficient. In the for loop this is all you need. To actually view what range generates all together, convert it to a list, as list(range(...))). Check this against examples such as: End of explanation """ angles = 15.0, 30.0, 45.0, 60.0, 75.0, 90.0 for angle in angles: print(degrees_to_radians(angle)) """ Explanation: In some programming languages this is where the discussion of a for loop would end: the "loop counter" must be an integer. In Python, a loop is just iterating over a set of values, and these can be much more general. An alternative way (using floats) to do the same loop would be End of explanation """ things = 1, 2.3, True, degrees_to_radians for thing in things: print(thing) """ Explanation: But we can get much more general than that. The different things in the set don't have to have the same type: End of explanation """ theta_d = 0.0 while degrees_to_radians(theta_d) < 4.0: theta_d = theta_d + 1.0 """ Explanation: This can be used to write very efficient code, but is a feature that isn't always available in other programming languages. When should we use for loops and when while loops? In most cases either will work. Different algorithms have different conventions, so where possible follow the convention. The advantage of the for loop is that it is clearer how much work will be done by the loop (as, in principle, we know how many times the loop block will be executed). However, sometimes you need to perform a repetitive operation but don't know in advance how often you'll need to do it. For example, to the nearest degree, what is the largest angle $\theta_d$ that when converted to radians is less than 4? (For the picky, we're restricting $\theta_d$ so that $0^{\circ} \le \theta_d < 360^{\circ}$). Rather than doing the sensible thing and doing the analytic calculation, we can do the following: Set $\theta_d = 0^{\circ}$. Calculate the angle in radians $\theta_r$. If $\theta_r < 4$: Increase $\theta_d$ by $1^{\circ}$; Repeat from step 2. We can reproduce this algorithm using a while loop: End of explanation """ print(theta_d - 1.0) print(theta_d) print(degrees_to_radians(theta_d-1.0) / 4.0) print(degrees_to_radians(theta_d) / 4.0) """ Explanation: This could be done in a for loop, but not so straightforwardly. To summarize: The structure of the while loop is similar to the for loop. The loop is defined by a keyword (while or for) and the end of the line defining the loop condition is given by a colon. With each iteration of the loop the indented code is executed. The difference is in how the code decides when to stop looping, and what changes with each iteration. In the for loop the code iterates over the objects in a set, and some variable is modified with each iteration based on the new object. Once all objects in the set have been iterated over, the loop stops. In a while loop some condition is checked; while it is true the loop continues, and as soon as it is false the loop stops. 
Here we are checking if $\theta_r$, given by degrees_to_radians(theta_d), is still less than 4. However, nothing in the definition of the loop actually changes: it is the statement within the loop that actually changes the angle $\theta_d$. We quickly check that the answer given makes sense: End of explanation """ print(True) print(6 < 7 and 10 > 9) print(1 < 2 or 1 < 0) print(not (6 < 7) and 10 > 9) print(6 < 7 < 8) """ Explanation: We see that the answer is $229^{\circ}$. Logical statements We noted above that whether or not a while loop executes depends on the truth (or not) of a particular statement. In programming these logical statements take Boolean values (either true or false). In Python, the values of a Boolean statement are either True or False, which are the keywords used to refer to them. Multiple statements can be chained together using the logical operators and, or, not. For example: End of explanation """ list1 = [1, 2, 3, 4, 5, 6] list2 = [15.0, 30.0, 45.0, 60.0, 75.0, 90.0] list3 = [1, 2.3, True, degrees_to_radians] list4 = ["hello", list1, False] list5 = [] """ Explanation: The last example is particularly important, as this chained example (6 &lt; 7 &lt; 8) is equivalent to (6 &lt; 7) and (7 &lt; 8). This checks both inequalities - checking that $6 < x < 8$ when $x=7$, by checking that both $6 < x$ and $x < 8$ is true when $x=7$, which is true (and mathematically what you would expect). However, many programming languages would not interpret it this way, but would instead interpret it as (6 &lt; 7) &lt; 8, which is equivalent to True &lt; 8, which is nonsense. Chaining operations in this way is useful in Python, but don't expect it to always work in other languages. Containers and Sequences When talking about loops we informally introduced the collection of objects 1, 2, 3, 4, 5, 6, and assigned it to a single variable steps. This is one of many types of container: a single object that contains other objects. If the objects in the container have an order then the container is often called a sequence: an object that contains an ordered sequence of other objects. These sorts of objects are everywhere in mathematics: sets, groups, vectors, matrices, equivalence classes, categories, ... Programming languages also implement a large number of them. Python has four essential containers, the most important of which for our purposes are lists, tuples, and dictionaries. Lists A list is an ordered collection of objects. For example: End of explanation """ list1[0] list2[3] """ Explanation: Lists are defined by square brackets, []. Objects in the list are separated by commas. A list can be empty (list5 above). A list can contain other lists (list4 above). The objects in the list don't have to have the same type (list3 and list4 above). We can access a member of a list by giving its name, square brackets, and the index of the member (starting from 0!): End of explanation """ list4[1] = "goodbye" list4 """ Explanation: Note There is a big divide between programming languages that index containers (or vectors, or lists) starting from 0 and those that index starting from 1. There is no consensus on which is better, so as you move between languages, get used to checking which is used. 
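As a quick illustration of zero-based indexing (an addition to these notes), with list1 = [1, 2, 3, 4, 5, 6] the valid indices run from 0 to 5, and asking for index 6 raises an IndexError:

```python
print(list1[0])   # first entry: 1
print(list1[5])   # last entry: 6
print(list1[6])   # IndexError: list index out of range
```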
Entries in a list can be modified: End of explanation """ list4.append('end') list4 """ Explanation: Additional entries can be appended onto the end of a list: End of explanation """ entry = list4.pop() print(entry) list4 """ Explanation: Entries can be removed (popped) from the end of a list: End of explanation """ len(list4) """ Explanation: The length of a list can be found: End of explanation """ tuple1 = 1, 2, 3, 4, 5, 6 tuple2 = (15.0, 30.0, 45.0, 60.0, 75.0, 90.0) tuple3 = (1, 2.3, True, degrees_to_radians) tuple4 = ("hello", list1, False) tuple5 = () tuple6 = (5,) """ Explanation: Lists are probably the most used container, but there's a closely related container that we've already used: the tuple. Tuples Tuples are ordered collections of objects that, once created, cannot be modified. For example: End of explanation """ tuple1[0] tuple4[1] = "goodbye" """ Explanation: Tuples are defined by the commas separating the entries. The round brackets () surrounding the entries are conventional, useful for clarity, and for grouping. If you want to create an empty tuple (tuple5) the round brackets are necessary. A tuple containing a single entry (tuple6) must have a trailing comma. Tuples can be accessed in the same ways as lists, and their length found with len in the same way. But they cannot be modified, so we cannot add additional entries, or remove them, or alter any: End of explanation """ print(tuple4[1]) tuple4[1][1] = 33 print(tuple4[1]) """ Explanation: However, if a member of a tuple can itself be modified (for example, it's a list, as tuple4[1] is), then that entry can be modified: End of explanation """ converted_list1 = list(tuple1) converted_tuple1 = tuple(list1) """ Explanation: Tuples appear a lot when using functions, either when passing in parameters, or when returning results. They can often be treated like lists, and there are functions that convert lists to tuples and vice versa: End of explanation """ list1 = [1, 2, 3, 4, 5, 6] print(list1[0]) print(list1[1:3]) print(list1[2:]) print(list1[:4]) """ Explanation: Slicing Accessing and manipulating multiple entries of a list at once is an efficient and effective way of coding: it shows up a lot in, for example, linear algebra. This is where slicing comes in. We have seen that we can access a single element of a list using square brackets and an index. We can use similar notation to access multiple elements: End of explanation """ print(list1[0:6:2]) print(list1[1::3]) print(list1[4:1:-1]) """ Explanation: The slicing notation [&lt;start&gt;:&lt;end&gt;] returns all entries from the &lt;start&gt; to the entry before the &lt;end&gt;. So: list1[1:3] returns all entries from the second (index 1) to the third (the entry before index 3). if &lt;end&gt; is not given all entries up to the end are returned, so list1[2:] returns all entries from the third (index 2) to the end. if &lt;start&gt; is not given all entries from the start are returned, so list1[:4] returns all entries from the start until the fourth (the entry before index 4). There are a number of other ways that slicing can be used. First, we can specify the step: End of explanation """ print(list1[-1]) print(list1[-2]) print(list1[2:-2]) print(list1[-4:-2]) """ Explanation: By using a negative step we can reverse the order (as shown in the final example), but then we need to be careful with the &lt;start&gt; and &lt;end&gt;. This &lt;start&gt;:&lt;end&gt;:&lt;step&gt; notation varies between programming languages: some use &lt;start&gt;:&lt;step&gt;:&lt;end&gt;. 
Second, we can give an index that counts from the end, where the final entry is -1: End of explanation """ list_slice = [0, 0, 0, 0, 0, 0, 0, 0] list_slice[1:4] = list1[3:] print(list_slice) """ Explanation: Unpacking Slicing is often seen as part of assignment. For example End of explanation """ a, b, c = list1[3:] print(a) print(b) print(c) """ Explanation: This is related to a very useful Python feature: unpacking. Normally we have assigned a single variable to a single value (although that value might be a container such as a list). However, we can assign multiple values in one go: End of explanation """ a, b = b, a print(a) print(b) """ Explanation: This can be used to directly swap two variables, for example: End of explanation """ from math import sin, cos, exp, log functions = {"sine" : sin, "cosine" : cos, "exponential" : exp, "logarithm" : log} print(functions) """ Explanation: The number of entries on both sides must match. Dictionaries All the containers we have seen so far have had an order - lists and tuples are sequences, and we access the objects within them using list[0] or tuple[3], for example. A dictionary is our first unordered container. These are useful for collections of objects with meaningful names, but where the order of the objects has no importance. Dictionaries are defined using curly braces. The "name" of each entry is given first (usually called its key), followed by a :, and then after the colon comes its value. Multiple entries are separated by commas. For example: End of explanation """ print(functions["exponential"]) """ Explanation: Note that the order it prints out need not match the order we entered the values in. In fact, the order could change if we used a different machine, or entered the values again. This emphasizes the unordered nature of dictionaries. To access an individual value, we use its key: End of explanation """ print(functions.keys()) print(functions.values()) """ Explanation: To find all the keys or values we can use dictionary methods: End of explanation """ for name in functions: print("The result of {}(1) is {}.".format(name, functions[name](1.0))) """ Explanation: Depending on the version of Python you are using, this might either give a list or an iterator. When iterating over a dictionary (for k in dict:) the key is returned, as if we had said for key in dict.keys():. This is most useful in a loop, such as the following. Think carefully about this code, and make sure you understand what is happening! End of explanation """ for name, function in functions.items(): print("The result of {}(1) is {}.".format(name, function(1.0))) """ Explanation: To explain: The first line says that we are going to iterate over each entry in the dictionary by assigning the value of the key (which is the name of the function in functions) to the variable name. The next line extracts the function associated with that name using functions[name] and applies that function to the value 1 using functions[name](1.0). The full line prints out the result in a human readable form. But Python has other ways of iterating over dictionaries that can make life even easier, such as the items function: End of explanation """ theta_d = 5134.6 theta_d_normalized = theta_d % 360.0 print(theta_d_normalized) """ Explanation: So this does exactly the same thing as the previous loop, and most of the code is the same. However, rather than accessing the dictionary each time (using functions[name]), the value in the dictionary has been returned at the start. 
What is happening is that the items function is returning both the key and the value as a tuple on each iteration through the loop. The name, function notation then uses unpacking to appropriately set the variables. This form is "more pythonic" (ie, is shorter, clearer to many people, and faster). Above we have always set the key to be a string. This is not necessary - it can be an integer, or a float, or any constant object. We have also set the values to have the same type. As with lists this is not necessary. Dictionaries are very useful for adding simple structure to your code, and allow you to pass around complex sets of parameters easily. Control flow Not every algorithm can be expressed as a single mathematical formula in the manner used so far. Alternatively, it may make the algorithm appear considerably simpler if it isn't expressed in one complex formula but in multiple simpler forms. This is where the computer has to be able to make choices, to control when and if a particular formula is used. As a simple example, if we used our degrees_to_radians calculation as previously given, then if the angle $\theta_d$ is outside the standard $[0, 360]^{\circ}$ interval then the converted angle in radians $\theta_r$ will be outside the $[0, 2\pi]$ interval. Suppose we want to "normalize" all our angles to lie within the $[0, 2\pi]$ interval. We could use modular arithmetic using the % operator: End of explanation """ from math import pi def check_angle_normalized(theta_d): """ Check that an angle lies within [0, 360] degrees. Parameters ---------- theta_d : float The angle in degrees. Returns ------- normalized : Boolean Whether the angle lies within the range """ normalized = True if theta_d > 360.0: normalized = False print("Input angle greater than 360 degrees. Did you mean this?") if theta_d < 0.0: normalized = False print("Input angle less than 0 degrees. Did you mean this?") return normalized theta_d = 5134.6 print(check_angle_normalized(theta_d)) theta_d = -52.3 print(check_angle_normalized(theta_d)) """ Explanation: But it might be that the input is just wrong: there's a typo, and the caller should be warned, and maybe the input completely rejected. We can write a different function to check that. End of explanation """ from math import pi def check_angle_normalized(theta_d): """ Check that an angle lies within [0, 360] degrees. Parameters ---------- theta_d : float The angle in degrees. Returns ------- normalized : Boolean Whether the angle lies within the range """ normalized = True if theta_d > 360.0: normalized = False print("Input angle greater than 360 degrees. Did you mean this?") elif theta_d < 0.0: normalized = False print("Input angle less than 0 degrees. Did you mean this?") else: print("Input angle in range [0, 360] degrees. Good.") return normalized """ Explanation: The control flow here uses the if statement. As with loops such as the for and while loops we have a condition which is checked which, if satisfied, leads to the indented code block after the colon being executed. The logical statements theta_d &gt; 360.0 and theta_d &lt; 0.0 are evaluated and return either True or False (which is how Python represents boolean values). If True, then the statement is executed. We could use only a single logical statement to check if $\theta_d$ lies in an acceptable range by using logical relations. 
For example, we could replace the two if statements by the single statement python if (theta_d &gt; 360.0) or (theta_d &lt; 0.0): normalized = False print("Input angle outside [0, 360] degrees. Did you mean this?") The logical statement (theta_d &gt; 360.0) or (theta_d &lt; 0.0) is either True or False as above. In addition to the logical or statement, Python also has the logical and and logical not statements, from which more complex statements can be generated. Often we want to do one thing if a condition is true, and another if the condition is false. A full example of this would be to rewrite the whole function as: End of explanation """ theta_d = 543.2 print(check_angle_normalized(theta_d)) theta_d = -123.4 print(check_angle_normalized(theta_d)) theta_d = 89.12 print(check_angle_normalized(theta_d)) """ Explanation: The elif statement allows another condition to be checked - it is how Python represents "else if", or "all previous checks have been false; let's check this statement as well". Multiple elif blocks can be included to check more conditions. The else statement contains no logical check: this code block will always be executed if all previous statements were false. For example: End of explanation """ angles = [-123.4, 543.2, 89.12, 0.67, 5143.6, 30.0, 270.0] # We run through all the angles, but only print those that are # - in the range [0, 360], and # - if sin^2(angle) < 0.5 from math import sin for angle in angles: print("Input angle in degrees:", angle) if (check_angle_normalized(angle)): angle_r = degrees_to_radians(angle) if (sin(angle_r)**2 < 0.5): print("Valid angle in radians:", angle_r) """ Explanation: We can nest statements as deep as we like, nesting loops and control flow statements as we go. We have to ensure that the indentation level is consistent. Here is a silly example. End of explanation """ import breakpoints print(breakpoints.test_sequence(10)) print(breakpoints.test_sequence(100)) print(breakpoints.test_sequence(1000)) """ Explanation: Debugging Earlier we saw how to read error messages to debug single statements. When we start including loops and functions it may be more complex and the information from the error message alone, whilst useful, may not be enough. In these more complex cases the reason for the error depends on the calculations inside the code, and the steps through the code need inspecting in detail. This is where a debugger is useful. It allows you to run the code, pause at specific points or conditions, step through it as it runs line-by-line, and inspect all the values as you go. There are a number of Python debuggers - pdb and ipdb being the most basic. However, spyder has a debugger built in, and learning to use it will make your life considerably easier. Breakpoints The main use of the debugger is to inspect the internal state of a code whilst it is running. To do that we have to stop the execution of the code somewhere. This is typically done using breakpoints. Copy the following function into a file named breakpoints.py: ```python def test_sequence(N): """ Compute the infinite sum of 2^{-n} starting from n = 0, truncating at n = N, returning the value of 2^{-n} and the truncated sum. Parameters ---------- N : int Positive integer, giving the number of terms in the sum Returns ------- limit : float The value of 2^{-N} partial_sum : float The value of the truncated sum Notes ----- The limiting value should be zero, and the value of the sum should converge to 2. 
""" # Start sum from zero, so give zeroth term limit = 1.0 partial_sum = 1.0 # At each step, increment sum and change summand for n in range(1, N+1): partial_sum = partial_sum + limit limit = limit / 2.0 return limit, partial_sum if name == 'main': print(test_sequence(50)) ``` This computes the value $2^{-N}$ and the partial sum $\sum_{n=0}^N 2^{-n}$. The limit as $N\to\infty$ of the value should be zero, and of the sum should be two. The final two lines ensures that, if the file is run as a python script, the function will be called (with N=50). The if statement is a standard Python convention: if you have code that you want executed only if the file is run as a script, and not if the file is imported as a module. If we run the function we find it does not work as expected: End of explanation """
kaleoyster/nbi-data-science
Deterioration Curves/(Southeast) Deterioration+Curves++and+Classification+of+Bridges+in+the+Southeast+United+States.ipynb
gpl-2.0
import pymongo from pymongo import MongoClient import time import pandas as pd import numpy as np import seaborn as sns import matplotlib.pyplot as plt import csv """ Explanation: Libraries and Packages End of explanation """ Client = MongoClient("mongodb://bridges:readonly@nbi-mongo.admin/bridge") db = Client.bridge collection = db["bridges"] """ Explanation: Connecting to National Data Service: The Lab Benchwork's NBI - MongoDB instance End of explanation """ def getData(state): pipeline = [{"$match":{"$and":[{"year":{"$gt":1991, "$lt":2017}},{"stateCode":state}]}}, {"$project":{"_id":0, "structureNumber":1, "yearBuilt":1, "deck":1, ## rating of deck "year":1, ## survey year "substructure":1, ## rating of substructure "superstructure":1, ## rating of superstructure }}] dec = collection.aggregate(pipeline) conditionRatings = pd.DataFrame(list(dec)) conditionRatings['Age'] = conditionRatings['year'] - conditionRatings['yearBuilt'] return conditionRatings """ Explanation: Deterioration curves of Southeast United States For demonstration purposes, the results only focuses on the states in the South-East United States which includes: West Virginia,Virginia, Kentucky, Tennessee, North Carolina, South Carolina, Georgia, Alabama, Mississippi, Arkansas, Louisiana, Florida The classification of the bridge into slow deteriorating, fast deteriorating, and average deteriorating is done based on bridge's rate of deterioration. Therefore, In this section will demonstrate how bridges deteriorate over time in the South-East United States. To plot the deterioration curve of bridges in every state of South-East United States, bridges were grouped by their age. As a result, There are 60 groups of bridges from age 1 to 60, The mean of the condition rating of the deck, superstructure, and substructure of the bridge is plotted for every age. Extracting Data of South-East United States of the United states from 1992 - 2016. The following query will extract data from the mongoDB instance and project only selected attributes such as structure number, yearBuilt, deck, year, superstructure, and subtructure. 
End of explanation """ def getMeanRatings(state,startAge, endAge, startYear, endYear): conditionRatings = getData(state) conditionRatings = conditionRatings[['structureNumber','Age','superstructure','deck','substructure','year']] conditionRatings = conditionRatings.loc[~conditionRatings['superstructure'].isin(['N','NA'])] conditionRatings = conditionRatings.loc[~conditionRatings['substructure'].isin(['N','NA'])] conditionRatings = conditionRatings.loc[~conditionRatings['deck'].isin(['N','NA'])] #conditionRatings = conditionRatings.loc[~conditionRatings['Structure Type'].isin([19])] #conditionRatings = conditionRatings.loc[~conditionRatings['Type of Wearing Surface'].isin(['6'])] maxAge = conditionRatings['Age'].unique() tempConditionRatingsDataFrame = conditionRatings.loc[conditionRatings['year'].isin([i for i in range(startYear, endYear+1, 1)])] MeanDeck = [] StdDeck = [] MeanSubstructure = [] StdSubstructure = [] MeanSuperstructure = [] StdSuperstructure = [] ## start point of the age to be = 1 and ending point = 100 for age in range(startAge,endAge+1,1): ## Select all the bridges from with age = i tempAgeDf = tempConditionRatingsDataFrame.loc[tempConditionRatingsDataFrame['Age'] == age] ## type conversion deck rating into int listOfMeanDeckOfAge = list(tempAgeDf['deck']) listOfMeanDeckOfAge = [ int(deck) for deck in listOfMeanDeckOfAge ] ## takeing mean and standard deviation of deck rating at age i meanDeck = np.mean(listOfMeanDeckOfAge) stdDeck = np.std(listOfMeanDeckOfAge) ## type conversion substructure rating into int listOfMeanSubstructureOfAge = list(tempAgeDf['substructure']) listOfMeanSubstructureOfAge = [ int(substructure) for substructure in listOfMeanSubstructureOfAge ] meanSub = np.mean(listOfMeanSubstructureOfAge) stdSub = np.std(listOfMeanSubstructureOfAge) ## type conversion substructure rating into int listOfMeanSuperstructureOfAge = list(tempAgeDf['superstructure']) listOfMeanSuperstructureOfAge = [ int(superstructure) for superstructure in listOfMeanSuperstructureOfAge ] meanSup = np.mean(listOfMeanSuperstructureOfAge) stdSup = np.std(listOfMeanSuperstructureOfAge) #Append Deck MeanDeck.append(meanDeck) StdDeck.append(stdDeck) #Append Substructure MeanSubstructure.append(meanSub) StdSubstructure.append(stdSub) #Append Superstructure MeanSuperstructure.append(meanSup) StdSuperstructure.append(stdSup) return [MeanDeck, StdDeck ,MeanSubstructure, StdSubstructure, MeanSuperstructure, StdSuperstructure] """ Explanation: Filtering Null Values, Converting JSON format to Dataframes, and Calculating Mean Condition Ratings of Deck, Superstructure, and Substucture After NBI data is extracted. The Data has to be filtered to remove data points with missing values such as 'N', 'NA'. The mean condition rating for all the components: Deck, Substructure, and Superstructe, has to be calculated. 
End of explanation """ states = ['54','51','21','47','37','45','13','01','28','02','22','12'] # state code to state abbreviation stateNameDict = {'25':'MA', '04':'AZ', '08':'CO', '38':'ND', '09':'CT', '19':'IA', '26':'MI', '48':'TX', '35':'NM', '17':'IL', '51':'VA', '23':'ME', '16':'ID', '36':'NY', '56':'WY', '29':'MO', '39':'OH', '28':'MS', '11':'DC', '21':'KY', '18':'IN', '06':'CA', '47':'TN', '12':'FL', '24':'MD', '34':'NJ', '46':'SD', '13':'GA', '55':'WI', '30':'MT', '54':'WV', '15':'HI', '32':'NV', '37':'NC', '10':'DE', '33':'NH', '44':'RI', '50':'VT', '42':'PA', '05':'AR', '20':'KS', '45':'SC', '22':'LA', '40':'OK', '72':'PR', '41':'OR', '27':'MN', '53':'WA', '01':'AL', '31':'NE', '02':'AK', '49':'UT' } def getBulkMeanRatings(states, stateNameDict): # Initializaing the dataframes for deck, superstructure and subtructure df_mean_deck = pd.DataFrame({'Age':range(1,61)}) df_mean_sup = pd.DataFrame({'Age':range(1,61)}) df_mean_sub = pd.DataFrame({'Age':range(1,61)}) df_std_deck = pd.DataFrame({'Age':range(1,61)}) df_std_sup = pd.DataFrame({'Age':range(1,61)}) df_std_sub = pd.DataFrame({'Age':range(1,61)}) for state in states: meanDeck, stdDeck, meanSub, stdSub, meanSup, stdSup = getMeanRatings(state,1,100,1992,2016) stateName = stateNameDict[state] df_mean_deck[stateName] = meanDeck[:60] df_mean_sup[stateName] = meanSup[:60] df_mean_sub[stateName] = meanSub[:60] df_std_deck[stateName] = stdDeck[:60] df_std_sup[stateName] = stdSup[:60] df_std_sub[stateName] = stdSub[:60] return df_mean_deck, df_mean_sup, df_mean_sub, df_std_deck, df_std_sup, df_std_sub df_mean_deck, df_mean_sup, df_mean_sub, df_std_deck, df_std_sup, df_std_sub = getBulkMeanRatings(states, stateNameDict) """ Explanation: Creating DataFrames of the Mean condition ratings of the deck, superstructure and substructure The calculated Mean Condition Ratings of deck, superstructure, and substructure are now stored in seperate dataframe for the convience. End of explanation """ %matplotlib inline palette = [ 'blue', 'blue', 'green', 'magenta', 'cyan', 'brown', 'grey', 'red','silver','purple', 'gold', 'black','olive' ] plt.figure(figsize = (10,8)) index = 0 for state in states: index = index + 1 stateName = stateNameDict[state] plt.plot(df_mean_deck['Age'],df_mean_deck[stateName], color = palette[index]) plt.legend([stateNameDict[state] for state in states],loc='upper right', ncol = 2) plt.xlim(1,60) plt.ylim(1,9) plt.title('Mean Deck Rating Vs Age') plt.xlabel('Age') plt.ylabel('Mean Deck Rating') plt.figure(figsize = (16,12)) plt.xlabel('Age') plt.ylabel('Mean') # Initialize the figure plt.style.use('seaborn-darkgrid') # create a color palette #palette = plt.get_cmap('gist_ncar') palette = [ 'blue', 'blue', 'green','magenta','cyan','brown','grey','red','silver','purple','gold','black','olive' ] # multiple line plot num=1 for column in df_mean_deck.drop('Age', axis=1): # Find the right spot on the plot plt.subplot(4,3, num) # Plot the lineplot plt.plot(df_mean_deck['Age'], df_mean_deck[column], marker='', color=palette[num], linewidth=4, alpha=0.9, label=column) # Same limits for everybody! 
plt.xlim(1,60) plt.ylim(1,9) # Not ticks everywhere if num in range(10) : plt.tick_params(labelbottom='off') if num not in [1,4,7,10]: plt.tick_params(labelleft='off') # Add title plt.title(column, loc='left', fontsize=12, fontweight=0, color=palette[num]) plt.text(30, -1, 'Age', ha='center', va='center') plt.text(1, 4, 'Mean Deck Rating', ha='center', va='center', rotation='vertical') num = num + 1 # general title plt.suptitle("Mean Deck Rating vs Age \nIndividual State Deterioration Curves", fontsize=13, fontweight=0, color='black', style='italic', y=1.02) """ Explanation: Deterioration Curves - Deck End of explanation """ palette = [ 'blue', 'blue', 'green', 'magenta', 'cyan', 'brown', 'grey', 'red','silver','purple', 'gold', 'black','olive' ] plt.figure(figsize = (10,8)) index = 0 for state in states: index = index + 1 stateName = stateNameDict[state] plt.plot(df_mean_sup['Age'],df_mean_sup[stateName], color = palette[index]) plt.legend([stateNameDict[state] for state in states],loc='upper right', ncol = 2) plt.xlim(1,60) plt.ylim(1,9) plt.title('Mean Superstructure Rating Vs Age') plt.xlabel('Age') plt.ylabel('Mean Superstructure Rating') plt.figure(figsize = (16,12)) plt.xlabel('Age') plt.ylabel('Mean') # Initialize the figure plt.style.use('seaborn-darkgrid') # create a color palette #palette = plt.get_cmap('gist_ncar') palette = [ 'blue', 'blue', 'green', 'magenta', 'cyan', 'brown', 'grey', 'red', 'silver', 'purple', 'gold', 'black', 'olive' ] # multiple line plot num=1 for column in df_mean_sup.drop('Age', axis=1): # Find the right spot on the plot plt.subplot(4,3, num) # Plot the lineplot plt.plot(df_mean_sup['Age'], df_mean_sup[column], marker='', color=palette[num], linewidth=4, alpha=0.9, label=column) # Same limits for everybody! plt.xlim(1,60) plt.ylim(1,9) # Not ticks everywhere if num in range(10) : plt.tick_params(labelbottom='off') if num not in [1,4,7,10]: plt.tick_params(labelleft='off') # Add title plt.title(column, loc='left', fontsize=12, fontweight=0, color=palette[num]) plt.text(30, -1, 'Age', ha='center', va='center') plt.text(1, 4, 'Mean Superstructure Rating', ha='center', va='center', rotation='vertical') num = num + 1 # general title plt.suptitle("Mean Superstructure Rating vs Age \nIndividual State Deterioration Curves", fontsize=13, fontweight=0, color='black', style='italic', y=1.02) palette = [ 'blue', 'blue', 'green', 'magenta', 'cyan', 'brown', 'grey', 'red','silver','purple', 'gold', 'black','olive' ] plt.figure(figsize = (10,8)) index = 0 for state in states: index = index + 1 stateName = stateNameDict[state] plt.plot(df_mean_sup['Age'],df_mean_sup[stateName], color = palette[index]) plt.legend([stateNameDict[state] for state in states],loc='upper right', ncol = 2) plt.xlim(1,60) plt.ylim(1,9) plt.title('Mean Superstructure Rating Vs Age') plt.xlabel('Age') plt.ylabel('Mean Superstructure Rating') """ Explanation: Deterioration Curve - Superstructure End of explanation """ palette = [ 'blue', 'blue', 'green', 'magenta', 'cyan', 'brown', 'grey', 'red','silver','purple', 'gold', 'black','olive' ] plt.figure(figsize = (10,8)) index = 0 for state in states: index = index + 1 stateName = stateNameDict[state] plt.plot(df_mean_sub['Age'],df_mean_sub[stateName], color = palette[index], linewidth=4) plt.legend([stateNameDict[state] for state in states],loc='upper right', ncol = 2) plt.xlim(1,60) plt.ylim(1,9) plt.title('Mean Substructure Rating Vs Age') plt.xlabel('Age') plt.ylabel('Mean Substructure Rating') plt.figure(figsize = (16,12)) plt.xlabel('Age') 
plt.ylabel('Mean') # Initialize the figure plt.style.use('seaborn-darkgrid') # create a color palette palette = [ 'blue', 'blue', 'green', 'magenta', 'cyan', 'brown', 'grey', 'red', 'silver', 'purple', 'gold', 'black','olive' ] # multiple line plot num=1 for column in df_mean_sub.drop('Age', axis=1): # Find the right spot on the plot plt.subplot(4,3, num) # Plot the lineplot plt.plot(df_mean_sub['Age'], df_mean_sub[column], marker='', color=palette[num], linewidth=4, alpha=0.9, label=column) # Same limits for everybody! plt.xlim(1,60) plt.ylim(1,9) # Not ticks everywhere if num in range(7) : plt.tick_params(labelbottom='off') if num not in [1,4,7] : plt.tick_params(labelleft='off') # Add title plt.title(column, loc='left', fontsize=12, fontweight=0, color=palette[num]) plt.text(30, -1, 'Age', ha='center', va='center') plt.text(1, 4, 'Mean Substructure Rating', ha='center', va='center', rotation='vertical') num = num + 1 # general title plt.suptitle("Mean Substructure Rating vs Age \nIndividual State Deterioration Curves", fontsize=13, fontweight=0, color='black', style='italic', y=1.02) def getDataOneYear(state): pipeline = [{"$match":{"$and":[{"year":{"$gt":2015, "$lt":2017}},{"stateCode":state}]}}, {"$project":{"_id":0, "Structure Type":"$structureTypeMain.typeOfDesignConstruction", "Type of Wearing Surface":"$wearingSurface/ProtectiveSystem.typeOfWearingSurface", "yearBuilt":1, "deck":1, ## rating of deck "year":1, ## survey year "substructure":1, ## rating of substructure "superstructure":1, ## rating of superstructure }}] dec = collection.aggregate(pipeline) conditionRatings = pd.DataFrame(list(dec)) conditionRatings['Age'] = conditionRatings['year'] - conditionRatings['yearBuilt'] return conditionRatings ## Condition ratings of all states concatenated into one single data frame ConditionRatings frames = [] for state in states: f = getDataOneYear(state) frames.append(f) df_nbi_se = pd.concat(frames) df_nbi_se = df_nbi_se.loc[~df_nbi_se['deck'].isin(['N','NA'])] df_nbi_se = df_nbi_se.loc[~df_nbi_se['substructure'].isin(['N','NA'])] df_nbi_se = df_nbi_se.loc[~df_nbi_se['superstructure'].isin(['N','NA'])] df_nbi_se = df_nbi_se.loc[~df_nbi_se['Type of Wearing Surface'].isin(['6'])] """ Explanation: Deterioration Curves - Substructure End of explanation """ stat = ['54','51','21','47','37','45','13','01','28','02','22','12'] AgeList = list(df_nbi_se['Age']) deckList = list(df_nbi_se['deck']) num = 1 for st in stat: deckR = [] deckR = getDataOneYear(st) deckR = deckR[['Age','deck']] deckR= deckR.loc[~deckR['deck'].isin(['N','NA'])] stateName = stateNameDict[st] labels = [] for deckRating, Age in zip (deckList,AgeList): if Age < 60: mean_age_conditionRating = df_mean_deck[stateName][Age] std_age_conditionRating = df_std_deck[stateName][Age] detScore = (int(deckRating) - mean_age_conditionRating) / std_age_conditionRating if (mean_age_conditionRating - std_age_conditionRating) < int(deckRating) <= (mean_age_conditionRating + std_age_conditionRating): # Append a label labels.append('Average Deterioration') # else, if more than a value, elif int(deckRating) > (mean_age_conditionRating + std_age_conditionRating): # Append a label labels.append('Slow Deterioration') # else, if more than a value, elif int(deckRating) < (mean_age_conditionRating - std_age_conditionRating): # Append a label labels.append('Fast Deterioration') else: labels.append('Null Value') D = dict((x,labels.count(x)) for x in set(labels)) plt.figure(figsize=(12,6)) plt.title(stateName) plt.bar(range(len(D)), list(D.values()), 
            align='center')
    plt.xticks(range(len(D)), list(D.keys()))
    plt.xlabel('Categories')
    plt.ylabel('Number of Bridges')
    plt.show()
    num = num + 1

stat = ['54','51','21','47','37','45','13','01','28','02','22','12']
AgeList = list(df_nbi_se['Age'])
deckList = list(df_nbi_se['deck'])
num = 1
labels = []  # start from a fresh list for the region-wide classification
for st in stat:
    deckR = []
    deckR = getDataOneYear(st)
    deckR = deckR[['Age','deck']]
    deckR = deckR.loc[~deckR['deck'].isin(['N','NA'])]
    stateName = stateNameDict[st]
    for deckRating, Age in zip(deckList, AgeList):
        if Age < 60:
            mean_age_conditionRating = df_mean_deck[stateName][Age]
            std_age_conditionRating = df_std_deck[stateName][Age]
            detScore = (int(deckRating) - mean_age_conditionRating) / std_age_conditionRating
            if (mean_age_conditionRating - std_age_conditionRating) < int(deckRating) <= (mean_age_conditionRating + std_age_conditionRating):
                # within one standard deviation of the mean rating for this age
                labels.append('Average Deterioration')
            elif int(deckRating) > (mean_age_conditionRating + std_age_conditionRating):
                # more than one standard deviation above the mean
                labels.append('Slow Deterioration')
            elif int(deckRating) < (mean_age_conditionRating - std_age_conditionRating):
                # more than one standard deviation below the mean
                labels.append('Fast Deterioration')
            else:
                labels.append('Null Value')
"""
Explanation: Classification Criteria
The classification criteria used to classify bridges into slow deterioration, average deterioration and fast deterioration. Bridges are classified based on how far an individual bridge's deterioration score is from the mean deterioration score for its age.
| Categories | Value |
|------------------------|-------------------------------|
| Slow Deterioration | $z_{ia} \geq \bar{x}_a + 1\,\sigma(x_a)$ |
| Average Deterioration | $\bar{x}_a - 1\,\sigma(x_a) \leq z_{ia} \leq \bar{x}_a + 1\,\sigma(x_a)$ |
| Fast Deterioration | $z_{ia} \leq \bar{x}_a - 1\,\sigma(x_a)$ |
End of explanation
"""
ecabreragranado/OpticaFisicaII
Experimento de Young/ExperimentoYoung.ipynb
gpl-3.0
from IPython.display import Image Image(filename="YoungTwoSlitExperiment.JPG") """ Explanation: Experimento de Young End of explanation """ from IPython.display import Image Image(filename="ExperimentoYoung.jpg") """ Explanation: ''The experiments I am about to relate ... may be repeated with great ease, whenever the sun shines, and without any other apparatus than is at hand to everyone [1]'' Así comenzó Thomas Young su famoso experimento el 24 de noviembre de 1803 en la Real Sociedad de Londres. Ante una audencia mayoritariamente defensora de la teoría corpuscular de la luz (apoyada por Isaac Newton), Thomas Young llevó a cabo el primer experimento de interferencias de luz, demostrando la naturaleza ondulatoria de la luz. Dejó pasar un rayo de sol por un pequeño orificio de la ventana de la habitación e hizo incidir el haz de luz sobre el canto de una tarjeta diviendo el haz en dos. Estos dos haces al solaparse en una pantalla generaban unas franjas oscuras y brillantes de luz. [1] Thomas Young, "Experimental Demonstration of the General Law of the Interference of Light", Philosophical Transactions of the Royal Society of London vol. 94 (1804). Teoría Normalmente el experimento de Young se representa con una doble rendija, tal y como aparece en la figura anterior. Una onda esférica (o bien una onda plana, el tratamiento es equivalente), incide en una pantalla sobre la cual se han realizado dos aperturas $S_1$ y $S_2$ muy próximas entre sí (llamaremos a la distancia entre ellas $a$). Estas aperturas actúan como dos fuentes secundarias de radiación, generando a su vez dos ondas esféricas que se superponen en el espacio que hay detrás de ellas. Si observamos la distribución de irradiancia en una pantalla situada a una cierta distancia $D$, ¿qué nos encontraremos?. Las dos ondas que se generan en $S_1$ y $S_2$ pueden escribirse como: $$\vec{E_1} = \vec{e_1} \; E_{01} \; \; \cos\left( k r_1 - \omega t + \phi_1\right)$$ $$\vec{E_2} = \vec{e_2} \; E_{02} \; \; \cos\left( k r_2 - \omega t + \phi_2\right)$$ $E_{0j}$ es la amplitud de la onda, $\vec{e_j}$ es la dirección de vibración y $\phi_j$ es la fase inicial. $r_1$ ($r_2$) es el camino que recorre la onda desde $S_1$ ($S_2$) hasta el punto de observación P. Ambas ondas tienen la misma longitud de onda. La superposición de estas dos ondas, nos dará la expresión de la irradiancia ya conocida, $$I_T = I_1 + I_2 + 2 \sqrt{I_1 I_2} \; (\vec{e_ 1}\cdot\vec{e_ 2}) \; cos(\delta)$$ donde $\delta = k (r_2 - r_1) + \phi_2 - \phi_1$ es el desfase (o la diferencia de fase) entre las dos ondas. En esta expresión podemos hacer alguna que otra simplificación, $\vec{e_ 1} \cdot \vec{e_ 2}=1$ porque las ondas las consideramos polarizadas linealmente en la misma dirección. $I_1 = I_2$ en caso de que no haya ningún filtro en $S_1$ ó $S_2$, las dos ondas tienen la misma amplitud. $\phi_2 - \phi_1 = 0$ ya que el frente de ondas llega simultáneamente a $S_1$ y $S_2$. Nótese que si colocásemos por ejemplo una pieza de un material transparente antes de una de las dos aperturas, tendríamos un desfase adicional en una de las dos ondas y esta diferencia ya no sería nula. Esto ocurriría porque una de las ondas viajaría a través del material mientras la otra onda lo haría en aire. Así la irradiancia total queda $$I_T = 2 I_1 \left( 1 + cos(\delta) \; \right)$$ con $\delta = k (r_2 - r_1)$ Como vemos, es la diferencia de caminos $\Delta = r_2 - r_1$ la que determina el valor de la irradiancia final en el punto P. Vamos a calcularla. 
End of explanation """ from matplotlib.pyplot import * from numpy import * %matplotlib inline style.use('fivethirtyeight') ################################################################################### # PARÁMETROS. SE PUEDEN MODIFICAR SUS VALORES ################################################################################### Lambda =400e-9 # en metros, longitud de onda de la radiación D = 4.5 # en metros, distancia entre el plano que contiene las fuentes y la pantalla de observación a = 0.003 # en metros, separación entre fuentes ################################################################################### interfranja=Lambda*D/a # cálculo de la interfranja k = 2.0*pi/Lambda x = linspace(-5*interfranja,5*interfranja,500) I1 = 1 # Consideramos irradiancias normalizadas a un cierto valor. I2 = 0.01 X,Y = meshgrid(x,x) delta = k*a*X/D Itotal = I1 + I2 + 2.0*sqrt(I1*I2)*cos(delta) figure(figsize=(14,5)) subplot(121) pcolormesh(x*1e3,x*1e3,Itotal,cmap = 'gray',vmin=0,vmax=4) xlabel("x (mm)"); ylabel("y (mm)") subplot(122) plot(x*1e3,Itotal[x.shape[0]/2,:]) xlabel("x (mm)"); ylabel("Irradiancia total normalizada") """ Explanation: Según la figura, $\Delta = r_2 - r_1$ lo podemos escribir como $\Delta = a sen(\theta)$, siendo $a$ la separación entre las rendijas. Si éste ángulo es pequeño (lo que significa que la distancia entre las fuentes y la pantalla de observación sea grande comparada con la separación entre las fuentes), esta expresión la podemos simplificar, $$ \Delta = a sen(\theta) \simeq a tan(\theta) = a \frac{x}{D}$$. Y por tanto, $$\delta = k \frac{a x }{D} = \frac{2 \pi a x}{\lambda D}$$ En estas expresiones, $x$ es la distancia del punto P de observación al eje mientras que $D$ es la distancia entre el plano que contiene a las fuentes y la pantalla de observación, donde se encuentra P. Podemos reescribir la irradiancia total en la pantalla empleando la expresión calculada del desfase $$I_T = 2 I_1 \left( 1 + cos\left( \frac{2 \pi a x}{\lambda D} \right) \; \right)$$ Distribución de luz. Patrón de interferencias Ahora estamos en disposición de contestar a la pregunta que nos planteábamos antes, ¿cómo es la distribución de irradiancia en la pantalla de observación?. Vemos que el desfase depende de la altura en la pantalla $x$, por tanto al movernos en esa dirección el valor de la irradiancia cambiará. En particular el término que provoca esa variación es del tipo cosenoidal $cos( \frac{2 \pi a x}{\lambda D})$ por lo que veremos en la pantalla una distribución cosenoidal, con máximos de irradiancia cuando $\delta = 2 m \pi$, con $m = 0, \pm 1, \pm 2 ...$ y mínimos de irradiancia cuando $\delta = (2 m + 1) \pi$, con $m = 0, \pm 1, \pm 2 ...$. Las posiciones $x$ a las que corresponden estas condiciones serán, Máximos de irradiancia. $\delta = 2 m \pi \implies \Delta = m \lambda \implies$ $$x^{max}_m = \frac{m \lambda D}{a}$$ Mínimos de irradiancia. $\delta = (2 m + 1)\pi \implies \Delta = \frac{(2m +1) \lambda}{2} \implies$ $$x^{min}_m = \frac{(2m + 1) \lambda D}{2a}$$ Vamos a dibujar la distribución de irradiancia en la pantalla y un corte a lo largo del eje X (ejecutar la siguiente celda de código). 
End of explanation """ interfranja=Lambda*D/a # cálculo de la interfranja C = (Itotal.max() - Itotal.min())/(Itotal.max() + Itotal.min()) # cálculo del contraste print "a=",a*1e3,"mm ","D=",D,"m ","Longitud de onda=",Lambda*1e9,"nm" # valores de los parámetros print "Interfranja=",interfranja*1e3,"mm" # muestra el valor de la interfranja en mm print 'Contraste=',C # muestra el valor del contraste """ Explanation: Como podemos ver, los máximos están equiespaciados (lo mismo sucede con los míminos), siendo la distancia entre dos máximos consecutivos $$ \text{Interfranja} = \frac{\lambda D}{a} $$ Dicha magnitud se conoce con el nombre de interfranja y nos da información sobre el tamaño característico del patrón de franjas. Además del tamaño, para poder observar con claridad las franjas es necesario que estén bien contrastadas. Para ello se define el contraste o visibilidad de las franjas $$ C = \frac{I_T^{max}-I_T^{min}}{I_T^{max}+I_T^{min}}$$ que nos dice cuanto están separados los máximos de luz respecto de los mínimos. El valor de estas dos magnitudes para el caso representado en la figura anterior se muestra en la siguiente celda (ejecutar dicha celda después de haber ejecutado la anterior celda de código) End of explanation """ from IPython.display import Image Image(filename="FranjasYoungWhiteLight.jpg") """ Explanation: Cuestiones. Preguntas Emplear las dos celdas de código anteriores para analizar las siguientes cuestiones. Prueba a cambiar el valor de los parámetros $\lambda$, $D$ y $a$ (de uno en uno) y observa cómo cambia la distribución de luz. Observa como se modifica el valor de la interfranja y del contraste. Disminuye el valor de $I_1$ o $I_2$ y observa cómo cambia el patrón de interferencias. ¿Cómo cambia el valor de la interfranja y del contraste? <span style="color:blue">Pregunta Reto.</span> Supongamos que colocamos una pieza de un material transparente antes de la apertura $S_1$. Intenta explicar que consecuencias tendría sobre el patrón de interferencias. Si te atreves modifica la celda de código para incluir este efecto. Puedes usar un material de índice de refracción $n = 1.5$ y un espesor $d = 1$ mm. ¿Qué diferencia habría si en vez de iluminar con luz monocromática iluminamos con luz blanca? Podemos verlo en la siguiente imagen End of explanation """ from IPython.display import YouTubeVideo YouTubeVideo("B34bAGtQL9A") """ Explanation: Como se puede observar, en el caso de luz blanca, cada una de las longitudes de onda que la componen forma un sistema de franjas con los máximos situados en posiciones distintas y con una interfranja diferente. Esto dificulta enormemente la visualización de la interferencia y nos llevará a definir el concepto de luz coherente e incoherente. Para saber un poco más. Otros recursos en la red. Video de la UNED sobre Thomas Young y su experimento. End of explanation """
mne-tools/mne-tools.github.io
0.21/_downloads/fa9fcfffc497146dd55d06cee6a5ec68/plot_creating_data_structures.ipynb
bsd-3-clause
import mne import numpy as np """ Explanation: Creating MNE's data structures from scratch MNE provides mechanisms for creating various core objects directly from NumPy arrays. End of explanation """ # Create some dummy metadata n_channels = 32 sampling_rate = 200 info = mne.create_info(n_channels, sampling_rate) print(info) """ Explanation: Creating :class:~mne.Info objects <div class="alert alert-info"><h4>Note</h4><p>for full documentation on the :class:`~mne.Info` object, see `tut-info-class`. See also `ex-array-classes`.</p></div> Normally, :class:mne.Info objects are created by the various data import functions. However, if you wish to create one from scratch, you can use the :func:mne.create_info function to initialize the minimally required fields. Further fields can be assigned later as one would with a regular dictionary. The following creates the absolute minimum info structure: End of explanation """ # Names for each channel channel_names = ['MEG1', 'MEG2', 'Cz', 'Pz', 'EOG'] # The type (mag, grad, eeg, eog, misc, ...) of each channel channel_types = ['grad', 'grad', 'eeg', 'eeg', 'eog'] # The sampling rate of the recording sfreq = 1000 # in Hertz # The EEG channels use the standard naming strategy. # By supplying the 'montage' parameter, approximate locations # will be added for them montage = 'standard_1005' # Initialize required fields info = mne.create_info(channel_names, sfreq, channel_types) info.set_montage(montage) # Add some more information info['description'] = 'My custom dataset' info['bads'] = ['Pz'] # Names of bad channels print(info) """ Explanation: You can also supply more extensive metadata: End of explanation """ # Generate some random data data = np.random.randn(5, 1000) # Initialize an info structure info = mne.create_info( ch_names=['MEG1', 'MEG2', 'EEG1', 'EEG2', 'EOG'], ch_types=['grad', 'grad', 'eeg', 'eeg', 'eog'], sfreq=100) custom_raw = mne.io.RawArray(data, info) print(custom_raw) """ Explanation: <div class="alert alert-info"><h4>Note</h4><p>When assigning new values to the fields of an :class:`mne.Info` object, it is important that the fields are consistent: - The length of the channel information field ``chs`` must be ``nchan``. - The length of the ``ch_names`` field must be ``nchan``. - The ``ch_names`` field should be consistent with the ``name`` field of the channel information contained in ``chs``.</p></div> Creating :class:~mne.io.Raw objects To create a :class:mne.io.Raw object from scratch, you can use the :class:mne.io.RawArray class, which implements raw data that is backed by a numpy array. The correct units for the data are: V: eeg, eog, seeg, emg, ecg, bio, ecog T: mag T/m: grad M: hbo, hbr Am: dipole AU: misc The :class:mne.io.RawArray constructor simply takes the data matrix and :class:mne.Info object: End of explanation """ # Generate some random data: 10 epochs, 5 channels, 2 seconds per epoch sfreq = 100 data = np.random.randn(10, 5, sfreq * 2) # Initialize an info structure info = mne.create_info( ch_names=['MEG1', 'MEG2', 'EEG1', 'EEG2', 'EOG'], ch_types=['grad', 'grad', 'eeg', 'eeg', 'eog'], sfreq=sfreq) """ Explanation: Creating :class:~mne.Epochs objects To create an :class:mne.Epochs object from scratch, you can use the :class:mne.EpochsArray class, which uses a numpy array directly without wrapping a raw object. The array must be of shape (n_epochs, n_chans, n_times). The proper units of measure are listed above. 
End of explanation """ # Create an event matrix: 10 events with alternating event codes events = np.array([ [0, 0, 1], [1, 0, 2], [2, 0, 1], [3, 0, 2], [4, 0, 1], [5, 0, 2], [6, 0, 1], [7, 0, 2], [8, 0, 1], [9, 0, 2], ]) """ Explanation: It is necessary to supply an "events" array in order to create an Epochs object. This is of shape (n_events, 3) where the first column is the sample number (time) of the event, the second column indicates the value from which the transition is made from (only used when the new value is bigger than the old one), and the third column is the new event value. End of explanation """ event_id = dict(smiling=1, frowning=2) """ Explanation: More information about the event codes: subject was either smiling or frowning End of explanation """ # Trials were cut from -0.1 to 1.0 seconds tmin = -0.1 """ Explanation: Finally, we must specify the beginning of an epoch (the end will be inferred from the sampling frequency and n_samples) End of explanation """ custom_epochs = mne.EpochsArray(data, info, events, tmin, event_id) print(custom_epochs) # We can treat the epochs object as we would any other _ = custom_epochs['smiling'].average().plot(time_unit='s') """ Explanation: Now we can create the :class:mne.EpochsArray object End of explanation """ # The averaged data data_evoked = data.mean(0) # The number of epochs that were averaged nave = data.shape[0] # A comment to describe to evoked (usually the condition name) comment = "Smiley faces" # Create the Evoked object evoked_array = mne.EvokedArray(data_evoked, info, tmin, comment=comment, nave=nave) print(evoked_array) _ = evoked_array.plot(time_unit='s') """ Explanation: Creating :class:~mne.Evoked Objects If you already have data that is collapsed across trials, you may also directly create an evoked array. Its constructor accepts an array of shape (n_chans, n_times) in addition to some bookkeeping parameters. The proper units of measure for the data are listed above. End of explanation """
kaizu/ecell4-lectures
lecture2.ipynb
mit
%matplotlib inline from ecell4 import * import matplotlib.pylab as plt import numpy as np import seaborn seaborn.set(font_scale=1.5) import matplotlib as mpl mpl.rc("figure", figsize=(6, 4)) """ Explanation: <p style="text-align:center">Lecture 2. 化学反応回路</p> <p style="text-align:center;font-size:150%;line-height:150%">海津一成</p> 化学反応を制御する機構 End of explanation """ def Hill(E, Km, nH): return E ** nH / (Km ** nH + E ** nH) data = np.array([[Hill(A, 0.5, 8) * Hill(B, 0.5, 8) for B in np.linspace(0, 1, 21)] for A in np.linspace(0, 1, 21)]) plt.imshow(data, cmap='coolwarm') plt.xlabel('A') plt.ylabel('B') plt.colorbar() plt.clim(0, 1) plt.show() """ Explanation: ブール演算 ANDゲート、ORゲート、NOTゲートといった基本論理回路が遺伝子発現系にもある ANDゲート 例えばANDゲートとは <table> <tr><th>A</th><th>B</th><th>A AND B</th></tr> <tr><td>0</td><td>0</td><td>0</td></tr> <tr><td>1</td><td>0</td><td>0</td></tr> <tr><td>0</td><td>1</td><td>0</td></tr> <tr><td>1</td><td>1</td><td>1</td></tr> </table> ふたつの入力のうち両方が活性でないと出力は活性化しない ふたつの制御を組み合せてみる $$\frac{\mathrm{d}[X]}{\mathrm{d}t}=f(A,B)-k[X]=\frac{[A]^{n_1}}{K_1^{n_1}+[A]^{n_1}}\times\frac{[B]^{n_2}}{K_2^{n_2}+[B]^{n_2}}-k[X]$$ AとBふたつのスイッチが入らなければ発現しない End of explanation """ with reaction_rules(): ~A > A | 0.2 ~B > B | 0.2 ~X > X | Hill(A, 0.2, 8) * Hill(B, 0.6, 8) X > ~X | 1.0 ~Y > Y | Hill(A, 0.2, 8) Y > ~Y | 1.0 run_simulation(5, species_list=['X', 'Y', 'A'], opt_args=['-', lambda t: 0.2, '--', lambda t: 0.6, '--']) """ Explanation: 試すと End of explanation """ def f(A, B, K1, K2, n1, n2): term1 = (A / K1) ** n1 term2 = (B / K2) ** n1 return (term1 + term2) / (1 + term1 + term2) """ Explanation: ORゲート ORゲートは <table> <tr><th>A</th><th>B</th><th>A OR B</th></tr> <tr><td>0</td><td>0</td><td>0</td></tr> <tr><td>1</td><td>0</td><td>1</td></tr> <tr><td>0</td><td>1</td><td>1</td></tr> <tr><td>1</td><td>1</td><td>1</td></tr> </table> 入力のうちどちらか一方があれば良い これは例えば $$\frac{\mathrm{d}[X]}{\mathrm{d}t}=f(A,B)-k[X]=\frac{\left([A]/K_1\right)^{n_1}+\left([B]/K_2\right)^{n_2}}{1+\left([A]/K_1\right)^{n_1}+\left([B]/K_2\right)^{n_2}}-k[X]$$ End of explanation """ data = np.array([[f(A, B, 0.5, 0.5, 8, 8) for B in np.linspace(0, 1, 21)] for A in np.linspace(0, 1, 21)]) plt.imshow(data, cmap='coolwarm') plt.xlabel('A') plt.ylabel('B') plt.colorbar() plt.clim(0, 1) plt.show() with reaction_rules(): ~A > A | 0.2 ~B > B | 0.2 ~X > X | f(A, B, 0.2, 0.6, 8, 8) X > ~X | 1.0 ~Y > Y | Hill(B, 0.6, 8) Y > ~Y | 1.0 run_simulation(5, species_list=['X', 'Y', 'A'], opt_args=['-', lambda t: 0.2, '--', lambda t: 0.6, '--']) """ Explanation: 複雑に見えるが、$K_1=K_2$かつ$n_1=n_2$の場合を考えればヒル式と同じ End of explanation """ def Hill_compl(E, Km, nH): return Km ** nH / (Km ** nH + E ** nH) """ Explanation: 実は分解を制御しても似たようなことはできる $$\frac{\mathrm{d}[X]}{\mathrm{d}t}=f(A)-g(B)[X]$$ $$=\frac{[A]^{n_1}}{K_1^{n_1}+[A]^{n_1}}-\left[k_1+k_2\frac{K_2^{n_2}}{K_2^{n_2}+[B]^{n_2}}\right][X]$$ End of explanation """ data = np.array([[Hill(A, 0.5, 8) / (1 + 100 * Hill_compl(B, 0.3, 8)) for B in np.linspace(0, 1, 21)] for A in np.linspace(0, 1, 21)]) plt.imshow(data, cmap='coolwarm') plt.xlabel('A') plt.ylabel('B') plt.colorbar() plt.clim(0, 1) plt.show() with reaction_rules(): ~A > A | 0.2 ~B > B | 0.2 ~X > X | Hill(A, 0.2, 8) X > ~X | (1.0 + 100 * Hill_compl(B, 0.36, 8)) * X ~Y > Y | Hill(A, 0.2, 8) Y > ~Y | 1.0 run_simulation(5, species_list=['X', 'Y', 'A'], opt_args=['-', lambda t: 0.2, '--', lambda t: 0.36, '--'], opt_kwargs={'ylim': (0, 1)}) """ Explanation: 上の式においてA、Bがそれぞれ0もしくは十分に存在する場合の定常状態を考えると <table> <tr><th>A</th><th>B</th><th>A OR B</th></tr> 
<tr><td>0</td><td>0</td><td>0</td></tr> <tr><td>$\infty$</td><td>0</td><td>$\frac{1}{k1+k2}$</td></tr> <tr><td>0</td><td>$\infty$</td><td>0</td></tr> <tr><td>$\infty$</td><td>$\infty$</td><td>$\frac{1}{k1}$</td></tr> </table> $k_2$が十分大きければ、ANDゲートが実現できる End of explanation """ x = np.linspace(0, 1, 101) nH = 8 plt.plot(x, [Hill(xi, 0.5, nH) for xi in x], label='Hill eq.') plt.plot(x, [Hill_compl(xi, 0.5, nH) for xi in x], label='Complementary Hill eq.') plt.legend(loc='best') plt.xlabel('INPUT') plt.ylabel('OUTPUT') plt.show() """ Explanation: NOTゲート 先ほどみたようにヒル式$\frac{[A]^n}{K^n+[A]^n}$の相補関数$\frac{K^n}{K^n+[A]^n}$ $\left(=1-\frac{[A]^n}{K^n+[A]^n}\right)$はちょうどNOTゲートのようなことができる End of explanation """ with reaction_rules(): A > B | 1 B > C | 1 run_simulation(5, {'A': 1}) """ Explanation: フィードバック制御 これまでは一方向の制御だったが回路のようにすると時間的な挙動を制御できるようになる 例えば $$A{\rightarrow}B{\rightarrow}C$$ のような反応だとAからBに流れてさらに下流のCへと蓄積していく End of explanation """ with reaction_rules(): A > B | 1 B > C | 1 obs = run_simulation(5, {'A': 1}, return_type='observer') with reaction_rules(): A > B | 1 * Hill_compl(C, 0.1, 8) * A B > C | 1 run_simulation(5, {'A': 1}, opt_args=('-', obs, '--')) """ Explanation: ここにCの量が多くなると$A{\rightarrow}B$の反応を抑制するように制御を加えてみると End of explanation """ with reaction_rules(): ~A > A | 1 A > B | 1 > C | 1 > ~C | 1 run_simulation(10) """ Explanation: 今度はさきほどのものに発現と分解を加えてみる $${\emptyset}{\rightarrow}A{\rightarrow}B{\rightarrow}C{\rightarrow}{\emptyset}$$ Aだけが発現して、Cだけが分解する End of explanation """ with reaction_rules(): ~A > A | Hill_compl(C, 0.5, 8) A > B | 1 > C | 1 > ~C | 1 run_simulation(16, opt_args=['-', lambda t: 0.5, '--'], opt_kwargs={'ylim': (0, 1)}) """ Explanation: 発現にCの量に応じた抑制を加えると End of explanation """ with reaction_rules(): ~A > A | Hill_compl(C, 0.3, 8) > ~A | 1 ~B > B | Hill_compl(A, 0.5, 8) > ~B | 1 ~C > C | Hill_compl(B, 0.7, 8) > ~C | 1 run_simulation(np.linspace(0, 14, 201)) """ Explanation: ネガティブフィードバック制御 さきほどのものは減衰していたが、ネガティブフィードバックを繰り返していくと安定な振動系にもできる 所謂、Repressilatorのようなもの(Wikipediaより) End of explanation """ with reaction_rules(): ~A > A | Hill_compl(C, 0.3, 4) > ~A | 1 ~B > B | Hill_compl(A, 0.5, 4) > ~B | 1 ~C > C | Hill_compl(B, 0.7, 4) > ~C | 1 run_simulation(np.linspace(0, 14, 201)) """ Explanation: この場合はヒル係数が小さくなる(スイッチがゆるくなる)と振動は消えてしまう End of explanation """ with reaction_rules(): A > B | 1 > C | 1 run_simulation(8, {'A': 1}) """ Explanation: フィードフォワード制御 逆に下流の反応を上流の分子によって制御することもある また単純な一連の反応を考える $$A{\rightarrow}B{\rightarrow}C$$ End of explanation """ with reaction_rules(): A > B | 1 > C | 1 obs = run_simulation(8, {'A': 1}, return_type='observer') with reaction_rules(): A > B | 1 > C | Hill_compl(A, 0.05, 8) * B run_simulation(8, {'A': 1}, opt_args=('-', obs, '--')) """ Explanation: ここにAの量が多いと$B{\rightarrow}C$の反応を抑制するように制御を加えてみると End of explanation """ with reaction_rules(): ~A > A | 1 > ~A | 1 ~B > B | Hill(A, 0.5, 8) > ~B | 1 ~C > C | Hill(A, 0.5, 8) * Hill_compl(B, 0.5, 8) > ~C | 1 run_simulation(5, opt_args=('-', lambda t: 0.5, '--')) """ Explanation: 制御の仕方を変えてみると Incoherent Feedforward Loop (FFL) End of explanation """ with reaction_rules(): ~B > B | Hill(A, 0.2, 8) > ~B | 1 ~C > C | Hill(A, 0.2, 8) * Hill(B, 0.5, 8) > ~C | 1 from ecell4_base.core import * from ecell4_base import ode m = get_model() w = ode.World() sim = ode.Simulator(w, m) obs = FixedIntervalNumberObserver(0.01, ['A', 'B', 'C']) sim.run(1, obs) w.set_value(Species('A'), 1); sim.initialize() sim.run(0.3, obs) w.set_value(Species('A'), 0) sim.initialize() sim.run(4, obs) w.set_value(Species('A'), 1) 
sim.initialize() sim.run(1.5, obs) w.set_value(Species('A'), 0) sim.initialize() sim.run(3.2, obs) viz.plot_number_observer(obs, '-', lambda t: 0.5, '--') """ Explanation: Coherent Feedforward Loop (FFL) End of explanation """
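# A minimal sketch, not part of the original lecture: a NOR gate assembled from the
# complementary Hill function defined above. The rate constants and thresholds are
# arbitrary choices for illustration.
with reaction_rules():
    ~A > A | 0.2
    ~B > B | 0.2
    ~X > X | Hill_compl(A, 0.5, 8) * Hill_compl(B, 0.5, 8)   # X is made only while both inputs stay low
    X > ~X | 1.0

run_simulation(5, species_list=['X', 'A', 'B'])
"""
Explanation: A hedged extra example: multiplying two complementary Hill terms gives a NOR-like response (output only when neither A nor B is present), following the same pattern as the AND and OR gates above; the parameter values are ours.
End of explanation
"""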
DOV-Vlaanderen/pydov
docs/notebooks/search_gecodeerde_lithologie.ipynb
mit
%matplotlib inline import inspect, sys # check pydov path import pydov """ Explanation: Example of DOV search methods for interpretations (gecodeerde lithologie) Use cases explained below Get 'gecodeerde lithologie' in a bounding box Get 'gecodeerde lithologie' with specific properties within a distance from a point Get 'gecodeerde lithologie' in a bounding box with specific properties Get 'gecodeerde lithologie' based on fields not available in the standard output dataframe Get 'gecodeerde lithologie' data, returning fields not available in the standard output dataframe End of explanation """ from pydov.search.interpretaties import GecodeerdeLithologieSearch itp = GecodeerdeLithologieSearch() """ Explanation: Get information about the datatype 'Gecodeerde lithologie' End of explanation """ itp.get_description() """ Explanation: A description is provided for the 'Gecodeerde lithologie' datatype: End of explanation """ fields = itp.get_fields() # print available fields for f in fields.values(): print(f['name']) """ Explanation: The different fields that are available for objects of the 'Gecodeerde lithologie' datatype can be requested with the get_fields() method: End of explanation """ fields['Datum'] """ Explanation: You can get more information of a field by requesting it from the fields dictionary: * name: name of the field * definition: definition of this field * cost: currently this is either 1 or 10, depending on the datasource of the field. It is an indication of the expected time it will take to retrieve this field in the output dataframe. * notnull: whether the field is mandatory or not * type: datatype of the values of this field End of explanation """ from pydov.util.location import Within, Box df = itp.search(location=Within(Box(153145, 206930, 153150, 206935))) df.head() """ Explanation: Example use cases Get 'Gecodeerde lithologie' in a bounding box Get data for all the 'Gecodeerde lithologie' interpretations that are geographically located within the bounds of the specified box. The coordinates are in the Belgian Lambert72 (EPSG:31370) coordinate system and are given in the order of lower left x, lower left y, upper right x, upper right y. End of explanation """ for pkey_interpretatie in set(df.pkey_interpretatie): print(pkey_interpretatie) """ Explanation: The dataframe contains one 'Gecodeerde lithologie' interpretation where five layers ('laag') were identified. The available data are flattened to represent unique attributes per row of the dataframe. Using the pkey_interpretatie field one can request the details of this interpretation in a webbrowser: End of explanation """ [i for i,j in inspect.getmembers(sys.modules['owslib.fes'], inspect.isclass) if 'Property' in i] """ Explanation: Get 'Gecodeerde lithologie' with specific properties within a distance from a point Next to querying interpretations based on their geographic location within a bounding box, we can also search for interpretations matching a specific set of properties. For this we can build a query using a combination of the 'Gecodeerde lithologie' fields and operators provided by the WFS protocol. 
A list of possible operators can be found below: End of explanation """ from owslib.fes import And, PropertyIsGreaterThan, PropertyIsEqualTo from pydov.util.location import WithinDistance, Point query = And([PropertyIsEqualTo(propertyname='Betrouwbaarheid', literal='goed'), PropertyIsGreaterThan(propertyname='diepte_tot_m', literal='20'), ]) df = itp.search(query=query, location=WithinDistance(Point(153145, 206930), 1000)) df.head() """ Explanation: In this example we build a query using the PropertyIsGreaterThan and PropertyIsEqualTo operators to find all interpretations that are at least 20 m deep, that are deemed appropriate for a range of 1 km from a defined point: End of explanation """ for pkey_interpretatie in set(df.pkey_interpretatie): print(pkey_interpretatie) """ Explanation: Once again we can use the pkey_interpretatie as a permanent link to the information of these interpretations: End of explanation """ from owslib.fes import PropertyIsEqualTo query = PropertyIsEqualTo( propertyname='Type_proef', literal='Boring') df = itp.search( location=Within(Box(153145, 206930, 154145, 207930)), query=query ) df.head() """ Explanation: Get 'Gecodeerde lithologie' in a bounding box based on specific properties We can combine a query on attributes with a query on geographic location to get the interpretations within a bounding box that have specific properties. The following example requests the interpretations of boreholes only, within the given bounding box. (Note that the datatype of the literal parameter should be a string, regardless of the datatype of this field in the output dataframe.) End of explanation """ for pkey_interpretatie in set(df.pkey_interpretatie): print(pkey_interpretatie) """ Explanation: We can look at one of the interpretations in a webbrowser using its pkey_interpretatie: End of explanation """ from owslib.fes import And, PropertyIsEqualTo, PropertyIsLessThan query = And([PropertyIsEqualTo(propertyname='gemeente', literal='Antwerpen'), PropertyIsLessThan(propertyname='Datum', literal='2010-01-01')] ) df = itp.search(query=query, return_fields=('pkey_interpretatie', 'Datum')) df.head() """ Explanation: Get 'Gecodeerde lithologie' based on fields not available in the standard output dataframe To keep the output dataframe size acceptable, not all available WFS fields are included in the standard output. However, one can use this information to select interpretations as illustrated below. For example, make a selection of the interpretations in municipality the of Antwerp, before 1/1/1990: !remark: mind that the municipality attribute is merely an attribute that is defined by the person entering the data. It can be ok, empty, outdated or wrong! End of explanation """ query = PropertyIsEqualTo( propertyname='gemeente', literal='Leuven') df = itp.search(query=query, return_fields=('pkey_interpretatie', 'pkey_boring', 'x', 'y', 'Z_mTAW', 'gemeente', 'Auteurs', 'Proefnummer')) df.head() """ Explanation: Get 'Gecodeerde lithologie' data, returning fields not available in the standard output dataframe As denoted in the previous example, not all available fields are available in the default output frame to keep its size limited. However, you can request any available field by including it in the return_fields parameter of the search: End of explanation """ # import the necessary modules (not included in the requirements of pydov!) 
import folium from folium.plugins import MarkerCluster from pyproj import Transformer # convert the coordinates to lat/lon for folium def convert_latlon(x1, y1): transformer = Transformer.from_crs("epsg:31370", "epsg:4326", always_xy=True) x2,y2 = transformer.transform(x1, y1) return x2, y2 df['lon'], df['lat'] = zip(*map(convert_latlon, df['x'], df['y'])) # convert to list loclist = df[['lat', 'lon']].values.tolist() # initialize the Folium map on the centre of the selected locations, play with the zoom until ok fmap = folium.Map(location=[df['lat'].mean(), df['lon'].mean()], zoom_start=12) marker_cluster = MarkerCluster().add_to(fmap) for loc in range(0, len(loclist)): folium.Marker(loclist[loc], popup=df['Proefnummer'][loc]).add_to(marker_cluster) fmap """ Explanation: Visualize results Using Folium, we can display the results of our search on a map. End of explanation """
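# A minimal sketch, not part of the original tutorial: persisting the selection for
# later use. The output file names are arbitrary examples.
fmap.save('gecodeerde_lithologie_leuven.html')              # standalone interactive Folium map
df.to_csv('gecodeerde_lithologie_leuven.csv', index=False)  # flat table with the requested fields
"""
Explanation: Saving the results
A hedged final step: the Folium map can be written to a standalone HTML file and the result dataframe to CSV with standard folium/pandas calls; the file names used here are only examples.
End of explanation
"""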
mangeshjoshi819/ml-learn-python3
Some Advanced Python.ipynb
mit
class Person:
    # scope of the class is the indented block below
    institute = "IIT"   # class variable shared by all instances

    def __init__(self, name, department):
        self.name = name
        self.department = department

    def getName(self):
        return self.name

p = Person("mangesh", "physics")
p.getName()
"""
Explanation: Advanced Python
Define a class
class variables
self is required in the method signature and instance variables are reached through it, e.g. self.instanceVariable
An explicit __init__ constructor is not required
End of explanation
"""
help(map)
store1 = [10, 12, 8, 3, 5]
store2 = [5, 8, 12, 3, 6]
cheapest = map(min, store1, store2)
cheapest
"""
Explanation: MAP FUNCTION
Some functional programming operations
map(function, iterable, iterable, ...)
End of explanation
"""
for c in cheapest:
    print(c)

people = ['Chu. Mangesh Joshi', 'Chu. Harsh Ranjan', 'Dr. Manoj Pawar', 'Dr. Ashish Shettywar']

def split_title_and_name(person):
    return person.split(" ")[0] + person.split(" ")[-1]

list(map(split_title_and_name, people))
"""
Explanation: We get lazy evaluation here, so the map is not computed yet and we just get the map object back
End of explanation
"""
my_func = lambda a, b: a*b
a = my_func(2, 3)
a

for p in people:
    print((lambda person: split_title_and_name(person))(p))

list(map(split_title_and_name, people)) == list(map((lambda person: person.split()[0] + person.split()[-1]), people))
"""
Explanation: Lambda : Anonymous Functions
Lambdas should not be used for complex functions; however, they are very handy for writing simple functions inline instead of defining named functions
End of explanation
"""
## Method 1
# Taking range as 10 to avoid a lot of printing
l = []
for i in range(1, 10):
    if i % 2 == 0:
        l.append(i)
print(l)

## Method 2
## format is [expression for item in iterable if condition]
print([i for i in range(1, 10) if i % 2 == 0])

## convert this function to a list comprehension:
def times_tables():
    lst = []
    for i in range(10):
        for j in range(10):
            lst.append(i*j)
    return lst

l = [i*j for i in range(10) for j in range(10)]
"""
Explanation: Example of list comprehension
Suppose we want to get a list of even numbers between 1 and 1000. There are various ways of doing it; we show two, one with normal loops and the other with a list comprehension
End of explanation
"""
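# A minimal sketch, not part of the original notebook: the same comprehension idea for
# dictionaries and generators, reusing the people list defined above.
# Map each last name to its title, e.g. {'Joshi': 'Chu.', ...}
titles = {p.split()[-1]: p.split()[0] for p in people}
print(titles)

# A generator expression is like a list comprehension but lazily evaluated (compare map above)
squares = (i*i for i in range(1, 10))
print(sum(squares))
"""
Explanation: Dict and generator comprehensions
A hedged follow-up to the list comprehension example: the same syntax also builds dictionaries, and with round brackets it produces a lazy generator; the titles/squares names are ours.
End of explanation
"""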
qdev-dk/Majorana
examples/Qcodes example with Alazar ATS9360.ipynb
gpl-3.0
%matplotlib inline import numpy as np import matplotlib.pyplot as plt import qcodes as qc import qcodes.instrument.parameter as parameter import qcodes.instrument_drivers.AlazarTech.ATS9360 as ATSdriver from qdev_wrappers.alazar_controllers.ATSChannelController import ATSChannelController from qdev_wrappers.alazar_controllers.alazar_channel import AlazarChannel #import qcodes.instrument_drivers.AlazarTech.acq_helpers as helpers from qcodes.station import Station import logging logging.basicConfig(level=logging.INFO) from qcodes.instrument.parameter import ManualParameter import qcodes """ Explanation: Qcodes example notebook for Alazar card ATS9360 and acq controllers End of explanation """ # Create the ATS9360 instrument alazar = ATSdriver.AlazarTech_ATS9360(name='Alazar') # Print all information about this Alazar card alazar.get_idn() # Configure all settings in the Alazar card alazar.config(clock_source='INTERNAL_CLOCK', sample_rate=1_000_000_000, clock_edge='CLOCK_EDGE_RISING', decimation=1, coupling=['DC','DC'], channel_range=[.4,.4], impedance=[50,50], trigger_operation='TRIG_ENGINE_OP_J', trigger_engine1='TRIG_ENGINE_J', trigger_source1='EXTERNAL', trigger_slope1='TRIG_SLOPE_POSITIVE', trigger_level1=160, trigger_engine2='TRIG_ENGINE_K', trigger_source2='DISABLE', trigger_slope2='TRIG_SLOPE_POSITIVE', trigger_level2=128, external_trigger_coupling='DC', external_trigger_range='ETR_2V5', trigger_delay=0, timeout_ticks=0, aux_io_mode='AUX_IN_AUXILIARY', # AUX_IN_TRIGGER_ENABLE for seq mode on aux_io_param='NONE' # TRIG_SLOPE_POSITIVE for seq mode on ) """ Explanation: NB: See ATS9360 example notebook for general commands End of explanation """ # Create the acquisition controller which will take care of the data handling and tell it which # alazar instrument to talk to. Explicitly pass the default options to the Alazar. # Dont integrate over samples but avarage over records myctrl = ATSChannelController(name='my_controller', alazar_name='Alazar') """ Explanation: Example 1 Pulls the raw data the alazar acquires averaged over records and buffers. End of explanation """ station = qc.Station(alazar, myctrl) """ Explanation: Put the Alazar and the controller in a station so we ensure that all parameters are captured End of explanation """ myctrl.int_time.set? myctrl.int_time._latest myctrl.int_delay(2e-7) myctrl.int_time(2e-6) print(myctrl.samples_per_record()) #myctrl.num_avg(1000) """ Explanation: This controller is designed to be highlevel and it is not possible to directly set number of records, buffers and samples. The number of samples is indirecly controlled by the integration time and integration delay and the number of averages controls the number of buffers and records acquired End of explanation """ myctrl.channels """ Explanation: Per default the controller does not have any channels assiated with it. End of explanation """ chan1 = AlazarChannel(myctrl, 'mychan', demod=False, integrate_samples=False) myctrl.channels.append(chan1) chan1.num_averages(1000) chan1.alazar_channel('A') chan1.prepare_channel() # Measure this data1 = qc.Measure(chan1.data).run() qc.MatPlot(data1.my_controller_mychan_data) """ Explanation: 1D samples trace Lets define a channel were we avarege over buffers and records but not over samples. This will give us a time series with a x axis defined by int_time, int_delay and the sampling rate. First we create a channel and set the relevant parameters. We may choose to append the channel to the controllers build in list of channels for future reference. 
End of explanation """ %%time qc.Measure(chan1.data).run() """ Explanation: We can measure the time taken to do a measurement End of explanation """ chan1d = AlazarChannel(myctrl, 'mychan_demod_1', demod=True, integrate_samples=False) myctrl.channels.append(chan1d) chan1d.num_averages(1000) chan1d.alazar_channel('A') chan1d.demod_freq(1e6) chan1d.demod_type('magnitude') chan1d.prepare_channel() # Measure this data1d = qc.Measure(chan1d.data).run() qc.MatPlot(data1d.my_controller_mychan_demod_1_data) """ Explanation: Demodulation We may optionally chose to demodulate the data that we acquire using a software demodulator End of explanation """ chan1d2 = AlazarChannel(myctrl, 'mychan_demod_2', demod=True, integrate_samples=False) myctrl.channels.append(chan1d2) chan1d2.num_averages(1000) chan1d2.alazar_channel('A') chan1d2.demod_freq(2e6) chan1d2.demod_type('magnitude') chan1d2.prepare_channel() # Measure this data1d = qc.Measure(chan1d2.data).run() qc.MatPlot(data1d.my_controller_mychan_demod_2_data) myctrl.channels """ Explanation: We are free to add more demodulators with different frequencies End of explanation """ %%time data = qc.Measure(myctrl.channels.data).run() data1 = qc.Measure(myctrl.channels.data).run() plot = qc.MatPlot() plot.add(data.my_controller_mychan_data) plot.add(data.my_controller_mychan_demod_1_data) plot.add(data.my_controller_mychan_demod_2_data) """ Explanation: We can get the data from multiple chanels in one provided that the shape (buffers,records,samples) is the same, The time overhead is fairly small as we are only capturing the data once. End of explanation """ chan2 = AlazarChannel(myctrl, 'myrecchan', demod=False, average_records=False) myctrl.channels.append(chan2) chan2.num_averages(100) chan2.records_per_buffer(55) chan2.alazar_channel('A') chan2.prepare_channel() # Measure this data2 = qc.Measure(myctrl.channels[-1].data).run() qc.MatPlot(data2.my_controller_myrecchan_data) """ Explanation: 1D records trace We can also do a 1D trace of records End of explanation """ chan2d = AlazarChannel(myctrl, 'myrecchan_D', demod=True, average_records=False) myctrl.channels.append(chan2d) print(myctrl.int_delay()) print(myctrl.int_time()) myctrl.int_time._latest chan2d.alazar_channel('A') chan2d.demod_freq(1e6) chan2d.demod_type('magnitude') chan2d.num_averages(100) chan2d.records_per_buffer(55) chan2d.alazar_channel('A') chan2d.prepare_channel() # Measure this data2d = qc.Measure(myctrl.channels[-1].data).run() qc.MatPlot(data2d.my_controller_myrecchan_D_data) myctrl.channels myctrl.channels[-2:] data = qc.Measure(myctrl.channels[-2:].data).run() plot = qc.MatPlot() plot.add(data.my_controller_myrecchan_data ) plot.add(data.my_controller_myrecchan_D_data) """ Explanation: Again it is posssible to demodulate the data End of explanation """ chan3 = AlazarChannel(myctrl, 'myrecchan', demod=False, average_buffers=False) myctrl.channels.append(chan3) chan3.num_averages(100) chan3.buffers_per_acquisition(100) chan3.alazar_channel('A') alazar.buffer_timeout._set(10000) alazar.buffer_timeout._set_updated() chan3.prepare_channel() # Measure this data3 = qc.Measure(chan3.data).run() qc.MatPlot(data3.my_controller_myrecchan_data) print(alazar.buffer_timeout()) """ Explanation: 1D Buffer trace We can also do a 1D trace over buffers in the same way End of explanation """ chan3d = AlazarChannel(myctrl, 'myrecchan_d', demod=True, average_buffers=False) myctrl.channels.append(chan3d) chan3d.num_averages(100) chan3d.buffers_per_acquisition(100) chan3d.alazar_channel('A') 
chan3d.demod_freq(2e6)
chan3d.demod_type('magnitude')
alazar.buffer_timeout._set(10000)
alazar.buffer_timeout._set_updated()
chan3d.prepare_channel()
# Measure this
data3 = qc.Measure(chan3d.data).run()
qc.MatPlot(data3.my_controller_myrecchan_d_data)
print(alazar.buffer_timeout())
data = qc.Measure(myctrl.channels[-2:].data).run()
plot = qc.MatPlot()
plot.add(data.my_controller_myrecchan_data)
plot.add(data.my_controller_myrecchan_d_data)
"""
Explanation: And demodulate this
End of explanation
"""
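# Aside (illustration only, not the actual qdev_wrappers implementation): the idea behind the
# software 'magnitude' demodulation used above can be sketched in plain numpy by mixing the trace
# with a reference at the demodulation frequency and taking the magnitude of the average.
import numpy as np

def demodulate_magnitude(trace, demod_freq, sample_rate):
    # build a complex reference at the demodulation frequency
    t = np.arange(len(trace)) / sample_rate
    reference = np.exp(2j * np.pi * demod_freq * t)
    # mix and average; the magnitude is proportional to the signal amplitude at demod_freq
    return np.abs(np.mean(trace * reference))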
rcrehuet/Python_for_Scientists_2017
notebooks/extras/Numpy arrays. Data manipulation.ipynb
gpl-3.0
!head ../../data/profasi/n0/rt
"""
Explanation: Numpy arrays. Data manipulation
In this notebook we are going to work with some numerical data that we need to re-format.
Profasi is a Monte Carlo code for protein simulation. It can run Parallel Tempering simulations where each replica runs in a processor and exchanges temperatures with the other replicas. It is done this way because it is more efficient to exchange temperatures than to exchange molecular coordinates between processors. The problem is that in the output files the temperatures are mixed. Usually one needs the data for all the sampled molecules at the same temperature.
The above explanation is just to say that we need to reorder the data from different files and generate re-ordered files. All our input files are called rt and each resides in a directory called ni, where i corresponds to the processor. If the simulation was run with 16 processors, i will go from 0 to 15.
The rt file is a text file that contains just columns of numbers, separated by spaces. Here is how it looks like:
End of explanation
"""
End of explanation """ ene_temp = np.zeros_like(all_enes) ene_temp = ene_temp.reshape([len(temperatures), -1, all_enes.shape[1]]) for ti in temperatures.astype(int): ene_temp[ti] = all_enes[all_enes[:,1]==ti, :] """ Explanation: Now we eneed to extract the energies from the array based on the temperature value, and create separate sub-arrays. Array elements can be selected with Boolean arrays. This is called fancy indexing. We start by difining an empty array and then fill it in: End of explanation """ import matplotlib.pyplot as plt %matplotlib inline plt.rcParams['figure.figsize'] = (10.0, 8.0) plt.plot(ene_temp[5,:,0], ene_temp[5,:,2]) """ Explanation: The last step is to keep only those energy values that are beyond the equilibration point. So we want only to keep data from, a certain point. Let's plot the energy vs. iteration to see how it looks line. We'll plot temperature 5 as this is the lowest temperature (Profasi order from high to low). End of explanation """ plt.plot(ene_temp[5,:,0],'x') """ Explanation: Ugly, isn't it? The array is not ordered by iteration. The order can be seen here: End of explanation """ order = ene_temp[:, :, 0].argsort() ene_temp = ene_temp[np.arange(ene_temp.shape[0])[:, np.newaxis], order] plt.subplot(2,1,1) plt.plot(ene_temp[5,:,0],'x') plt.subplot(2,1,2) plt.plot(ene_temp[5,:,0], ene_temp[5,:,2]) """ Explanation: We have the structures at the correct temperature but still ordered from the 6 replicas that were running. Let's order them with respect to the first column. We cannot use sort here, because we want to use the order of the first column to order all the raw elements. We can get that order with argsort and then apply it to the array. The problem is that the sizes of the sorting order do no agree with the sizes of ene_temp. To solve that we need to do a trick which I pesonally find very cumbersome. The reason is we need to broadcas correctly the dimensions of order into ene_temp. It's simplers to understand if you see that for the first temperatures we want: ene_temp[0, order[0]] ene_temp[1, order[1]] and so on. It would seem that ene_temp[:, order[:]] but this performs the broadcasting in the wrong axis. Instead, we need to transpose the first axis, because what we are actually doing is: ene_temp[[0], order[0]] ene_temp[[1], order[1]] And this can be done creating the vector [[0], [1], [2]... which is done with: np.arange(ene_temp.shape[0])[:, np.newaxis]. End of explanation """ np.save('energies_temperatures', ene_temp[ene_temp[:,:,0]>2000, :], ) """ Explanation: Finally we can select and save the submatrix from iteration 2000 onwards: End of explanation """ %%timeit for ti in temperatures.astype(int): ene_temp[ti] = all_enes[all_enes[:,1]==ti, :] """ Explanation: Advanced Topic: optimizing with numba In the previous section, if there were $N$ temperatures we cycles the all_enes array $N$ times, which is not very efficient. We could potentially make if faster by running this in a single step. We create an empty arrray and fill it in with the correct values. We first time our initial approach: End of explanation """ ene_temp2 = np.zeros_like(ene_temp) filled = np.zeros(len(temperatures), np.int) for row in all_enes: ti = int(row[1]) ene_temp2[ti, filled[ti]] = row filled[ti] +=1 """ Explanation: Now we loop though all the rows and put each row to its corrrect first axis dimension. We need to keep an array of the filled positions. 
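A related trick (an aside, not part of the original analysis): np.unique can also return, for every row, the index of its temperature in the unique array, which saves a separate lookup when grouping later.
temperatures, temp_idx = np.unique(all_enes[:, 1], return_inverse=True)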
End of explanation """ np.all(ene_temp2==ene_temp) %%timeit filled = np.zeros(len(temperatures), np.int) for row in all_enes: ti = int(row[1]) ene_temp2[ti, filled[ti]] = row filled[ti] +=1 """ Explanation: We check we are still getting the same result, and we time it: End of explanation """ import numba """ Explanation: Good (ironically)! Two orders of magnitude slower than the first approach... The reason is that in the previous approach we were using numpy fast looping abilities, whereas now the loops are implemented in pure python and therefore are much slower. This is the typical case where numba can increase the performance of such loops. End of explanation """ def get_temperatures(array_in, array_out, filled): for r in range(array_in.shape[0]): ti = int(array_in[r,1]) for j in range(array_in.shape[1]): array_out[ti, filled[ti], j] = array_in[r,j] filled[ti] +=1 return array_out %%timeit num_temp = len(temperatures) m = all_enes.shape[0] n = all_enes.shape[1] m = m // num_temp ene_temp = np.zeros((num_temp, m,n )) filled = np.zeros(num_temp, np.int) get_temperatures(all_enes, ene_temp, filled) """ Explanation: We first write a function of our lines. We avoid creating arrays into that function as those cannot be optimized with numba. We test our approach and check that the timings are the same. End of explanation """ numba_get_temperatures = numba.jit(get_temperatures,nopython=True) %%timeit num_temp = len(temperatures) m = all_enes.shape[0] n = all_enes.shape[1] m = m // num_temp ene_temp3 = np.zeros((num_temp, m,n )) filled = np.zeros(num_temp, np.int) numba_get_temperatures(all_enes, ene_temp3, filled) """ Explanation: Now we can pass this function to numba. The nopython option tell numba not to create object code which is as slow as python code. That is why we created the arrays outside the function. We also check the timings. End of explanation """ @numba.jit def numba2_get_temperatures(array_in, num_temp): m = all_enes.shape[0] n = all_enes.shape[1] m = m // num_temp array_out = np.zeros((num_temp, m,n )) filled = np.zeros(num_temp, np.int) for r in range(array_in.shape[0]): ti = int(array_in[r,1]) for j in range(array_in.shape[1]): array_out[ti, filled[ti], j] = array_in[r,j] filled[ti] +=1 return array_out %%timeit num_temp = len(temperatures) ene_temp4 = numba2_get_temperatures(all_enes, num_temp) np.all(numba2_get_temperatures(all_enes, num_temp)==ene_temp) """ Explanation: Wow! Three orders of magnitude faster than the python and one order faster than our original numpy code (with only 6 temperatures!). But having to declare all arrays outside is ugly. Is there a workaroud? Yes! Numba is clever enough to separate the loops from the array creation, and optimize the loops. This called loop-lifting or loop-jitting. We need to remove the nopython option as part of the code will be object like, but we see that it is as efficient as before. Here we also use a decoratior instead of a function call. The results are exactly the same, it just gives a shorter syntax. End of explanation """
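As another aside (not from the original notebook), the same grouping can be done without an explicit Python loop by stably sorting on the temperature column, under the assumption that every temperature has the same number of rows:
order = np.argsort(all_enes[:, 1], kind="stable")
ene_temp_alt = all_enes[order].reshape(len(temperatures), -1, all_enes.shape[1])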
probml/pyprobml
notebooks/book1/15/cnn1d_sentiment_torch.ipynb
mit
import numpy as np
import matplotlib.pyplot as plt
import math
from IPython import display

try:
    import torch
except ModuleNotFoundError:
    %pip install -qq torch
    import torch
from torch import nn
from torch.nn import functional as F
from torch.utils import data

import collections
import re
import random
import os
import requests
import zipfile
import tarfile
import hashlib
import time

np.random.seed(seed=1)
torch.manual_seed(1)

!mkdir figures # for saving plots
"""
Explanation: Please find jax implementation of this notebook here: https://colab.research.google.com/github/probml/pyprobml/blob/master/notebooks/book1/15/cnn1d_sentiment_jax.ipynb
<a href="https://colab.research.google.com/github/Nirzu97/pyprobml/blob/cnn1d-sentiment-torch/notebooks/cnn1d_sentiment_torch.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
1d CNNs for sentiment classification
We use 1d CNNs for IMDB movie review classification. Based on sec 15.3 of http://d2l.ai/chapter_natural-language-processing-applications/sentiment-analysis-cnn.html
End of explanation
"""
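# Aside (illustration only, not from the original notebook): the textCNN built later feeds
# nn.Conv1d with tensors shaped (batch, channels, sequence length), where the channel axis carries
# the word-vector dimension. A minimal shape check with made-up sizes:
import torch
from torch import nn

conv = nn.Conv1d(in_channels=100, out_channels=32, kernel_size=3)
fake_batch = torch.randn(4, 100, 500)   # 4 reviews, 100-d embeddings, 500 tokens each
print(conv(fake_batch).shape)           # torch.Size([4, 32, 498])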
End of explanation """ def tokenize(lines, token="word"): """Split text lines into word or character tokens.""" if token == "word": return [line.split() for line in lines] elif token == "char": return [list(line) for line in lines] else: print("ERROR: unknown token type: " + token) class Vocab: """Vocabulary for text.""" def __init__(self, tokens=None, min_freq=0, reserved_tokens=None): if tokens is None: tokens = [] if reserved_tokens is None: reserved_tokens = [] # Sort according to frequencies counter = count_corpus(tokens) self.token_freqs = sorted(counter.items(), key=lambda x: x[1], reverse=True) # The index for the unknown token is 0 self.unk, uniq_tokens = 0, ["<unk>"] + reserved_tokens uniq_tokens += [token for token, freq in self.token_freqs if freq >= min_freq and token not in uniq_tokens] self.idx_to_token, self.token_to_idx = [], dict() for token in uniq_tokens: self.idx_to_token.append(token) self.token_to_idx[token] = len(self.idx_to_token) - 1 def __len__(self): return len(self.idx_to_token) def __getitem__(self, tokens): if not isinstance(tokens, (list, tuple)): return self.token_to_idx.get(tokens, self.unk) return [self.__getitem__(token) for token in tokens] def to_tokens(self, indices): if not isinstance(indices, (list, tuple)): return self.idx_to_token[indices] return [self.idx_to_token[index] for index in indices] def count_corpus(tokens): """Count token frequencies.""" # Here `tokens` is a 1D list or 2D list if len(tokens) == 0 or isinstance(tokens[0], list): # Flatten a list of token lists into a list of tokens tokens = [token for line in tokens for token in line] return collections.Counter(tokens) def set_figsize(figsize=(3.5, 2.5)): """Set the figure size for matplotlib.""" display.set_matplotlib_formats("svg") plt.rcParams["figure.figsize"] = figsize """ Explanation: We tokenize using words, and drop words which occur less than 5 times in training set when creating the vocab. End of explanation """ def truncate_pad(line, num_steps, padding_token): """Truncate or pad sequences.""" if len(line) > num_steps: return line[:num_steps] # Truncate return line + [padding_token] * (num_steps - len(line)) """ Explanation: We pad all sequences to length 500, for efficient minibatching. End of explanation """ def load_array(data_arrays, batch_size, is_train=True): """Construct a PyTorch data iterator.""" dataset = data.TensorDataset(*data_arrays) return data.DataLoader(dataset, batch_size, shuffle=is_train) """ Explanation: Data iterator. 
End of explanation """ def load_data_imdb(batch_size, num_steps=500): data_dir = download_extract("aclImdb", "aclImdb") train_data = read_imdb(data_dir, True) test_data = read_imdb(data_dir, False) train_tokens = tokenize(train_data[0], token="word") test_tokens = tokenize(test_data[0], token="word") vocab = Vocab(train_tokens, min_freq=5) train_features = torch.tensor([truncate_pad(vocab[line], num_steps, vocab["<pad>"]) for line in train_tokens]) test_features = torch.tensor([truncate_pad(vocab[line], num_steps, vocab["<pad>"]) for line in test_tokens]) train_iter = load_array((train_features, torch.tensor(train_data[1])), batch_size) test_iter = load_array((test_features, torch.tensor(test_data[1])), batch_size, is_train=False) return train_iter, test_iter, vocab DATA_HUB = dict() DATA_HUB["aclImdb"] = ( "http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz", "01ada507287d82875905620988597833ad4e0903", ) data_dir = download_extract("aclImdb", "aclImdb") batch_size = 64 train_iter, test_iter, vocab = load_data_imdb(batch_size) """ Explanation: Putting it altogether. End of explanation """ class TextCNN(nn.Module): def __init__(self, vocab_size, embed_size, kernel_sizes, num_channels, **kwargs): super(TextCNN, self).__init__(**kwargs) self.embedding = nn.Embedding(vocab_size, embed_size) # The embedding layer does not participate in training self.constant_embedding = nn.Embedding(vocab_size, embed_size) self.dropout = nn.Dropout(0.5) self.decoder = nn.Linear(sum(num_channels), 2) # The max-over-time pooling layer has no weight, so it can share an # instance self.pool = nn.AdaptiveAvgPool1d(1) self.relu = nn.ReLU() # Create multiple one-dimensional convolutional layers self.convs = nn.ModuleList() for c, k in zip(num_channels, kernel_sizes): self.convs.append(nn.Conv1d(2 * embed_size, c, k)) def forward(self, inputs): # Concatenate the output of two embedding layers with shape of # (batch size, no. of words, word vector dimension) by word vector embeddings = torch.cat((self.embedding(inputs), self.constant_embedding(inputs)), dim=2) # According to the input format required by Conv1d, the word vector # dimension, that is, the channel dimension of the one-dimensional # convolutional layer, is transformed into the previous dimension embeddings = embeddings.permute(0, 2, 1) # For each one-dimensional convolutional layer, after max-over-time # pooling, a tensor with the shape of (batch size, channel size, 1) # can be obtained. 
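A small usage sketch (not in the original notebook) of the helper above, with made-up tensors just to show the batching behaviour:
import torch
fake_features = torch.randn(100, 8)
fake_labels = torch.randint(0, 2, (100,))
for Xb, yb in load_array((fake_features, fake_labels), batch_size=16):
    print(Xb.shape, yb.shape)   # torch.Size([16, 8]) torch.Size([16])
    break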
Use the flatten function to remove the last # dimension and then concatenate on the channel dimension encoding = torch.cat( [torch.squeeze(self.relu(self.pool(conv(embeddings))), dim=-1) for conv in self.convs], dim=1 ) # After applying the dropout method, use a fully connected layer to # obtain the output outputs = self.decoder(self.dropout(encoding)) return outputs def try_all_gpus(): """Return all available GPUs, or [cpu(),] if no GPU exists.""" devices = [torch.device(f"cuda:{i}") for i in range(torch.cuda.device_count())] return devices if devices else [torch.device("cpu")] def try_gpu(i=0): """Return gpu(i) if exists, otherwise return cpu().""" if torch.cuda.device_count() >= i + 1: return torch.device(f"cuda:{i}") return torch.device("cpu") embed_size, kernel_sizes, nums_channels = 100, [3, 4, 5], [100, 100, 100] devices = try_all_gpus() net = TextCNN(len(vocab), embed_size, kernel_sizes, nums_channels) def init_weights(m): if type(m) in (nn.Linear, nn.Conv1d): nn.init.xavier_uniform_(m.weight) net.apply(init_weights); """ Explanation: Model We use 2 embedding layers, one with frozen weights, and one with learnable weights. We feed their concatenation to the 1d CNN. We then do average pooling over time before passing into the final MLP to map to the 2 output logits. End of explanation """ class TokenEmbedding: """Token Embedding.""" def __init__(self, embedding_name): self.idx_to_token, self.idx_to_vec = self._load_embedding(embedding_name) self.unknown_idx = 0 self.token_to_idx = {token: idx for idx, token in enumerate(self.idx_to_token)} def _load_embedding(self, embedding_name): idx_to_token, idx_to_vec = ["<unk>"], [] data_dir = download_extract(embedding_name) # GloVe website: https://nlp.stanford.edu/projects/glove/ # fastText website: https://fasttext.cc/ with open(os.path.join(data_dir, "vec.txt"), "r") as f: for line in f: elems = line.rstrip().split(" ") token, elems = elems[0], [float(elem) for elem in elems[1:]] # Skip header information, such as the top row in fastText if len(elems) > 1: idx_to_token.append(token) idx_to_vec.append(elems) idx_to_vec = [[0] * len(idx_to_vec[0])] + idx_to_vec return idx_to_token, torch.tensor(idx_to_vec) def __getitem__(self, tokens): indices = [self.token_to_idx.get(token, self.unknown_idx) for token in tokens] vecs = self.idx_to_vec[torch.tensor(indices)] return vecs def __len__(self): return len(self.idx_to_token) DATA_URL = "http://d2l-data.s3-accelerate.amazonaws.com/glove.6B.100d.zip" DATA_HUB["glove.6b.100d"] = (DATA_URL, "cd43bfb07e44e6f27cbcc7bc9ae3d80284fdaf5a") glove_embedding = TokenEmbedding("glove.6b.100d") embeds = glove_embedding[vocab.idx_to_token] net.embedding.weight.data.copy_(embeds) net.constant_embedding.weight.data.copy_(embeds) net.constant_embedding.weight.requires_grad = False """ Explanation: We load pretrained Glove vectors. We use these to initialize the embedding layers, one of which is frozen. 
End of explanation """ class Animator: """For plotting data in animation.""" def __init__( self, xlabel=None, ylabel=None, legend=None, xlim=None, ylim=None, xscale="linear", yscale="linear", fmts=("-", "m--", "g-.", "r:"), nrows=1, ncols=1, figsize=(3.5, 2.5), ): # Incrementally plot multiple lines if legend is None: legend = [] display.set_matplotlib_formats("svg") self.fig, self.axes = plt.subplots(nrows, ncols, figsize=figsize) if nrows * ncols == 1: self.axes = [ self.axes, ] # Use a lambda function to capture arguments self.config_axes = lambda: set_axes(self.axes[0], xlabel, ylabel, xlim, ylim, xscale, yscale, legend) self.X, self.Y, self.fmts = None, None, fmts def add(self, x, y): # Add multiple data points into the figure if not hasattr(y, "__len__"): y = [y] n = len(y) if not hasattr(x, "__len__"): x = [x] * n if not self.X: self.X = [[] for _ in range(n)] if not self.Y: self.Y = [[] for _ in range(n)] for i, (a, b) in enumerate(zip(x, y)): if a is not None and b is not None: self.X[i].append(a) self.Y[i].append(b) self.axes[0].cla() for x, y, fmt in zip(self.X, self.Y, self.fmts): self.axes[0].plot(x, y, fmt) self.config_axes() display.display(self.fig) display.clear_output(wait=True) class Timer: """Record multiple running times.""" def __init__(self): self.times = [] self.start() def start(self): """Start the timer.""" self.tik = time.time() def stop(self): """Stop the timer and record the time in a list.""" self.times.append(time.time() - self.tik) return self.times[-1] def avg(self): """Return the average time.""" return sum(self.times) / len(self.times) def sum(self): """Return the sum of time.""" return sum(self.times) def cumsum(self): """Return the accumulated time.""" return np.array(self.times).cumsum().tolist() class Accumulator: """For accumulating sums over `n` variables.""" def __init__(self, n): self.data = [0.0] * n def add(self, *args): self.data = [a + float(b) for a, b in zip(self.data, args)] def reset(self): self.data = [0.0] * len(self.data) def __getitem__(self, idx): return self.data[idx] def set_axes(axes, xlabel, ylabel, xlim, ylim, xscale, yscale, legend): """Set the axes for matplotlib.""" axes.set_xlabel(xlabel) axes.set_ylabel(ylabel) axes.set_xscale(xscale) axes.set_yscale(yscale) axes.set_xlim(xlim) axes.set_ylim(ylim) if legend: axes.legend(legend) axes.grid() def accuracy(y_hat, y): """Compute the number of correct predictions.""" if len(y_hat.shape) > 1 and y_hat.shape[1] > 1: y_hat = torch.argmax(y_hat, axis=1) cmp_ = y_hat.type(y.dtype) == y return float(cmp_.type(y.dtype).sum()) def evaluate_accuracy_gpu(net, data_iter, device=None): """Compute the accuracy for a model on a dataset using a GPU.""" if isinstance(net, torch.nn.Module): net.eval() # Set the model to evaluation mode if not device: device = next(iter(net.parameters())).device # No. of correct predictions, no. 
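The general pattern, sketched here with random numbers standing in for GloVe (an illustration only): copy a pretrained matrix into an nn.Embedding and switch off its gradient so it stays frozen during training.
import torch
from torch import nn
pretrained = torch.randn(10, 4)          # stand-in for real word vectors
frozen = nn.Embedding(10, 4)
frozen.weight.data.copy_(pretrained)
frozen.weight.requires_grad = False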
of predictions metric = Accumulator(2) for X, y in data_iter: if isinstance(X, list): # Required for BERT Fine-tuning X = [x.to(device) for x in X] else: X = X.to(device) y = y.to(device) metric.add(accuracy(net(X), y), y.numel()) return metric[0] / metric[1] def train_batch(net, X, y, loss, trainer, devices): if isinstance(X, list): # Required for BERT Fine-tuning X = [x.to(devices[0]) for x in X] else: X = X.to(devices[0]) y = y.to(devices[0]) net.train() trainer.zero_grad() pred = net(X) l = loss(pred, y) l.sum().backward() trainer.step() train_loss_sum = l.sum() train_acc_sum = accuracy(pred, y) return train_loss_sum, train_acc_sum def train(net, train_iter, test_iter, loss, trainer, num_epochs, devices=try_all_gpus()): timer, num_batches = Timer(), len(train_iter) animator = Animator( xlabel="epoch", xlim=[1, num_epochs], ylim=[0, 1], legend=["train loss", "train acc", "test acc"] ) net = nn.DataParallel(net, device_ids=devices).to(devices[0]) for epoch in range(num_epochs): # Store training_loss, training_accuracy, num_examples, num_features metric = Accumulator(4) for i, (features, labels) in enumerate(train_iter): timer.start() l, acc = train_batch(net, features, labels, loss, trainer, devices) metric.add(l, acc, labels.shape[0], labels.numel()) timer.stop() if (i + 1) % (num_batches // 5) == 0 or i == num_batches - 1: animator.add(epoch + (i + 1) / num_batches, (metric[0] / metric[2], metric[1] / metric[3], None)) test_acc = evaluate_accuracy_gpu(net, test_iter) animator.add(epoch + 1, (None, None, test_acc)) print(f"loss {metric[0] / metric[2]:.3f}, train acc " f"{metric[1] / metric[3]:.3f}, test acc {test_acc:.3f}") print(f"{metric[2] * num_epochs / timer.sum():.1f} examples/sec on " f"{str(devices)}") """ Explanation: Training End of explanation """ lr, num_epochs = 0.001, 5 trainer = torch.optim.Adam(net.parameters(), lr=lr) loss = nn.CrossEntropyLoss(reduction="none") train(net, train_iter, test_iter, loss, trainer, num_epochs, devices) """ Explanation: Learning curve End of explanation """ def predict_sentiment(net, vocab, sentence): sentence = torch.tensor(vocab[sentence.split()], device=try_gpu()) label = torch.argmax(net(sentence.reshape(1, -1)), dim=1) return "positive" if label == 1 else "negative" predict_sentiment(net, vocab, "this movie is so great") predict_sentiment(net, vocab, "this movie is so bad") """ Explanation: Testing End of explanation """
joaoandre/algorithms
intro-python-data-science/week2.ipynb
mit
!pip freeze > requirements.txt
import pandas as pd
pd.Series?
animals = ['Tiger', 'Bear', 'Moose']
pd.Series(animals)
numbers = [1, 2, 3]
pd.Series(numbers)
animals = ['Tiger', 'Bear', None]
pd.Series(animals)
numbers = [1, 2, None]
pd.Series(numbers)
import numpy as np
np.nan == None
np.nan == np.nan
np.isnan(np.nan)
sports = {'Archery': 'Bhutan',
          'Golf': 'Scotland',
          'Sumo': 'Japan',
          'Taekwondo': 'South Korea'}
s = pd.Series(sports)
s
s.index
s = pd.Series(['Tiger', 'Bear', 'Moose'], index=['India', 'America', 'Canada'])
s
sports = {'Archery': 'Bhutan',
          'Golf': 'Scotland',
          'Sumo': 'Japan',
          'Taekwondo': 'South Korea'}
s = pd.Series(sports, index=['Golf', 'Sumo', 'Hockey'])
s
"""
Explanation: You are currently looking at version 1.0 of this notebook. To download notebooks and datafiles, as well as get help on Jupyter notebooks in the Coursera platform, visit the Jupyter Notebook FAQ course resource.
The Series Data Structure
End of explanation
"""
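# Aside (not part of the original course notebook): because NaN compares unequal even to itself,
# the pandas helpers are the safer way to test for missing values than == comparisons.
print(np.nan == np.nan)    # False
print(pd.isnull(np.nan))   # True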
del copy_df['Name']
copy_df
df['Location'] = None
df
"""
Explanation: The DataFrame Data Structure
End of explanation
"""
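# Aside (illustration only, not from the original notebook): drop() gives a non-mutating
# alternative to del for removing a column; it returns a new frame and leaves df untouched.
trimmed = df.drop('Item Purchased', axis=1)
trimmed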
arviz-devs/arviz
doc/source/user_guide/numpyro_refitting.ipynb
apache-2.0
import arviz as az
import numpyro
import numpyro.distributions as dist
import jax.random as random
from numpyro.infer import MCMC, NUTS
import numpy as np
import matplotlib.pyplot as plt
import scipy.stats as stats
import xarray as xr

numpyro.set_host_device_count(4)
"""
Explanation: Refitting NumPyro models with ArviZ
ArviZ is backend agnostic and therefore does not sample directly. In order to take advantage of algorithms that require refitting models several times, ArviZ uses {class}~arviz.SamplingWrapper to convert the API of the sampling backend to a common set of functions. Hence, functions like Leave Future Out Cross Validation can be used in ArviZ independently of the sampling backend used.
Below there is an example of SamplingWrapper usage for NumPyro.
End of explanation
"""
        idata = az.from_numpyro(mcmc, **self.idata_kwargs)
        return idata

class LinRegWrapper(NumPyroSamplingWrapper):
    def sel_observations(self, idx):
        xdata = self.idata_orig.constant_data["x"].values
        ydata = self.idata_orig.observed_data["y"].values
        mask = np.isin(np.arange(len(xdata)), idx)
        data__i = {"x": xdata[~mask], "y": ydata[~mask], "N": len(ydata[~mask])}
        data_ex = {"x": xdata[mask], "y": ydata[mask], "N": len(ydata[mask])}
        return data__i, data_ex

loo_orig = az.loo(idata, pointwise=True)
loo_orig
"""
Explanation: We will create a subclass of az.SamplingWrapper. Therefore, instead of having to implement all functions required by {func}~arviz.reloo we only have to implement {func}~arviz.SamplingWrapper.sel_observations (we are cloning {func}~arviz.SamplingWrapper.sample and {func}~arviz.SamplingWrapper.get_inference_data from the {class}~arviz.PyStanSamplingWrapper in order to use {func}~xarray:xarray.apply_ufunc instead of assuming the log likelihood is calculated within Stan).
Let's check the 2 outputs of sel_observations.
1. data__i is a dictionary because it is an argument of sample which will pass it as is to model.sampling.
2. data_ex is a list because it is an argument to log_likelihood__i which will pass it as *data_ex to apply_ufunc.
More on data_ex and apply_ufunc integration is given below.
End of explanation
"""
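# A quick sanity check of the wrapper's data splitting (hypothetical usage, not in the original
# notebook): excluding a single observation should leave N-1 points for refitting and 1 to evaluate.
check_wrapper = LinRegWrapper(
    mcmc, rng_key=random.PRNGKey(1), idata_orig=idata,
    sample_kwargs=sample_kwargs, idata_kwargs=idata_kwargs,
)
data__i, data_ex = check_wrapper.sel_observations([0])
print(data__i["N"], data_ex["N"])   # 49 and 1 for the 50-point dataset above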
opengeostat/pygslib
pygslib/Ipython_templates/probplt_html.ipynb
mit
#general imports
import pygslib
"""
Explanation: PyGSLIB PPplot
End of explanation
"""
#get the data in gslib format into a pandas Dataframe
mydata= pygslib.gslib.read_gslib_file('../data/cluster.dat')
true= pygslib.gslib.read_gslib_file('../data/true.dat')
true['Declustering Weight'] = 1
"""
Explanation: Getting the data ready for work
If the data is in GSLIB format you can use the function pygslib.gslib.read_gslib_file(filename) to import the data into a Pandas DataFrame.
End of explanation
"""
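# Aside (not part of the original template): the effect of the declustering weights on the
# probability plot can be sketched by hand as a weighted empirical CDF, using the column names
# loaded above.
import numpy as np
z = mydata['Primary'].values
w = mydata['Declustering Weight'].values
order = np.argsort(z)
weighted_cdf = np.cumsum(w[order]) / w.sum()   # P[Z < c] with declustering weights applied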
tcstewar/testing_notebooks
Converting non-spiking neurons to spiking neurons.ipynb
gpl-2.0
class LeakyIntegrator:
    def __init__(self, threshold, tau_rc=20):
        self.threshold = threshold
        self.tau_rc = tau_rc
        self.v = 0

    def step(self, J):
        if self.v > self.threshold:
            output = self.v - self.threshold
        else:
            output = 0
        dv = -self.v + output + J
        self.v += dv / self.tau_rc
        return output
"""
Explanation: This notebook goes through an example of a general technique for converting a non-spiking neuron model into a spiking neuron model. This technique is from https://arxiv.org/abs/2002.03553
We start with a particular neuron model we want to convert. This could be anything at all, but here we're going to use a leaky integrator model defined like this:
End of explanation
"""
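# Quick usage sketch (not in the original notebook): drive the neuron with a short constant pulse
# and then let it relax, just to see the step() interface before the random-signal experiment below.
check = LeakyIntegrator(threshold=0.2, tau_rc=20)
pulse = [check.step(0.5) for _ in range(100)] + [check.step(0.0) for _ in range(100)]
print(max(pulse), pulse[-1])   # the output builds up while driven and relaxes once the drive stops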
It only outputs 0s and 1s, and it approximates the behaviour of the non-spiking neuron. However, there is something a bit odd there. During that output peak around time step 380, the neuron is spiking every time step. It would be nice to introduce a parameter that would let us control that. Right now, if the non-spiking output is a 1, that corresponds to 1 spike every time-step. So, to control this, let's introduce a parameter that we'll call spike_height. This tells us what effective value one spike corresponds to. So then we could do something like set spike_height=2, which would mean that at an output value of 2 it'd spike every time-step, but at an output value of 1 it'd spike every other time step:
End of explanation
"""
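# Back-of-the-envelope check (an aside, not from the original text): if the non-spiking output sits
# at a constant value y, the accumulator gains y / spike_height per step, so a spike is emitted
# roughly every spike_height / y time steps.
y_level, spike_height = 1.0, 2.0
print(spike_height / y_level)   # about one spike every 2 time steps, matching the description above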
End of explanation """ neuron = LeakyIntegratorSpiking(threshold=0.2, tau_rc=20, spike_height=0.1) for i in range(steps): output[i] = neuron.step(stim[i]) plt.figure(figsize=(14,4)) plt.plot(stim, label='stimulus (J)') plt.plot(output_nonspiking, label='output (non-spiking)') plt.plot(output, label='output (spiking)') plt.xlabel('time steps') plt.legend() plt.title(f'spike_height={neuron.spike_height}') plt.show() """ Explanation: And further? End of explanation """ neuron = LeakyIntegratorSpiking(threshold=0.2, tau_rc=20, spike_height=0.01) for i in range(steps): output[i] = neuron.step(stim[i]) plt.figure(figsize=(14,4)) plt.plot(stim, label='stimulus (J)') plt.plot(output_nonspiking, label='output (non-spiking)') plt.plot(output, label='output (spiking)') plt.xlabel('time steps') plt.legend() plt.title(f'spike_height={neuron.spike_height}') plt.show() """ Explanation: And even further? End of explanation """ import nengo_ocl class LeakyIntegrator(nengo.LIF): threshold = nengo.params.NumberParam('threshold') def __init__(self, threshold=0, tau_rc=0.02, spike_height=1): super().__init__(tau_rc=tau_rc, tau_ref=0) self.threshold = threshold self.spike_height = spike_height def step(self, dt, J, output, voltage, refractory_time): accumulator = refractory_time # to map to the paper: output is y, voltage is u, and J is the inputs times the dictionary matrix # y = T(u) output[:] = np.where(voltage>self.threshold, voltage-self.threshold, 0) # du/dt = -u + y + phi*r dv = -voltage + output + J # perform the voltage update voltage += dv*(dt/self.tau_rc) accumulator += output / self.spike_height output[:] = np.fix(accumulator) accumulator -= output output *= self.spike_height from nengo_ocl.utils import as_ascii from mako.template import Template from nengo_ocl.clra_nonlinearities import _plan_template def plan_leaky_integrator(queue, dt, J, V, outR, thresh, inv_tau, acc, spike_height, **kwargs): assert J.ctype == 'float' for x in [V, outR]: assert x.ctype == J.ctype inputs = dict(J=J, V=V, acc=acc) outputs = dict(outV=V, outAcc=acc, outR=outR) parameters = dict(inv_tau=inv_tau, thresh=thresh, spike_height=spike_height) textconf = dict(type=J.ctype, dt=dt) decs = """ const ${type} dt = ${dt}; ${type} n_spikes; """ text = """ if (V>thresh) { acc += (V-thresh) / spike_height; n_spikes = floor(acc); acc -= n_spikes; V += (-thresh+J)*(dt * inv_tau); outR = n_spikes * spike_height; } else { V += (-V+J)*(dt * inv_tau); outR = 0; } outV = V; outAcc = acc; """ decs = as_ascii(Template(decs, output_encoding='ascii').render(**textconf)) text = as_ascii(Template(text, output_encoding='ascii').render(**textconf)) cl_name = "cl_leaky_integrator" return _plan_template( queue, cl_name, text, declares=decs, inputs=inputs, outputs=outputs, parameters=parameters, **kwargs) class OpenCLSimulator(nengo_ocl.Simulator): def _plan_LeakyIntegrator(self, ops): dt = self.model.dt J = self.all_data[[self.sidx[op.J] for op in ops]] V = self.all_data[[self.sidx[op.state["voltage"]] for op in ops]] R = self.all_data[[self.sidx[op.output] for op in ops]] thresh = self.RaggedArray([op.neurons.threshold * np.ones(op.J.size) for op in ops], dtype=J.dtype) inv_tau = self.RaggedArray([(1/op.neurons.tau_rc) * np.ones(op.J.size) for op in ops], dtype=J.dtype) acc = self.RaggedArray([np.zeros(op.J.size) for op in ops], dtype=J.dtype) sh = self.RaggedArray([op.neurons.spike_height * np.ones(op.J.size) for op in ops], dtype=J.dtype) return [plan_leaky_integrator(self.queue, dt, J, V, R, thresh, inv_tau, acc, sh)] model = nengo.Network() 
with model: stim = nengo.Node(nengo.processes.WhiteSignal(high=10, period=2, rms=0.3, seed=2)) ens = nengo.Ensemble(n_neurons=1, dimensions=1, encoders=nengo.dists.Choice([[1]]), gain=nengo.dists.Choice([1]), bias=nengo.dists.Choice([0]), neuron_type=LeakyIntegrator(tau_rc=0.02, spike_height=0.1, threshold=0.2)) nengo.Connection(stim, ens.neurons, transform=np.ones((ens.n_neurons, 1))) p = nengo.Probe(ens.neurons) #sim = OpenCLSimulator(model) sim = nengo.Simulator(model) with sim: sim.run(0.5) plt.plot(sim.trange(), sim.data[p][:,0]) class SpikingLinear(nengo.RectifiedLinear): state = {"accumulator": nengo.dists.Uniform(low=0, high=1)} spiking = True spike_height = nengo.params.NumberParam('spike_height') def __init__(self, spike_height=1): super().__init__() self.spike_height = spike_height def step(self, dt, J, output, accumulator): accumulator += J / self.spike_height output[:] = np.fix(accumulator) accumulator -= output output *= self.spike_height from nengo_ocl.utils import as_ascii from mako.template import Template from nengo_ocl.clra_nonlinearities import _plan_template def plan_spiking_linear(queue, dt, J, outR, acc, spike_height, **kwargs): assert J.ctype == 'float' for x in [outR]: assert x.ctype == J.ctype inputs = dict(J=J, acc=acc) outputs = dict(outAcc=acc, outR=outR) parameters = dict(spike_height=spike_height) textconf = dict(type=J.ctype, dt=dt) decs = """ const ${type} dt = ${dt}; ${type} n_spikes; """ text = """ acc += J / spike_height; n_spikes = trunc(acc); acc -= n_spikes; outR = n_spikes * spike_height; outAcc = acc; """ decs = as_ascii(Template(decs, output_encoding='ascii').render(**textconf)) text = as_ascii(Template(text, output_encoding='ascii').render(**textconf)) cl_name = "cl_spiking_linear" return _plan_template( queue, cl_name, text, declares=decs, inputs=inputs, outputs=outputs, parameters=parameters, **kwargs) class OpenCLSimulator(nengo_ocl.Simulator): def _plan_SpikingLinear(self, ops): dt = self.model.dt J = self.all_data[[self.sidx[op.J] for op in ops]] acc = self.all_data[[self.sidx[op.state["accumulator"]] for op in ops]] R = self.all_data[[self.sidx[op.output] for op in ops]] sh = self.RaggedArray([op.neurons.spike_height * np.ones(op.J.size) for op in ops], dtype=J.dtype) return [plan_spiking_linear(self.queue, dt, J, R, acc, sh)] model = nengo.Network() with model: stim = nengo.Node(nengo.processes.WhiteSignal(high=10, period=2, rms=0.3, seed=2)) ens = nengo.Ensemble(n_neurons=1, dimensions=1, encoders=nengo.dists.Choice([[1]]), gain=nengo.dists.Choice([1]), bias=nengo.dists.Choice([0]), neuron_type=SpikingLinear(spike_height=0.1)) nengo.Connection(stim, ens.neurons, transform=np.ones((ens.n_neurons, 1))) p = nengo.Probe(ens.neurons) sim = OpenCLSimulator(model) #sim = nengo.Simulator(model) with sim: sim.run(0.5) plt.plot(sim.trange(), sim.data[p]) """ Explanation: If we reduce the spike_height far enough, we get back our original non-spiking neuron model! What's happening here is that the spike_height parameter is letting us control the level of descritization that we are using. In the paper where this was introduced (https://arxiv.org/abs/2002.03553), this was used to help do backprop training on spiking neural networks. You can start the training on the non-spiking version, and then gradually increase the spike_height parameter ($1/\omega$ in the paper) over time as training occurs. Interestingly, you can also think of multiple-spikes-per-timestep as a single spike, but with a magnitude. 
Some neuromorphic chips do support this, including SpiNNaker and Loihi 2: https://www.embedded.com/intel-offers-loihi-2-neuromorphic-chip-and-software-framework/ ("In Loihi 2, spikes carry a configurable integer payload available to the programmable neuron model.") Nengo and NengoOpenCL version End of explanation """
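Coming back to the discretization point above, here is an added illustration (not part of the original notebook): sweeping spike_height with the LeakyIntegratorSpiking class from the earlier cells should show the spiking output converging to the non-spiking one as the spike height shrinks.

```python
# Sketch reusing LeakyIntegratorSpiking, stim, steps and output_nonspiking defined above:
# smaller spike heights mean finer discretization, so the RMS gap to the non-spiking
# output should shrink as spike_height decreases.
import numpy as np

for spike_height in [0.5, 0.1, 0.01, 0.001]:
    neuron = LeakyIntegratorSpiking(threshold=0.2, tau_rc=20, spike_height=spike_height)
    out = np.zeros(steps)
    for i in range(steps):
        out[i] = neuron.step(stim[i])
    rms = np.sqrt(np.mean((out - output_nonspiking) ** 2))
    print('spike_height=%g: RMS deviation from non-spiking output = %.4f' % (spike_height, rms))
```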
dfm/dfm.io
static/downloads/notebooks/emcee-pymc3.ipynb
mit
%matplotlib inline %config InlineBackend.figure_format = "retina" from matplotlib import rcParams rcParams["savefig.dpi"] = 100 rcParams["figure.dpi"] = 100 rcParams["font.size"] = 20 """ Explanation: Title: emcee + PyMC3 Date: 2018-08-21 Category: Data Analysis Slug: emcee-pymc3 Summary: sampling models defined in PyMC3 using emcee Math: true End of explanation """ import numpy as np import matplotlib.pyplot as plt np.random.seed(42) true_params = np.array([0.5, -2.3, -0.23]) N = 50 t = np.linspace(0, 10, 2) x = np.random.uniform(0, 10, 50) y = x * true_params[0] + true_params[1] y_obs = y + np.exp(true_params[-1]) * np.random.randn(N) plt.plot(x, y_obs, ".k", label="observations") plt.plot(t, true_params[0]*t + true_params[1], label="truth") plt.xlabel("x") plt.ylabel("y") plt.legend(fontsize=14); """ Explanation: In this post, I will demonstrate how you can use emcee to sample models defined using PyMC3. Thomas Wiecki wrote about how to do this this with an earlier version of PyMC, but I needed an update since I wanted to do a comparison and PyMC's interface has changed a lot since he wrote his post. This isn't necessarily something that you'll want to do (and I definitely don't recommend it in general), but I figured that I would post it here for posterity. For simplicity, let's use the simulated data from my previous blog post: End of explanation """ import pymc3 as pm import theano.tensor as tt with pm.Model() as model: logs = pm.Uniform("logs", lower=-10, upper=10) alphaperp = pm.Uniform("alphaperp", lower=-10, upper=10) theta = pm.Uniform("theta", -2*np.pi, 2*np.pi, testval=0.0) # alpha_perp = alpha * cos(theta) alpha = pm.Deterministic("alpha", alphaperp / tt.cos(theta)) # beta = tan(theta) beta = pm.Deterministic("beta", tt.tan(theta)) # The observation model mu = alpha * x + beta pm.Normal("obs", mu=mu, sd=tt.exp(logs), observed=y_obs) trace = pm.sample(draws=2000, tune=2000) """ Explanation: Then, we can code up the model in PyMC3 following Jake VanderPlas' notation, and sample it using PyMC3's NUTS[sic] sampler: End of explanation """ import corner samples = np.vstack([trace[k] for k in ["alpha", "beta", "logs"]]).T corner.corner(samples, truths=true_params); """ Explanation: And we can take a look at the corner plot: End of explanation """ import theano with model: f = theano.function(model.vars, [model.logpt] + model.deterministics) def log_prob_func(params): dct = model.bijection.rmap(params) args = (dct[k.name] for k in model.vars) results = f(*args) return tuple(results) """ Explanation: Sampling the PyMC3 model using emcee To sample this using emcee, we'll need to do a little bit of bookkeeping. I've coded this up using version 3 of emcee that is currently available as the master branch on GitHub or as a pre-release on PyPI, so you'll need to install that version to run this. To sample from this model, we need to expose the Theano method for evaluating the log probability to Python. There is a version of this built into PyMC3, but I also want to return the values of all the deterministic variables using the "blobs" feature in emcee so the function is slightly more complicated. 
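As a short aside added here (not part of the original post), the blobs mechanism itself is simple: the log-probability function returns extra values after the log-probability, and emcee 3 records them at every step when a blobs_dtype is given. A toy, self-contained version looks like this:

```python
# Minimal, standalone illustration of emcee "blobs" (a toy example, not the model above).
import numpy as np
import emcee

def log_prob_with_blobs(theta):
    log_prob = -0.5 * np.sum(theta ** 2)   # toy Gaussian log-probability
    extra = np.sum(theta)                  # any deterministic quantity we want to track
    return log_prob, extra

n_walkers_demo, n_dim_demo = 8, 2
p0_demo = np.random.randn(n_walkers_demo, n_dim_demo)
toy_sampler = emcee.EnsembleSampler(n_walkers_demo, n_dim_demo, log_prob_with_blobs,
                                    blobs_dtype=[("extra", float)])
toy_sampler.run_mcmc(p0_demo, 100)
print(toy_sampler.get_blobs().shape)       # (steps, walkers): one "extra" value per sample
```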
End of explanation """ import emcee with model: # First we work out the shapes of all of the deterministic variables res = pm.find_MAP() vec = model.bijection.map(res) initial_blobs = log_prob_func(vec)[1:] dtype = [(var.name, float, np.shape(b)) for var, b in zip(model.deterministics, initial_blobs)] # Then sample as usual coords = vec + 1e-5 * np.random.randn(25, len(vec)) nwalkers, ndim = coords.shape sampler = emcee.EnsembleSampler(nwalkers, ndim, log_prob_func, blobs_dtype=dtype) sampler.run_mcmc(coords, 5000, progress=True) """ Explanation: And now we can run the sampler: End of explanation """ import pandas as pd df = pd.DataFrame.from_records(sampler.get_blobs(flat=True, discard=100, thin=30)) corner.corner(df[["alpha", "beta", "logs"]], truths=true_params); """ Explanation: And we can use this to make the same corner plot as above: End of explanation """ [float(emcee.autocorr.integrated_time(np.array(trace.get_values(var.name, combine=False)).T)) for var in model.free_RVs] """ Explanation: The last thing that we might want to look at is the integrated autocorrelation time for each method. First, as expected, the autocorrelation for PyMC3 is very short (about 1 step): End of explanation """ sampler.get_autocorr_time(discard=100) """ Explanation: And, the autocorrelation for emcee is about 40 steps: End of explanation """
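One added way to make that comparison concrete (this is not in the original post) is to read the integrated autocorrelation time as the number of steps needed per effectively independent sample for each sampler:

```python
# Rough comparison sketch: a larger tau means more steps per independent draw.
import numpy as np

tau_emcee = float(np.max(sampler.get_autocorr_time(discard=100)))
tau_pymc3 = max(
    float(emcee.autocorr.integrated_time(np.array(trace.get_values(var.name, combine=False)).T))
    for var in model.free_RVs)
print("roughly one independent draw every ~{:.0f} emcee steps vs ~{:.0f} PyMC3 steps"
      .format(tau_emcee, tau_pymc3))
```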
santanche/java2learn
notebooks/pt/c04components/s03message-bus/1.iot-devices.ipynb
gpl-2.0
publisher = IoT_mqtt_publisher("localhost", 1883) """ Explanation: Instantiating the MQTT Message Publishing Component End of explanation """ sensor_1 = IoT_sensor("1", "temperature", "°C", 20, 26, 2) sensor_2 = IoT_sensor("2", "umidade", "%", 50, 60, 3) sensor_3 = IoT_sensor("3", "temperature", "°C", 28, 30, 4) sensor_4 = IoT_sensor("4", "umidade", "%", 40, 55, 5) """ Explanation: Component for simulating a sensor bash IoT_sensor(&lt;name/id&gt;, &lt;physical quantity&gt;, &lt;unit of measurement&gt;, &lt;lowest value&gt;, &lt;highest possible value&gt;, &lt;interval between readings (seconds)&gt;) Example of a pressure sensor: ```python sensor_pressao = IoT_sensor("32", "pressao", "bar", 20, 35, 5) ``` IoT_sensor components can connect to components of type IoT_mqtt_publisher to publish, on a topic, messages corresponding to the readings taken by the sensor. For example, the sensor in the example above produced the following message on the topic sensor/32/pressao: python { "source": "sensor", "name": "32", "type": "reading", "body": { "timestamp": "2019-08-17 17:02:15", "dimension": "pressao", "value": 25.533895448246717, "unity": "bar" } } Instantiating Sensors End of explanation """ sensor_1.connect(publisher) sensor_2.connect(publisher) sensor_3.connect(publisher) sensor_4.connect(publisher) """ Explanation: Connecting the Components End of explanation """
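To watch what the sensors are actually publishing, one option, added here and not one of the course's own components, is a small subscriber built with the third-party paho-mqtt client pointed at the same broker; the topic filter sensor/# matches the sensor topics described above.

```python
# Hedged sketch: observe the sensors' messages with paho-mqtt (an assumed extra dependency);
# any MQTT client connected to localhost:1883 and subscribed to sensor/# would do the same.
import paho.mqtt.client as mqtt

def on_message(client, userdata, msg):
    print(msg.topic, msg.payload.decode())

subscriber = mqtt.Client()
subscriber.on_message = on_message
subscriber.connect("localhost", 1883)
subscriber.subscribe("sensor/#")
subscriber.loop_start()   # background network loop; call subscriber.loop_stop() to end it
```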
4DGenome/Chromosomal-Conformation-Course
Notebooks/01-Mapping.ipynb
gpl-3.0
from pytadbit.mapping.full_mapper import full_mapping """ Explanation: Table of Contents Iterative vs fragment-based mapping Advantages of iterative mapping Advantages of fragment-based mapping Mapping Iterative mapping Fragment-based mapping Iterative vs fragment-based mapping Iterative mapping, first proposed by <a name="ref-1"/>(Imakaev et al., 2012), usually allows a high number of reads to be mapped. However, other, less "brute-force" methodologies can be used to take into account the chimeric nature of Hi-C reads. A simple alternative is to allow split mapping, just as with RNA-seq data. Another way consists in pre-truncating <a name="ref-1"/>(Ay and Noble, 2015) reads that contain a ligation site and mapping only the longest part of the read <a name="ref-2"/>(Wingett et al., 2015). Finally, an intermediate, fragment-based approach consists in mapping full-length reads first, and then splitting unmapped reads at the ligation sites <a name="ref-1"/>(Serra, Baù, Filion and Marti-Renom, 2016). Advantages of iterative mapping It's the only solution when no restriction enzyme has been used (e.g. micro-C) Can be faster when few windows (2 or 3) are used Advantages of fragment-based mapping Generally faster Safer: mapped reads are generally longer than 25-30 nt (the largest window used in iterative mapping). Fewer reads are mapped, but the difference is usually canceled or reversed when looking for "valid-pairs". Note: We use GEM <a name="ref-1"/>(Marco-Sola, Sammeth, Guigó and Ribeca, 2012); its performance is very similar to Bowtie2, perhaps a bit better. For now TADbit is only compatible with GEM. Mapping End of explanation """ r_enz = 'MboI' ! mkdir -p results/iterativ/$r_enz ! mkdir -p results/iterativ/$r_enz/01_mapping # for the first side of the reads full_mapping(gem_index_path='/media/storage/db/reference_genome/Homo_sapiens/hg38/hg38.gem', out_map_dir='results/iterativ/{0}/01_mapping/mapped_{0}_r1/'.format(r_enz), fastq_path='/media/storage/FASTQs/K562_%s_1.fastq' % (r_enz), r_enz=r_enz, frag_map=False, clean=True, nthreads=20, windows=((1,25),(1,30),(1,35),(1,40),(1,45),(1,50),(1,55),(1,60),(1,65),(1,70),(1,75)), temp_dir='results/iterativ/{0}/01_mapping/mapped_{0}_r1_tmp/'.format(r_enz)) """ Explanation: The full mapping function can be used to perform either iterative or fragment-based mapping, or a combination of both. Iterative mapping Here is an example of iterative mapping: End of explanation """ # for the second side of the reads full_mapping(gem_index_path='/media/storage/db/reference_genome/Homo_sapiens/hg38/hg38.gem', out_map_dir='results/iterativ/{0}/01_mapping/mapped_{0}_r2/'.format(r_enz), fastq_path='/media/storage/FASTQs/K562_%s_2.fastq' % (r_enz), r_enz=r_enz, frag_map=False, clean=True, nthreads=20, windows=((1,25),(1,30),(1,35),(1,40),(1,45),(1,50),(1,55),(1,60),(1,65),(1,70),(1,75)), temp_dir='results/iterativ/{0}/01_mapping/mapped_{0}_r2_tmp/'.format(r_enz)) """ Explanation: And for the second side of the read: End of explanation """ ! mkdir -p results/fragment/$r_enz !
mkdir -p results/fragment/$r_enz/01_mapping # for the first side of the reads full_mapping(gem_index_path='/media/storage/db/reference_genome/Homo_sapiens/hg38/hg38.gem', out_map_dir='results/fragment/{0}/01_mapping/mapped_{0}_r1/'.format(r_enz), fastq_path='/media/storage/FASTQs/K562_%s_1.fastq' % (r_enz), r_enz=r_enz, frag_map=True, clean=True, nthreads=20, temp_dir='results/fragment/{0}/01_mapping/mapped_{0}_r1_tmp/'.format(r_enz)) # for the second side of the reads full_mapping(gem_index_path='/media/storage/db/reference_genome/Homo_sapiens/hg38/hg38.gem', out_map_dir='results/fragment/{0}/01_mapping/mapped_{0}_r2/'.format(r_enz), fastq_path='/media/storage/FASTQs/K562_%s_2.fastq' % (r_enz), r_enz=r_enz, frag_map=True, clean=True, nthreads=20, temp_dir='results/fragment/{0}/01_mapping/mapped_{0}_r2_tmp/'.format(r_enz)) """ Explanation: Fragment-based mapping With fragment based mapping it would be: End of explanation """
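The long windows tuple passed to the iterative calls above follows a simple pattern, so, as a small convenience sketch added here (not part of the original notebook), it can be generated instead of typed by hand:

```python
# Build the iterative-mapping windows programmatically; this reproduces the tuple used above.
windows = tuple((1, end) for end in range(25, 80, 5))
print(windows)  # ((1, 25), (1, 30), ..., (1, 75))
```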
mne-tools/mne-tools.github.io
0.12/_downloads/plot_decoding_csp_space.ipynb
bsd-3-clause
# Authors: Alexandre Gramfort <alexandre.gramfort@telecom-paristech.fr> # Romain Trachel <romain.trachel@inria.fr> # # License: BSD (3-clause) import numpy as np import matplotlib.pyplot as plt import mne from mne import io from mne.datasets import sample print(__doc__) data_path = sample.data_path() """ Explanation: ==================================================================== Decoding in sensor space data using the Common Spatial Pattern (CSP) ==================================================================== Decoding applied to MEG data in sensor space decomposed using CSP. Here the classifier is applied to features extracted on CSP filtered signals. See http://en.wikipedia.org/wiki/Common_spatial_pattern and [1] [1] Zoltan J. Koles. The quantitative extraction and topographic mapping of the abnormal components in the clinical EEG. Electroencephalography and Clinical Neurophysiology, 79(6):440--447, December 1991. End of explanation """ raw_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw.fif' event_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw-eve.fif' tmin, tmax = -0.2, 0.5 event_id = dict(aud_l=1, vis_l=3) # Setup for reading the raw data raw = io.read_raw_fif(raw_fname, preload=True) raw.filter(2, None, method='iir') # replace baselining with high-pass events = mne.read_events(event_fname) raw.info['bads'] = ['MEG 2443'] # set bad channels picks = mne.pick_types(raw.info, meg='grad', eeg=False, stim=False, eog=False, exclude='bads') # Read epochs epochs = mne.Epochs(raw, events, event_id, tmin, tmax, proj=True, picks=picks, baseline=None, preload=True) labels = epochs.events[:, -1] evoked = epochs.average() """ Explanation: Set parameters and read data End of explanation """ from sklearn.svm import SVC # noqa from sklearn.cross_validation import ShuffleSplit # noqa from mne.decoding import CSP # noqa n_components = 3 # pick some components svc = SVC(C=1, kernel='linear') csp = CSP(n_components=n_components) # Define a monte-carlo cross-validation generator (reduce variance): cv = ShuffleSplit(len(labels), 10, test_size=0.2, random_state=42) scores = [] epochs_data = epochs.get_data() for train_idx, test_idx in cv: y_train, y_test = labels[train_idx], labels[test_idx] X_train = csp.fit_transform(epochs_data[train_idx], y_train) X_test = csp.transform(epochs_data[test_idx]) # fit classifier svc.fit(X_train, y_train) scores.append(svc.score(X_test, y_test)) # Printing the results class_balance = np.mean(labels == labels[0]) class_balance = max(class_balance, 1. 
- class_balance) print("Classification accuracy: %f / Chance level: %f" % (np.mean(scores), class_balance)) # Or use much more convenient scikit-learn cross_val_score function using # a Pipeline from sklearn.pipeline import Pipeline # noqa from sklearn.cross_validation import cross_val_score # noqa cv = ShuffleSplit(len(labels), 10, test_size=0.2, random_state=42) clf = Pipeline([('CSP', csp), ('SVC', svc)]) scores = cross_val_score(clf, epochs_data, labels, cv=cv, n_jobs=1) print(scores.mean()) # should match results above # And using reuglarized csp with Ledoit-Wolf estimator csp = CSP(n_components=n_components, reg='ledoit_wolf') clf = Pipeline([('CSP', csp), ('SVC', svc)]) scores = cross_val_score(clf, epochs_data, labels, cv=cv, n_jobs=1) print(scores.mean()) # should get better results than above # plot CSP patterns estimated on full data for visualization csp.fit_transform(epochs_data, labels) data = csp.patterns_ fig, axes = plt.subplots(1, 4) for idx in range(4): mne.viz.plot_topomap(data[idx], evoked.info, axes=axes[idx], show=False) fig.suptitle('CSP patterns') fig.tight_layout() fig.show() """ Explanation: Decoding in sensor space using a linear SVM End of explanation """
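As an added sketch that is not part of the original example: since CSP and the SVM are already wrapped in a Pipeline, the number of CSP components and the SVM's C could in principle be tuned with a grid search. The import location depends on the scikit-learn version (newer releases ship GridSearchCV in sklearn.model_selection, while the era of the sklearn.cross_validation module used above shipped it in sklearn.grid_search), and the sketch assumes CSP exposes its parameters to scikit-learn in the same way the cross_val_score call above already relies on.

```python
# Hypothetical tuning sketch under the assumptions stated above.
from sklearn.model_selection import GridSearchCV  # or sklearn.grid_search on old versions

param_grid = {'CSP__n_components': [2, 3, 4, 6], 'SVC__C': [0.1, 1, 10]}
search = GridSearchCV(Pipeline([('CSP', CSP()), ('SVC', SVC(kernel='linear'))]),
                      param_grid, cv=5)
search.fit(epochs_data, labels)
print(search.best_params_, search.best_score_)
```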
dariox2/CADL
session-0/session-0.ipynb
apache-2.0
4*2 import numpy as np print(np.sin(.5)) print(np.random.random(3)) """ Explanation: Session 0: Preliminaries with Python/Notebook <p class="lead"> Parag K. Mital<br /> <a href="https://www.kadenze.com/courses/creative-applications-of-deep-learning-with-tensorflow/info">Creative Applications of Deep Learning w/ Tensorflow</a><br /> <a href="https://www.kadenze.com/partners/kadenze-academy">Kadenze Academy</a><br /> <a href="https://twitter.com/hashtag/CADL">#CADL</a> </p> This work is licensed under a <a rel="license" href="http://creativecommons.org/licenses/by-nc-sa/4.0/">Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License</a>. <a name="learning-goals"></a> Learning Goals Install and run Jupyter Notebook with the Tensorflow library Learn to create a dataset of images using os.listdir and plt.imread Understand how images are represented when using float or uint8 Learn how to crop and resize images to a standard size. Table of Contents <!-- MarkdownTOC autolink=true autoanchor=true bracket=round --> Introduction Using Notebook Cells Kernel Importing Libraries Loading Data Structuring data as folders Using the os library to get data Loading an image RGB Image Representation Understanding data types and ranges (uint8, float32) Visualizing your data as images Image Manipulation Cropping images Resizing images Cropping/Resizing Images The Batch Dimension Conclusion <!-- /MarkdownTOC --> <a name="introduction"></a> Introduction This preliminary session will cover the basics of working with image data in Python, and creating an image dataset. Please make sure you are running at least Python 3.4 and have Tensorflow 0.9.0 or higher installed. If you are unsure of how to do this, please make sure you have followed the installation instructions. We'll also cover loading images from a directory, resizing and cropping images, and changing an image datatype from unsigned int to float32. If you feel comfortable with all of this, please feel free to skip straight to Session 1. Otherwise, launch jupyter notebook and make sure you are reading the session-0.ipynb file. <a name="using-notebook"></a> Using Notebook Make sure you have launched jupyter notebook and are reading the session-0.ipynb file. If you are unsure of how to do this, please make sure you follow the installation instructions. This will allow you to interact with the contents and run the code using an interactive python kernel! <a name="cells"></a> Cells After launching this notebook, try running/executing the next cell by pressing shift-enter on it. End of explanation """ import os """ Explanation: Now press 'a' or 'b' to create new cells. You can also use the toolbar to create new cells. You can also use the arrow keys to move up and down. <a name="kernel"></a> Kernel Note the numbers on each of the cells inside the brackets, after "running" the cell. These denote the current execution count of your python "kernel". Think of the kernel as another machine within your computer that understands Python and interprets what you write as code into executions that the processor can understand. <a name="importing-libraries"></a> Importing Libraries When you launch a new notebook, your kernel is a blank state. It only knows standard python syntax. 
Everything else is contained in additional python libraries that you have to explicitly "import" like so: End of explanation """ # Load the os library import os # Load the request module import urllib.request # Create a directory os.mkdir('img_align_celeba') # Now perform the following 10 times: for img_i in range(1, 11): # create a string using the current loop counter f = '000%03d.jpg' % img_i # and get the url with that string appended the end url = 'https://s3.amazonaws.com/cadl/celeb-align/' + f # We'll print this out to the console so we can see how far we've gone print(url, end='\r') # And now download the url to a location inside our new directory urllib.request.urlretrieve(url, os.path.join('img_align_celeba', f)) """ Explanation: After exectuing this cell, your kernel will have access to everything inside the os library which is a common library for interacting with the operating system. We'll need to use the import statement for all of the libraries that we include. <a name="loading-data"></a> Loading Data Let's now move onto something more practical. We'll learn how to see what files are in a directory, and load any images inside that directory into a variable. <a name="structuring-data-as-folders"></a> Structuring data as folders With Deep Learning, we'll always need a dataset, or a collection of data. A lot of it. We're going to create our dataset by putting a bunch of images inside a directory. Then, whenever we want to load the dataset, we will tell python to find all the images inside the directory and load them. Python lets us very easily crawl through a directory and grab each file. Let's have a look at how to do this. <a name="using-the-os-library-to-get-data"></a> Using the os library to get data We'll practice with a very large dataset called Celeb Net. This dataset has about 200,000 images of celebrities. The researchers also provide a version of the dataset which has every single face cropped and aligned so that each face is in the middle! We'll be using this aligned dataset. To read more about the dataset or to download it, follow the link here: http://mmlab.ie.cuhk.edu.hk/projects/CelebA.html For now, we're not going to be using the entire dataset but just a subset of it. Run the following cell which will download the first 10 images for you: End of explanation """ help(os.listdir) """ Explanation: Using the os package, we can list an entire directory. The documentation or docstring, says that listdir takes one parameter, path: End of explanation """ files = os.listdir('img_align_celeba') """ Explanation: This is the location of the directory we need to list. 
Let's try this with the directory of images we just downloaded: End of explanation """ [file_i for file_i in os.listdir('img_align_celeba') if '.jpg' in file_i] """ Explanation: We can also specify to include only certain files like so: End of explanation """ [file_i for file_i in os.listdir('img_align_celeba') if '.jpg' in file_i and '00000' in file_i] """ Explanation: or even: End of explanation """ [file_i for file_i in os.listdir('img_align_celeba') if '.jpg' in file_i or '.png' in file_i or '.jpeg' in file_i] """ Explanation: We could also combine file types if we happened to have multiple types: End of explanation """ files = [file_i for file_i in os.listdir('img_align_celeba') if file_i.endswith('.jpg')] """ Explanation: Let's set this list to a variable, so we can perform further actions on it: End of explanation """ print(files[0]) print(files[1]) """ Explanation: And now we can index that list using the square brackets: End of explanation """ print(files[-1]) print(files[-2]) """ Explanation: We can even go in the reverse direction, which wraps around to the end of the list: End of explanation """ import matplotlib.pyplot as plt """ Explanation: <a name="loading-an-image"></a> Loading an image matplotlib is an incredibly powerful python library which will let us play with visualization and loading of image data. We can import it like so: End of explanation """ %matplotlib inline """ Explanation: Now we can refer to the entire module by just using plt instead of matplotlib.pyplot every time. This is pretty common practice. We'll now tell matplotlib to "inline" plots using an ipython magic function: End of explanation """ # help(plt) # plt.<tab> """ Explanation: This isn't python, so won't work inside of any python script files. This only works inside notebook. What this is saying is that whenever we plot something using matplotlib, put the plots directly into the notebook, instead of using a window popup, which is the default behavior. This is something that makes notebook really useful for teaching purposes, as it allows us to keep all of our images/code in one document. Have a look at the library by using plt: End of explanation """ plt.imread? """ Explanation: plt contains a very useful function for loading images: End of explanation """ import numpy as np # help(np) # np.<tab> """ Explanation: Here we see that it actually returns a variable which requires us to use another library, NumPy. NumPy makes working with numerical data a lot easier. Let's import it as well: End of explanation """ # img = plt.imread(files[0]) # outputs: FileNotFoundError """ Explanation: Let's try loading the first image in our dataset: We have a list of filenames, and we know where they are. But we need to combine the path to the file and the filename itself. If we try and do this: End of explanation """ print(os.path.join('img_align_celeba/', files[0])) plt.imread(os.path.join('img_align_celeba/', files[0])) """ Explanation: plt.imread will not know where that file is. We can tell it where to find the file by using os.path.join: End of explanation """ files = [os.path.join('img_align_celeba', file_i) for file_i in os.listdir('img_align_celeba') if '.jpg' in file_i] files """ Explanation: Now we get a bunch of numbers! 
I'd rather not have to keep prepending the path to my files, so I can create the list of files like so: End of explanation """ img = plt.imread(files[0]) # img.<tab> img """ Explanation: Let's set this to a variable, img, and inspect a bit further what's going on: End of explanation """ img = plt.imread(files[5]) plt.imshow(img) """ Explanation: <a name="rgb-image-representation"></a> RGB Image Representation It turns out that all of these numbers are capable of describing an image. We can use the function imshow to see this: End of explanation """ img.shape # outputs: (218, 178, 3) """ Explanation: Let's break this data down a bit more. We can see the dimensions of the data using the shape accessor: End of explanation """ #dja from matplotlib.colors import LinearSegmentedColormap cdict = { 'red': ((0.0, 0.0, 0.0), (1.0, 1.0, 1.0)), 'green': ((0.0, 0.0, 0.0), (1.0, 0.0, 0.0)), 'blue': ((0.0, 0.0, 0.0), (1.0, 0.0, 0.0)) } blackred = LinearSegmentedColormap('blackred', cdict) plt.figure() plt.imshow(img[:, :, 0], cmap=blackred) plt.figure() plt.imshow(img[:, :, 1], cmap="gray") plt.figure() plt.imshow(img[:, :, 2]) """ Explanation: This means that the image has 218 rows, 178 columns, and 3 color channels corresponding to the Red, Green, and Blue channels of the image, or RGB. Let's try looking at just one of the color channels. We can use the square brackets just like when we tried to access elements of our list: End of explanation """ np.min(img), np.max(img) """ Explanation: We use the special colon operator to say take every value in this dimension. This is saying, give me every row, every column, and the 0th dimension of the color channels. What we see now is a heatmap of our image corresponding to each color channel. <a name="understanding-data-types-and-ranges-uint8-float32"></a> Understanding data types and ranges (uint8, float32) Let's take a look at the range of values of our image: End of explanation """ 2**32 """ Explanation: The numbers are all between 0 to 255. What a strange number you might be thinking. Unless you are one of 10 types of people in this world, those that understand binary and those that don't. Don't worry if you're not. You are likely better off. 256 values is how much information we can stick into a byte. We measure a byte using bits, and each byte takes up 8 bits. Each bit can be either 0 or 1. When we stack up 8 bits, or 10000000 in binary, equivalent to 2 to the 8th power, we can express up to 256 possible values, giving us our range, 0 to 255. You can compute any number of bits using powers of two. 2 to the power of 8 is 256. How many values can you stick in 16 bits (2 bytes)? Or 32 bits (4 bytes) of information? Let's ask python: End of explanation """ img.dtype """ Explanation: numpy arrays have a field which will tell us how many bits they are using: dtype: End of explanation """ img.astype(np.float32) """ Explanation: uint8: Let's decompose that: unsigned, int, 8. That means the values do not have a sign, meaning they are all positive. They are only integers, meaning no decimal places. And that they are all 8 bits. Something which is 32-bits of information can express a single value with a range of nearly 4.3 billion different possibilities (2**32). We'll generally need to work with 32-bit data when working with neural networks. 
In order to do that, we can simply ask numpy for the correct data type: End of explanation """ plt.imread(files[0]) """ Explanation: This is saying, let me see this data as a floating point number, meaning with decimal places, and with 32 bits of precision, rather than the previous data types 8 bits. This will become important when we start to work with neural networks, as we'll need all of those extra possible values! <a name="visualizing-your-data-as-images"></a> Visualizing your data as images We've seen how to look at a single image. But what if we have hundreds, thousands, or millions of images? Is there a good way of knowing what our dataset looks like without looking at their file names, or opening up each image one at a time? One way we can do that is to randomly pick an image. We've already seen how to read the image located at one of our file locations: End of explanation """ print(np.random.randint(0, len(files))) print(np.random.randint(0, len(files))) print(np.random.randint(0, len(files))) """ Explanation: to pick a random image from our list of files, we can use the numpy random module: End of explanation """ filename = files[np.random.randint(0, len(files))] img = plt.imread(filename) plt.imshow(img) """ Explanation: This function will produce random integers between a range of values that we specify. We say, give us random integers from 0 to the length of files. We can now use the code we've written before to show a random image from our list of files: End of explanation """ def plot_image(filename): img = plt.imread(filename) plt.imshow(img) """ Explanation: This might be something useful that we'd like to do often. So we can use a function to help us in the future: End of explanation """ f = files[np.random.randint(0, len(files))] plot_image(f) """ Explanation: This function takes one parameter, a variable named filename, which we will have to specify whenever we call it. That variable is fed into the plt.imread function, and used to load an image. It is then drawn with plt.imshow. Let's see how we can use this function definition: End of explanation """ plot_image(files[np.random.randint(0, len(files))]) """ Explanation: or simply: End of explanation """ def imcrop_tosquare(img): """Make any image a square image. Parameters ---------- img : np.ndarray Input image to crop, assumed at least 2d. Returns ------- crop : np.ndarray Cropped image. """ if img.shape[0] > img.shape[1]: extra = (img.shape[0] - img.shape[1]) if extra % 2 == 0: crop = img[extra // 2:-extra // 2, :] else: crop = img[max(0, extra // 2 - 1):min(-1, -extra // 2), :] elif img.shape[1] > img.shape[0]: extra = (img.shape[1] - img.shape[0]) if extra % 2 == 0: crop = img[:, extra // 2:-extra // 2] else: crop = img[:, max(0, extra // 2 - 1):min(-1, -extra // 2)] else: crop = img return crop """ Explanation: We use functions to help us reduce the main flow of our code. It helps to make things clearer, using function names that help describe what is going on. <a name="image-manipulation"></a> Image Manipulation <a name="cropping-images"></a> Cropping images We're going to create another function which will help us crop the image to a standard size and help us draw every image in our list of files as a grid. In many applications of deep learning, we will need all of our data to be the same size. For images this means we'll need to crop the images while trying not to remove any of the important information in it. Most image datasets that you'll find online will already have a standard size for every image. 
But if you're creating your own dataset, you'll need to know how to make all the images the same size. One way to do this is to find the longest edge of the image, and crop this edge to be as long as the shortest edge of the image. This will convert the image to a square one, meaning its sides will be the same lengths. The reason for doing this is that we can then resize this square image to any size we'd like, without distorting the image. Let's see how we can do that: End of explanation """ def imcrop(img, amt): if amt <= 0: return img row_i = int(img.shape[0] * amt) // 2 col_i = int(img.shape[1] * amt) // 2 return img[row_i:-row_i, col_i:-col_i] """ Explanation: There are a few things going on here. First, we are defining a function which takes as input a single variable. This variable gets named img inside the function, and we enter a set of if/else-if conditionals. The first branch says, if the rows of img are greater than the columns, then set the variable extra to their difference and divide by 2. The // notation means to perform an integer division, instead of a floating point division. So 3 // 2 = 1, not 1.5. We need integers for the next line of code which says to set the variable crop to img starting from extra rows, and ending at negative extra rows down. We can't be on row 1.5, only row 1 or 2. So that's why we need the integer divide there. Let's say our image was 128 x 96 x 3. We would have extra = (128 - 96) // 2, or 16. Then we'd start from the 16th row, and end at the -16th row, or the 112th row. That adds up to 96 rows, exactly the same number of columns as we have. Let's try another crop function which can crop by an arbitrary amount. It will take an image and a single factor from 0-1, saying how much of the original image to crop: End of explanation """ #from scipy.<tab>misc import <tab>imresize """ Explanation: <a name="resizing-images"></a> Resizing images For resizing the image, we'll make use of a python library, scipy. Let's import the function which we need like so: End of explanation """ from scipy.misc import imresize imresize? """ Explanation: Notice that you can hit tab after each step to see what is available. That is really helpful as I never remember what the exact names are. End of explanation """ square = imcrop_tosquare(img) crop = imcrop(square, 0.2) rsz = imresize(crop, (64, 64)) plt.imshow(rsz) """ Explanation: The imresize function takes a input image as its first parameter, and a tuple defining the new image shape as rows and then columns. Let's see how our cropped image can be imresized now: End of explanation """ plt.imshow(rsz, interpolation='nearest') """ Explanation: Great! To really see what's going on, let's turn off the interpolation like so: End of explanation """ mean_img = np.mean(rsz, axis=2) print(mean_img.shape) plt.imshow(mean_img, cmap='gray') """ Explanation: Each one of these squares is called a pixel. Since this is a color image, each pixel is actually a mixture of 3 values, Red, Green, and Blue. When we mix those proportions of Red Green and Blue, we get the color shown here. We can combine the Red Green and Blue channels by taking the mean, or averaging them. This is equivalent to adding each channel, R + G + B, then dividing by the number of color channels, (R + G + B) / 3. 
We can use the numpy.mean function to help us do this: End of explanation """ imgs = [] for file_i in files: img = plt.imread(file_i) square = imcrop_tosquare(img) crop = imcrop(square, 0.2) rsz = imresize(crop, (64, 64)) imgs.append(rsz) print(len(imgs)) """ Explanation: This is an incredibly useful function which we'll revisit later when we try to visualize the mean image of our entire dataset. <a name="croppingresizing-images"></a> Cropping/Resizing Images We now have functions for cropping an image to a square image, and a function for resizing an image to any desired size. With these tools, we can begin to create a dataset. We're going to loop over our 10 files, crop the image to a square to remove the longer edge, and then crop again to remove some of the background, and then finally resize the image to a standard size of 64 x 64 pixels. End of explanation """ plt.imshow(imgs[0]) """ Explanation: We now have a list containing our images. Each index of the imgs list is another image which we can access using the square brackets: End of explanation """ imgs[0].shape """ Explanation: Since all of the images are the same size, we can make use of numpy's array instead of a list. Remember that an image has a shape describing the height, width, channels: End of explanation """ data = np.array(imgs) data.shape """ Explanation: <a name="the-batch-dimension"></a> The Batch Dimension there is a convention for storing many images in an array using a new dimension called the batch dimension. The resulting image shape should be: N x H x W x C The Number of images, or the batch size, is first; then the Height or number of rows in the image; then the Width or number of cols in the image; then finally the number of channels the image has. A Color image should have 3 color channels, RGB. A Grayscale image should just have 1 channel. We can combine all of our images to look like this in a few ways. The easiest way is to tell numpy to give us an array of all the images: End of explanation """ data = np.concatenate([img_i[np.newaxis] for img_i in imgs], axis=0) data.shape """ Explanation: We could also use the numpy.concatenate function, but we have to create a new dimension for each image. Numpy let's us do this by using a special variable np.newaxis End of explanation """
sdpython/ensae_teaching_cs
_doc/notebooks/td2a_ml/td2a_timeseries.ipynb
mit
from jyquickhelper import add_notebook_menu add_notebook_menu() %matplotlib inline """ Explanation: 2A.ml - Time series Predictions on time series and other classic operations. End of explanation """ import pyensae.datasource as ds ds.download_data('xavierdupre_sessions.csv', url='https://raw.githubusercontent.com/sdpython/ensae_teaching_cs/master/_doc/notebooks/td2a_ml/') import pandas data = pandas.read_csv("xavierdupre_sessions.csv", sep="\t") data.set_index("Date", inplace=True) data.plot(figsize=(12,4)) data[-365:].plot(figsize=(12,4)) """ Explanation: A time series We retrieve the number of sessions of a website: xavierdupre_sessions.csv. End of explanation """ from statsmodels.tsa.tsatools import detrend """ Explanation: Exercise 1: trend The detrend function. Remove the trend. End of explanation """ from statsmodels.tsa.seasonal import seasonal_decompose from seasonal import fit_seasons """ Explanation: Exercise 2: remove the seasonality With two functions, seasonal_decompose or fit_seasons End of explanation """ import matplotlib.pyplot as plt from statsmodels.graphics.tsaplots import plot_acf, plot_pacf """ Explanation: Exercise 3: autocorrelogram, periodogram We take inspiration from the example: Autoregressive Moving Average (ARMA): Sunspots data. End of explanation """
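A possible starting point for the three exercises is sketched below. This is an addition rather than the official solution, and it assumes the series of interest is the first column of data, that the data is daily with a weekly seasonality, and that a recent statsmodels is installed (older versions use freq= instead of period= in seasonal_decompose).

```python
# Sketch of one way to attack the exercises above, under the assumptions stated in the text.
series = data[data.columns[0]].astype(float)

trend_removed = detrend(series.values, order=1)        # Exercise 1: remove a linear trend
decomposition = seasonal_decompose(series, period=7)   # Exercise 2: assumed weekly seasonality
decomposition.plot()
plot_acf(series, lags=60)                              # Exercise 3: autocorrelogram
plot_pacf(series, lags=60)
```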
satishkt/ML-Foundations-Coursera
Week4-Clustering/Document retrieval.ipynb
bsd-2-clause
import graphlab """ Explanation: Document retrieval from wikipedia data Fire up GraphLab Create End of explanation """ people = graphlab.SFrame('people_wiki.gl/') """ Explanation: Load some text data - from wikipedia, pages on people End of explanation """ people.head() len(people) """ Explanation: Data contains: link to wikipedia article, name of person, text of article. End of explanation """ obama = people[people['name'] == 'Barack Obama'] obama obama['text'] """ Explanation: Explore the dataset and checkout the text it contains Exploring the entry for president Obama End of explanation """ clooney = people[people['name'] == 'George Clooney'] clooney['text'] """ Explanation: Exploring the entry for actor George Clooney End of explanation """ obama['word_count'] = graphlab.text_analytics.count_words(obama['text']) print obama['word_count'] """ Explanation: Get the word counts for Obama article End of explanation """ obama_word_count_table = obama[['word_count']].stack('word_count', new_column_name = ['word','count']) """ Explanation: Sort the word counts for the Obama article Turning dictonary of word counts into a table End of explanation """ obama_word_count_table.head() obama_word_count_table.sort('count',ascending=False) """ Explanation: Sorting the word counts to show most common words at the top End of explanation """ people['word_count'] = graphlab.text_analytics.count_words(people['text']) people.head() tfidf = graphlab.text_analytics.tf_idf(people['word_count']) tfidf[0] people['tfidf'] = tfidf['docs'] """ Explanation: Most common words include uninformative words like "the", "in", "and",... Compute TF-IDF for the corpus To give more weight to informative words, we weigh them by their TF-IDF scores. End of explanation """ obama = people[people['name'] == 'Barack Obama'] obama[['tfidf']].stack('tfidf',new_column_name=['word','tfidf']).sort('tfidf',ascending=False) """ Explanation: Examine the TF-IDF for the Obama article End of explanation """ clinton = people[people['name'] == 'Bill Clinton'] beckham = people[people['name'] == 'David Beckham'] """ Explanation: Words with highest TF-IDF are much more informative. Manually compute distances between a few people Let's manually compare the distances between the articles for a few famous people. End of explanation """ graphlab.distances.cosine(obama['tfidf'][0],clinton['tfidf'][0]) graphlab.distances.cosine(obama['tfidf'][0],beckham['tfidf'][0]) """ Explanation: Is Obama closer to Clinton than to Beckham? We will use cosine distance, which is given by (1-cosine_similarity) and find that the article about president Obama is closer to the one about former president Clinton than that of footballer David Beckham. End of explanation """ knn_model = graphlab.nearest_neighbors.create(people,features=['tfidf'],label='name') """ Explanation: Build a nearest neighbor model for document retrieval We now create a nearest-neighbors model and apply it to document retrieval. End of explanation """ knn_model.query(obama) """ Explanation: Applying the nearest-neighbors model for retrieval Who is closest to Obama? 
End of explanation """ swift = people[people['name'] == 'Taylor Swift'] knn_model.query(swift) jolie = people[people['name'] == 'Angelina Jolie'] knn_model.query(jolie) arnold = people[people['name'] == 'Arnold Schwarzenegger'] knn_model.query(arnold) elton_john = people[people['name'] == 'Elton John'] elton_john #stack('word_count', new_column_name = ['word','count']) elton_john_wc_table = elton_john[['word_count']].stack('word_count',new_column_name = ['word','count']) elton_john_wc_table elton_john_wc_table.sort('count',ascending=False) elton_john_tfidf_table = elton_john[['tfidf']].stack('tfidf',new_column_name= ['tf-idf','count']) elton_john_tfidf_table.sort('count',ascending=False) vic_beckham = people[people['name'] == 'Victoria Beckham'] graphlab.distances.cosine(elton_john['tfidf'][0],vic_beckham['tfidf'][0]) paul_m = people[people['name']== 'Paul McCartney'] graphlab.distances.cosine(elton_john['tfidf'][0],paul_m['tfidf'][0]) knn_model_tfidf = graphlab.nearest_neighbors.create(people,features=['tfidf'],label='name',distance='cosine') knn_model_wc= graphlab.nearest_neighbors.create(people,features=['word_count'],label = 'name',distance='cosine') knn_model_wc.query(elton_john) knn_model_tfidf.query(elton_john) knn_model_wc.query(vic_beckham) knn_model_tfidf.query(vic_beckham) """ Explanation: As we can see, president Obama's article is closest to the one about his vice-president Biden, and those of other politicians. Other examples of document retrieval End of explanation """
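As a small added convenience (not in the original notebook), the stack/sort pattern used above to inspect a person's strongest words can be wrapped in a helper so that it becomes one call per person:

```python
# Helper sketch built only from SFrame operations already used above (stack, sort, head).
def top_words(name, field='tfidf', k=10):
    row = people[people['name'] == name]
    table = row[[field]].stack(field, new_column_name=['word', 'value'])
    return table.sort('value', ascending=False).head(k)

top_words('Elton John')
```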
DataReply/persistable
examples/Persistable.ipynb
gpl-3.0
# Persistable Class: from persistable import Persistable # Set a persistable top path: from pathlib import Path LOCALDATAPATH = Path('.').absolute() """ Explanation: Introduction: This material has been used in the past to teach colleagues in our group how to use persistable. The persistable package provides a general loggable superclass that provides Python users a simple way to persist load calculations and track corresponding calculation parameters. Inheriting from Persistable automatically spools a logger and appends the PersistLoad object for easy and reproducible data persistance with loading, with parameter tracking. The PersistLoad object is based on setting a workingdatadir within which all persisted data is saved and logs are stored. Such a directory acts as a home for a specific set of experiments. For more details, read the docs. Imports: End of explanation """ params = { "hello": "world", "another_dict": { "test": [1,2,3] }, "a": 1, "b": 4 } p = Persistable( payload_name="first_payload", params=params, workingdatapath=LOCALDATAPATH / "knowledgeshare_20170929" # object will live in this local disk location ) """ Explanation: Instantiate Persistable: Each persistable object is instantiated with parameters that should uniquely (or nearly uniquely) define the payload. End of explanation """ # ML Example: """ def _generate_payload(self): X = pd.read_csv(self.params['datafile']) model = XGboost(X) model.fit() self.payload['model'] = model """ # Silly Example: def _generate_payload(self): self.payload['sum'] = self.params['a'] + self.params['b'] self.payload['msg'] = self.params['hello'] """ Explanation: Define Payload: Payloads are defined by overriding the _generate_payload function: Payload defined by _generate_payload function: Simply override _generate_payload to give the Persistable object generate functionality. Note that generate here means to create the payload. The term is not meeant to indicate that a python generator is being produced. End of explanation """ def bind(instance, method): def binding_scope_fn(*args, **kwargs): return method(instance, *args, **kwargs) return binding_scope_fn p._generate_payload = bind(p, _generate_payload) p.generate() """ Explanation: Now we will monkeypatch the payload generator to override its counterpart in Persistable object (only necessary because we've defined the generator outside of an IDE). End of explanation """ class SillyPersistableExample(Persistable): def _generate_payload(self): self.payload['sum'] = self.params['a'] + self.params['b'] self.payload['msg'] = self.params['hello'] p2 = SillyPersistableExample(payload_name="silly_example", params=params, workingdatapath=LOCALDATAPATH / "knowledgeshare_20170929") p2.generate() """ Explanation: Persistable as a Super Class: The non Monkey Patching equivalent to what we did above: End of explanation """ p_test = Persistable( "first_payload", params=params, workingdatapath=LOCALDATAPATH/"knowledgeshare_20170929" ) p_test.load() p_test.payload """ Explanation: Load: End of explanation """
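One added note: since the parameters are meant to uniquely define the payload, changing any value in params points at a different persisted artifact. A quick sketch using the SillyPersistableExample class defined above:

```python
# Sketch: same payload_name, different parameters -> a separately persisted payload.
changed_params = dict(params, b=5)
p_changed = SillyPersistableExample(payload_name="silly_example", params=changed_params,
                                    workingdatapath=LOCALDATAPATH / "knowledgeshare_20170929")
p_changed.generate()
print(p_changed.payload['sum'])   # 1 + 5 = 6, persisted for this parameter set
```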
gregmedlock/Medusa
docs/machine_learning.ipynb
mit
import medusa from medusa.test import create_test_ensemble ensemble = create_test_ensemble("Staphylococcus aureus") """ Explanation: Applying machine learning to guide ensemble curation An ensemble of models can be though of as a set of feasible hypotheses about how a system behaves. From a machine learning perspective, these hypotheses can alternatively be viewed as samples (or observations), each of which has a distinct set of features (i.e. the model components that vary across an ensemble) and can further generate new features by performing simulations. An example of the analyses enabled by this view of ensembles can be found in Medlock & Papin, where ensemble structure and ensemble simulations are used to identify reactions that are high-priority targets for curation. In this example, we demonstrate how ensembles of genome-scale metabolic models and machine learning can be combined to identify reactions that might strongly influence a single prediction (flux through biomass). We will use an ensemble for Staphylococcus aureus that contains 1000 members. The ensemble was generated through iterative gapfilling to enable growth on single C/N media conditions using a draft reconstruction from ModelSEED. End of explanation """ from medusa.flux_analysis import flux_balance biomass_fluxes = flux_balance.optimize_ensemble(ensemble, return_flux="bio1", num_processes = 4) """ Explanation: Using the ensemble, we'll perform flux balance analysis and return flux through the biomass reaction (which has an ID of "bio1"). The ensemble already has the media conditions set as "complete", meaning the boundary reactions for all transportable metabolites are open (e.g. the lower bound of all exchange reactions is -1000). End of explanation """ biomass_fluxes.head(10) """ Explanation: The optimize_ensemble function returns a pandas DataFrame, where each column is a reaction and each row is an ensemble member. For illustration, here are the values for the first 10 members of the ensemble: End of explanation """ import matplotlib.pylab as plt fig, ax = plt.subplots() plt.hist(biomass_fluxes['bio1']) ax.set_ylabel('# ensemble members') ax.set_xlabel('Flux through biomass reaction') plt.show() """ Explanation: To get a sense for the distribution of biomass flux predictions, we can visualize them with matplotlib: End of explanation """ import sklearn from sklearn.ensemble import RandomForestRegressor """ Explanation: As you can see, there is quite a bit of variation in the maximum flux through biomass! Keep in mind that this is an ensemble of gapfilled reconstructions with no manual curation, and that none of the uptake rates are reallistically constrained, so these predictions are unrealistically high (100 units of flux through biomass is a doubling time of 36 seconds, at least an order of magnitude faster than even the fittest E. coli grown in vitro). Our goal now is to identify which features in the ensemble are predictive of flux through biomass. If we can identify these reactions, then turn to the literature or perform an experiment to figure out whether they are really catalyzed by the organism, we can greatly reduce the uncertainty in our predictions of biomass flux! Given that we have a continous output, our problem can be addressed using regression. We will use the binary presence/absence of each reaction in each member of the ensemble as input to a random forest regressor, implemented in scikit-learn. 
Many supervised regression models will work for this analysis, but random forest is particularly easy to understand and interpret when the input is binary (i.e. reaction presence/absence). End of explanation """ # Grab the features and states for the ensemble and convert to a dataframe import pandas as pd feature_dict = {} for feature in ensemble.features: feature_dict[feature.id] = feature.states feature_frame = pd.DataFrame.from_dict(feature_dict) # convert the presence and absence of features to a boolean value feature_frame = feature_frame.astype(bool) # extract biomass and add it to the dataframe, keeping track of the feature names input_cols = feature_frame.columns biomass_fluxes.index = [member_id for member_id in biomass_fluxes.index] feature_frame['bio1'] = biomass_fluxes['bio1'] """ Explanation: We reformat the data here, getting the feature states for each ensemble member and converting them to True/False, then combine them into a single DataFrame with the biomass flux predictions for matched members. End of explanation """ # create a regressor to predict biomass flux from reaction presence/absence regressor = RandomForestRegressor(n_estimators=100,oob_score=True) fit_regressor = regressor.fit(X=feature_frame[input_cols],y=feature_frame['bio1']) fit_regressor.oob_score_ """ Explanation: Now we actually construct and fit the random forest regressor, using 100 total trees in the forest. The oob_score_ reported here is the coefficient of determination (R<sup>2</sup>) calculated using the out-of-bag samples for each tree. As a reminder, R<sup>2</sup> varies from 0 to 1.0, where 1.0 is a perfect fit. End of explanation """ imp_frame = pd.DataFrame(fit_regressor.feature_importances_, index=feature_frame[input_cols].columns).sort_values( by=0,ascending=False) imp_frame.columns = ['importance'] imp_frame.head(10) """ Explanation: With a reasonably-performing regressor in hand, we can inspect the important features to identify reactions that contribute to uncertainty in biomass flux predictions. End of explanation """ for member in ensemble.features.get_by_id('rxn01640_c_lower_bound').states: ensemble.features.get_by_id('rxn01640_c_lower_bound').states[member] = 0 ensemble.features.get_by_id('rxn01640_c_upper_bound').states[member] = 0 biomass_fluxes_post_curation = flux_balance.optimize_ensemble(ensemble,return_flux="bio1") import matplotlib.pylab as plt import numpy as np fig, ax = plt.subplots() # declare specific bins for our histogram bins=np.histogram(np.hstack((biomass_fluxes['bio1'],biomass_fluxes_post_curation['bio1'])), bins=15)[1] plt.hist(biomass_fluxes['bio1'], bins, label = 'Original', alpha = 0.3, color='black') plt.hist(biomass_fluxes_post_curation['bio1'], bins, label = 'Post-curation', alpha = 0.4, color = 'green') plt.axvline(x=biomass_fluxes['bio1'].mean(), c = 'black', alpha = 0.6) plt.axvline(x=biomass_fluxes_post_curation['bio1'].mean(), c = 'green') ax.set_ylabel('# ensemble members') ax.set_xlabel('Flux through biomass reaction') plt.legend(loc='upper right') plt.savefig('post_FBA_curation.svg') plt.savefig('post_FBA_curation.png') plt.show() """ Explanation: With the list of important features in hand, the first thing we should do is turn to the literature to see if someone else has already figured out whether these reactions are present or absent in Staphylococcus aureus. The top reaction, rxn01640, is N-Formimino-L-glutamate iminohydrolase, which is part of the histidine utilization pathway. 
A quick consultation with a review on the regulation of histidine utilization in bacteria suggests that the enzyme for this reaction, encoded by the hutF gene, is widespread and conserved amongst bacteria. However, the hutF gene is part of a second, less common pathway that branches off of the primary histidine utilization pathway. If we consult PATRIC with a search for the hutF gene, we see that, although the gene is widespread, there is no predicted hutF gene in any sequenced Staphylococcus aureus genome. Although absence of evidence is not evidence of absence, we can be relatively confident that hutF is not encoded in the Staphylococcus aureus genome, given how well-studied this pathogen is. What happens if we "correct" this issue in the ensemble? Let's inactivate the lower and upper bound for the reaction in all the members, then perform flux balance analysis again. End of explanation """
abotero/text-mining-amazon-reviews
final-project-part3and4.ipynb
mit
import numpy as np import pandas as pd import gzip import json import gzip import matplotlib %matplotlib inline import matplotlib.pyplot as plt matplotlib.style.use('ggplot') pd.set_option('display.max_colwidth', -1) #Some functions to handle files def parse(path): with open(path) as data_file: for d in data_file: yield eval(d) def getDF(path): i = 0 df = {} for d in parse(path): df[i] = d i += 1 return pd.DataFrame.from_dict(df, orient='index') """ Explanation: TEXT MINING AMAZON REVIEWS By Andrew Botero PART 3 - Exploratory Data Analysis End of explanation """ %%time #Load the dataset df = getDF('data/Electronics_5_sample_400k.json') """ Explanation: Let's load the data in a Pandas dataframe, the data can be downloaded from http://jmcauley.ucsd.edu/data/amazon/, I have uncompressed it in my data directory. End of explanation """ df.head() df.shape """ Explanation: Let's take a quick look at the data: End of explanation """ #transform the Unix timestamp column to date from datetime import datetime df['date'] = pd.to_datetime(df['unixReviewTime'],unit='s') """ Explanation: Let's convert the unixReviewTime to date type. End of explanation """ # Identifiy missing values null_data = df[df.isnull().any(axis=1)] print 'There are {} rows with missing values'.format(len(null_data.index)) [col for col in null_data.columns if df[col].isnull().any()] """ Explanation: Now let's see if there are any null values in the data. End of explanation """ # Review length distribution df['reviewLength'] = df['reviewText'].apply(len) df['reviewWordCount'] = df['reviewText'].map(lambda x: len(str.split(x))) #Reviews over 1000 character are quite rare so don't want to include them in the plot ax = df[df['reviewLength'] < 1000]['reviewLength'].plot(kind='hist') ax.set_xlabel("Review Length") ax.set_ylabel("Frequency") plt.title('Review Length Distribution') plt.show() ax = df[df['reviewWordCount'] < 600]['reviewWordCount'].plot(kind='hist') ax.set_xlabel("Word Count") ax.set_ylabel("Frequency") plt.title('Word Count Distribution') plt.show() """ Explanation: Since the null column is reviewerName and we are interested in the review text, we will keep these rows. Let's look at how long are the reviews. End of explanation """ df['overall'].describe() ax = df['overall'].value_counts().plot.bar() ax.set_xlabel("Rating") ax.set_ylabel("Frequency") plt.title('Rating Distribution') plt.show() """ Explanation: Let's see summarize the overall scores End of explanation """ g1 = df.groupby(["asin", "overall"]).size().reset_index(name='count') g1 = g1.sort_values(by=['count'], ascending=[False]) #Top 10 products with most reviews g1.head(10) g1.tail(10) ax = g1[g1['count'] < 30].boxplot(column='count', by='overall') ax.set_xlabel("Overall rating") ax.set_ylabel("Counts") plt.title('') plt.show() """ Explanation: Let's see if there is a relation between the number of reviews and the score. 
End of explanation """ #import nltk #nltk.download('vader_lexicon') %%time from nltk.sentiment.vader import SentimentIntensityAnalyzer SIA = SentimentIntensityAnalyzer() #Compound score positive: score >= 0.5, negative score <= -0.5, neutral (compound score > -0.5) and (compound score < 0.5) df = df.merge(df.reviewText.apply(lambda x: pd.Series({ 'vader_compound': SIA.polarity_scores(x)['compound'], 'vader_pos': SIA.polarity_scores(x)['pos'], 'vader_neg': SIA.polarity_scores(x)['neg'] })),left_index=True, right_index=True) ax = df.boxplot(column=['vader_compound'], by='overall') ax.set_xlabel("Overall rating") ax.set_ylabel("Compound Score") plt.title('') plt.show() ax = df.boxplot(column=['vader_pos', 'vader_neg'], by='overall') plt.show() """ Explanation: It seems to be a relation between the high score and the number of reviews. Let's use the vader lexicon to make some analysis End of explanation """ def get_sentiment(row): if row['vader_compound'] >= 0.5 or row['overall'] > 3: return 'Positive' elif row['vader_compound'] <= -0.5 or row['overall'] < 3: return 'Negative' else: return 'Neutral' df['sentiment'] = df.apply(lambda row: get_sentiment(row), axis=1) df[['overall', 'vader_compound', 'reviewText', 'sentiment']].head(30) """ Explanation: Let's label our dataset End of explanation """ from wordcloud import WordCloud,STOPWORDS stopwords = set(STOPWORDS) # Transform to single string positive_reviews_str = df[df['sentiment'] == 'Positive'].reviewText.str.cat() # Create wordclouds wordcloud_positive = WordCloud( background_color='white', stopwords=stopwords, max_words=200, max_font_size=40, scale=3, random_state=1 # chosen at random by flipping a coin; it was heads ).generate(positive_reviews_str) fig = plt.figure(figsize=(30,10)) ax1 = fig.add_subplot(211) ax1.imshow(wordcloud_positive,interpolation='bilinear') ax1.axis("off") ax1.set_title('Reviews with Positive Scores', fontsize=20) plt.show() negative_reviews_str = df[df['sentiment'] == 'Negative'].reviewText.str.cat() wordcloud_negative = WordCloud( background_color='black', stopwords=stopwords, max_words=200, max_font_size=40, scale=3, random_state=1 # chosen at random by flipping a coin; it was heads ).generate(negative_reviews_str) fig = plt.figure(figsize=(30,10)) ax1 = fig.add_subplot(211) ax1.imshow(wordcloud_negative,interpolation='bilinear') ax1.axis("off") ax1.set_title('Reviews with Negative Scores', fontsize=20) plt.show() """ Explanation: Let's visualize the most common words, if you use anaconde install the wordcloud package using conda install -c conda-forge wordcloud=1.2.1. End of explanation """ df.groupby(['sentiment']).size() """ Explanation: Overall Sentiment End of explanation """ from sklearn.model_selection import train_test_split from sklearn import feature_extraction, ensemble, cross_validation, metrics from sklearn.metrics import confusion_matrix """ Explanation: PART 4 - Modelling Performance End of explanation """ df = df[df['sentiment'] != 'Neutral'] df= df.reset_index(drop=True) """ Explanation: Let's focus only in the positive and negative reviews. 
End of explanation """ df['label'] = df['sentiment'].map(lambda x: 1 if x == "Positive" else 0) train, test = train_test_split(df, test_size=0.2) %%time vectorizer = feature_extraction.text.CountVectorizer(analyzer = "word", stop_words = 'english', max_features = 1000, preprocessor = None,) vectorizer.fit(train.reviewText) x_train = vectorizer.transform(train.reviewText) (train.reviewText) x_test = vectorizer.transform(test.reviewText) y_train = train.label y_test = test.label prediction = dict() def print_confusion(y, y_hat): confusion = pd.crosstab(y, y_hat, rownames=['Predicted'], colnames=[' True'], margins=True) print(confusion) def print_score(model, x, y): print ('Model Score: {:2.4}%' . format(model.score(x, y)*100)) """ Explanation: For our model the sentiment column needs to be transformed into a binary column End of explanation """ %%time from sklearn.naive_bayes import MultinomialNB model = MultinomialNB().fit(x_train, y_train) prediction['Multinomial'] = model.predict(x_test) print_confusion(y_test, prediction['Multinomial']) print_score(model, x_train, y_train) """ Explanation: Multinomial Naïve Bayes learning method End of explanation """ %%time from sklearn.naive_bayes import BernoulliNB model = BernoulliNB().fit(x_train, y_train) prediction['Bernoulli'] = model.predict(x_test) print_confusion(y_test, prediction['Bernoulli']) print_score(model, x_train, y_train) """ Explanation: Bernoulli Naïve Bayes learning method End of explanation """ %%time from sklearn.linear_model import LogisticRegression logreg = LogisticRegression(C=1e5) logreg_result = logreg.fit(x_train, y_train) prediction['Logistic'] = logreg.predict(x_test) print_confusion(y_test, prediction['Logistic']) print_score(logreg, x_train, y_train) """ Explanation: Logistic regression End of explanation """ %%time from sklearn import svm svc_model = svm.LinearSVC(penalty = 'l1', dual=False, C=1.0, random_state=2016) svc_model.fit(x_train, y_train) prediction['SVM'] = svc_model.predict(x_test) print_confusion(y_test, prediction['SVM']) print_score(svc_model, x_train, y_train) """ Explanation: Support Vector Machine End of explanation """ from sklearn.ensemble import RandomForestClassifier %%time rfc_model = RandomForestClassifier(n_estimators = 100, class_weight='balanced', random_state = 2016) rfc_model.fit(x_train, y_train) prediction['Random Forest'] = rfc_model.predict(x_test) %%time from sklearn.model_selection import cross_val_score scores = cross_val_score(rfc_model, x_train, y_train, scoring = "roc_auc") print("CV AUC {}, Average AUC {}".format(scores, scores.mean())) print_confusion(y_test, prediction['Random Forest']) print_score(rfc_model, x_train, y_train) """ Explanation: Random Forest End of explanation """ from sklearn.metrics import roc_curve, auc def plot_roc_auc (y, prediction, title_text): cmp = 0 colors = ['g', 'o', 'y', 'k', 'm' ] for model, predicted in prediction.items(): false_positive_rate, true_positive_rate, thresholds = roc_curve(y, predicted) roc_auc = auc(false_positive_rate, true_positive_rate) plt.plot(false_positive_rate, true_positive_rate, colors[cmp], label='%s: AUC %0.2f'% (model,roc_auc)) cmp += 1 plt.title(title_text) plt.legend(loc='lower right') plt.plot([0,1],[0,1],'r--') plt.xlim([-0.1,1.2]) plt.ylim([-0.1,1.2]) plt.ylabel('True Positive Rate') plt.xlabel('False Positive Rate') plt.show() plot_roc_auc (y_test, prediction, 'Classifiers comparison with ROC') """ Explanation: Plot ROC AUC End of explanation """
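A natural follow-up, not attempted above, is to replace the raw term counts with TF-IDF weights and to score with cross-validation rather than a single train/test split. A minimal sketch, assuming df still holds the reviewText and binary label columns built earlier:

from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

tfidf_clf = Pipeline([
    ('tfidf', TfidfVectorizer(stop_words='english', max_features=1000)),
    ('clf', LogisticRegression()),
])
scores = cross_val_score(tfidf_clf, df.reviewText, df.label, cv=5, scoring='roc_auc')
print('TF-IDF + logistic regression AUC: {:.3f} +/- {:.3f}'.format(scores.mean(), scores.std()))

Keeping the vectorizer inside the pipeline means it is refit on each training fold, so no vocabulary information leaks from the validation folds.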
phungkh/phys202-2015-work
assignments/assignment12/FittingModelsEx01.ipynb
mit
%matplotlib inline import matplotlib.pyplot as plt import numpy as np import scipy.optimize as opt """ Explanation: Fitting Models Exercise 1 Imports End of explanation """ a_true = 0.5 b_true = 2.0 c_true = -4.0 """ Explanation: Fitting a quadratic curve For this problem we are going to work with the following model: $$ y_{model}(x) = a x^2 + b x + c $$ The true values of the model parameters are as follows: End of explanation """ N=30 SD=2.0 x = np.linspace(-5,5,N) y =a_true*x**2 + b_true*x + c_true +np.random.normal(0,SD,N) plt.scatter(x,y) assert True # leave this cell for grading the raw data generation and plot """ Explanation: First, generate a dataset using this model using these parameters and the following characteristics: For your $x$ data use 30 uniformly spaced points between $[-5,5]$. Add a noise term to the $y$ value at each point that is drawn from a normal distribution with zero mean and standard deviation 2.0. Make sure you add a different random number to each point (see the size argument of np.random.normal). After you generate the data, make a plot of the raw data (use points). End of explanation """ def ymodel(x,a,b,c): return a*x**2 + b*x + c theta_best, theta_cov = opt.curve_fit(ymodel, x, y, sigma=SD) print('a = {0:.3f} +/- {1:.3f}'.format(theta_best[0], np.sqrt(theta_cov[0,0]))) print('b = {0:.3f} +/- {1:.3f}'.format(theta_best[1], np.sqrt(theta_cov[1,1]))) print('c = {0:.3f} +/- {1:.3f}'.format(theta_best[2], np.sqrt(theta_cov[2,2]))) x=np.linspace(-5,5,30) yfit = theta_best[0]*x**2 + theta_best[1]*x + theta_best[2] plt.figure(figsize=(10,6,)) plt.plot(x, yfit) plt.errorbar(x, y, 2.0,fmt='.k', ecolor='lightgray') plt.xlabel('x') plt.ylabel('y') plt.box(False) plt.ylim(-10,25) plt.title('Best fit') assert True # leave this cell for grading the fit; should include a plot and printout of the parameters+errors """ Explanation: Now fit the model to the dataset to recover estimates for the model's parameters: Print out the estimates and uncertainties of each parameter. Plot the raw data and best fit of the model. End of explanation """
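As an optional cross-check on the result above (not part of the exercise), the same quadratic can be fit with np.polyfit, and the known noise level can be passed to curve_fit as a per-point uncertainty array; newer SciPy releases generally expect sigma to have one entry per data point rather than a scalar. A sketch, assuming x, y, N and SD as defined above:

# np.polyfit returns the coefficients highest power first: [a, b, c]
print('polyfit estimates:', np.polyfit(x, y, deg=2))

# Treat sigma = 2.0 as a known absolute uncertainty on every point
sigma = np.full(N, SD)
theta, cov = opt.curve_fit(ymodel, x, y, sigma=sigma, absolute_sigma=True)
print('curve_fit estimates:', theta)
print('uncertainties:', np.sqrt(np.diag(cov)))

Both approaches should recover parameters close to the true values, with the uncertainties shrinking as N grows.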
DanielFGomez/Tarea2MetodosAvanzados
Punto2.ipynb
mit
fig,ax=subplots(3,3,figsize=(10, 10))
n=1
for i in range(3):
    for j in range(3):
        ax[i,j].scatter(X[:,0],X[:,n],c=Y)
        n+=1

Xnorm=sklearn.preprocessing.normalize(X)
pca=sklearn.decomposition.PCA()
pca.fit(Xnorm)

fig,ax=subplots(1,3,figsize=(16, 4))
ax[0].scatter(pca.transform(X)[:,0],Y,c=Y)
ax[0].set_xlabel('First principal component')
ax[0].set_ylabel('Wine type')
ax[1].scatter(pca.transform(X)[:,0],pca.transform(X)[:,1],c=Y)
ax[1].set_xlabel('First principal component')
ax[1].set_ylabel('Second principal component')
ax[2].plot(pca.explained_variance_ratio_)
xlabel('Number of components')
ylabel('Explained variance')

"""
Explanation: We can see that the method does not converge as the number of clusters grows. It stabilises somewhat at around 10 clusters, so that may be a useful number of clusters to work with, but the performance is still not very good and no number of clusters is clearly optimal.
End of explanation
"""

n=50
N=10
treesScore=zeros(n)
for k in range(N):
    for i,j in zip(logspace(0,1.5,n),range(n)):
        rf = RandomForestClassifier(n_estimators=int(i))
        rf.fit(X,Y)
        treesScore[j]+=rf.score(X,Y)*1.0/N

# plot against the same tree counts used in the loop so the x axis matches
plot(logspace(0,1.5,n),treesScore)
xscale('log')

"""
Explanation: A single component explains almost all of the variance. However, even with this decomposition into principal components, the different wine types overlap, especially in the range -0.05 to 0.05 along the first principal component. This is somewhat clearer in the middle figure, where the two leading principal components are plotted against each other. Although a pattern can be made out, the principal components do not separate the green points from the red ones. This means that PCA is not especially useful for distinguishing between the different wine types and classifying them.
End of explanation
"""
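The random forest scores above are computed on the same data used for fitting, so they overestimate real performance. A held-out estimate can be obtained with cross-validation; a minimal sketch, assuming X and Y are the wine feature matrix and labels used above (depending on the scikit-learn version, cross_val_score lives in sklearn.model_selection or sklearn.cross_validation):

from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rf = RandomForestClassifier(n_estimators=30)
cv_scores = cross_val_score(rf, X, Y, cv=5)
print('Mean cross-validated accuracy: {:.3f}'.format(cv_scores.mean()))

If the cross-validated accuracy is much lower than the training-set score, the forest is memorising the training data rather than learning a generalisable rule.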
ministryofjustice/opg-digi-deps-notebooks
notebooks/Digital Deputyship traffic distribution.ipynb
mit
import pandas as pd
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
%matplotlib inline

"""
Explanation: Digital Deputyship traffic distribution
We don't have enough data per day to see usage patterns for the site, so we need to be creative. What if we import data from the last month and group it by day of week or by hour of day?
End of explanation
"""

data = pd.read_csv("Analytics All Web Site Data Audience Overview 20150905-20151005.csv", comment="#")
data=data[:-1]
data.describe()

hour_start = "20150905"
#hour_start = datetime.datetime.strptime(hour_start,"%Y%m%d")
#seems like pandas is smart enough to parse the date string, so the line above is not needed (for now)
date_index = pd.date_range(hour_start, periods=len(data), freq='H')

"""
Explanation: Let's load the data
End of explanation
"""

#clean up data
del data['Hour Index']
indexed_data = data.set_index(date_index)
indexed_data.plot(title="sessions per hour over time period")

# last_line = None
# for index, row in indexed_data.iterrows():
#     last_line = row

# last_line.name.dayofweek
# last_line.name.hour

"""
Explanation: Data cleaning
End of explanation
"""

indexed_data.groupby(lambda x: x.dayofweek).mean().plot(kind="bar", title="sessions per week (0-Mon)")

"""
Explanation: Mean sessions per day of week (0 = Monday)
End of explanation
"""

indexed_data.groupby(lambda x: x.hour).mean().plot(kind="bar", title="sessions per hour")

"""
Explanation: Mean sessions per hour of day
End of explanation
"""

heat = indexed_data.groupby(lambda x: (x.dayofweek, x.hour)).mean()
mti = pd.MultiIndex.from_tuples(heat.index)
mti_data = heat.set_index(mti)
unstacked_data = mti_data.unstack(level=-1)
plt.pcolor(unstacked_data.T, cmap=matplotlib.cm.Blues)
plt.gca().invert_yaxis()
plt.title('Sessions heatmap')
plt.xlabel("day of week")
plt.ylabel("hour")

"""
Explanation: Sessions heatmap
x axis - day of week
y axis - hour
End of explanation
"""
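To pull the single busiest slot out of the grouped data, the (day of week, hour) index can be queried directly. A small sketch, assuming mti_data from above (one column of mean session counts, indexed by (day of week, hour) tuples):

sessions = mti_data.iloc[:, 0]
busiest_day, busiest_hour = sessions.idxmax()
print("Busiest slot: day {} at {}:00, averaging {:.1f} sessions".format(
    busiest_day, busiest_hour, sessions.max()))

This gives a concrete headline figure to go with the heatmap, for example when deciding where to schedule maintenance windows.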
aam-at/tensorflow
tensorflow/python/ops/numpy_ops/g3doc/TensorFlow_NumPy_Text_Generation.ipynb
apache-2.0
#@title Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """ Explanation: Copyright 2020 The TensorFlow Authors. End of explanation """ import tensorflow as tf import tensorflow.experimental.numpy as tnp import numpy as np import os import time """ Explanation: Text generation with an RNN <table class="tfo-notebook-buttons" align="left"> <td> <a target="_blank" href="https://www.tensorflow.org/tutorials/text/text_generation"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a> </td> <td> <a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/tutorials/text/text_generation.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a> </td> <td> <a target="_blank" href="https://github.com/tensorflow/docs/blob/master/site/en/tutorials/text/text_generation.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a> </td> <td> <a href="https://storage.googleapis.com/tensorflow_docs/docs/site/en/tutorials/text/text_generation.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a> </td> </table> This tutorial demonstrates how to generate text using a character-based RNN. We will work with a dataset of Shakespeare's writing from Andrej Karpathy's The Unreasonable Effectiveness of Recurrent Neural Networks. Given a sequence of characters from this data ("Shakespear"), train a model to predict the next character in the sequence ("e"). Longer sequences of text can be generated by calling the model repeatedly. Note: Enable GPU acceleration to execute this notebook faster. In Colab: Runtime > Change runtime type > Hardware acclerator > GPU. If running locally make sure TensorFlow version >= 2.4. This tutorial includes runnable code implemented using tf.experimental.numpy. The following is sample output when the model in this tutorial trained for 30 epochs, and started with the string "Q": <pre> QUEENE: I had thought thou hadst a Roman; for the oracle, Thus by All bids the man against the word, Which are so weak of care, by old care done; Your children were in your holy love, And the precipitation through the bleeding throne. BISHOP OF ELY: Marry, and will, my lord, to weep in such a one were prettiest; Yet now I was adopted heir Of the world's lamentable day, To watch the next way with his father with his face? ESCALUS: The cause why then we are all resolved more sons. VOLUMNIA: O, no, no, no, no, no, no, no, no, no, no, no, no, no, no, no, no, no, no, no, no, it is no sin it should be dead, And love and pale as any will to that word. QUEEN ELIZABETH: But how long have I heard the soul for this world, And show his hands of life be proved to stand. PETRUCHIO: I say he look'd on, if I must be content To stay him from the fatal of our country's bliss. 
His lordship pluck'd from this sentence then for prey, And then let us twain, being the moon, were she such a case as fills m </pre> While some of the sentences are grammatical, most do not make sense. The model has not learned the meaning of words, but consider: The model is character-based. When training started, the model did not know how to spell an English word, or that words were even a unit of text. The structure of the output resembles a play—blocks of text generally begin with a speaker name, in all capital letters similar to the dataset. As demonstrated below, the model is trained on small batches of text (100 characters each), and is still able to generate a longer sequence of text with coherent structure. Setup Import TensorFlow and other libraries End of explanation """ path_to_file = tf.keras.utils.get_file('shakespeare.txt', 'https://storage.googleapis.com/download.tensorflow.org/data/shakespeare.txt') """ Explanation: Download the Shakespeare dataset Change the following line to run this code on your own data. End of explanation """ # Read, then decode for py2 compat. text = open(path_to_file, 'rb').read().decode(encoding='utf-8') # length of text is the number of characters in it print ('Length of text: {} characters'.format(len(text))) # Take a look at the first 250 characters in text print(text[:250]) # The unique characters in the file vocab = sorted(set(text)) print ('{} unique characters'.format(len(vocab))) """ Explanation: Read the data First, look in the text: End of explanation """ # Creating a mapping from unique characters to indices char2idx = {u:i for i, u in enumerate(vocab)} idx2char = np.array(vocab) text_as_int = np.array([char2idx[c] for c in text]) """ Explanation: Process the text Vectorize the text Before training, we need to map strings to a numerical representation. Create two lookup tables: one mapping characters to numbers, and another for numbers to characters. End of explanation """ # The maximum length sentence we want for a single input in characters seq_length = 100 examples_per_epoch = len(text)//(seq_length+1) # Create training examples / targets char_dataset = tf.data.Dataset.from_tensor_slices(text_as_int) for i in char_dataset.take(5): print(idx2char[i.numpy()]) sequences = char_dataset.batch(seq_length+1, drop_remainder=True) for item in sequences.take(5): print(repr(''.join(idx2char[item.numpy()]))) def split_input_target(chunk): input_text = chunk[:-1] target_text = chunk[1:] return input_text, target_text dataset = sequences.map(split_input_target) for input_example, target_example in dataset.take(1): print ('Input data: ', repr(''.join(idx2char[input_example.numpy()]))) print ('Target data:', repr(''.join(idx2char[target_example.numpy()]))) """ Explanation: The prediction task Given a character, or a sequence of characters, what is the most probable next character? This is the task we're training the model to perform. The input to the model will be a sequence of characters, and we train the model to predict the output—the following character at each time step. Since RNNs maintain an internal state that depends on the previously seen elements, given all the characters computed until this moment, what is the next character? Create training examples and targets Next divide the text into example sequences. Each input sequence will contain seq_length characters from the text. For each input sequence, the corresponding targets contain the same length of text, except shifted one character to the right. 
So break the text into chunks of seq_length+1. For example, say seq_length is 4 and our text is "Hello". The input sequence would be "Hell", and the target sequence "ello". End of explanation """ for i, (input_idx, target_idx) in enumerate(zip(input_example[:5], target_example[:5])): print("Step {:4d}".format(i)) print(" input: {} ({:s})".format(input_idx, repr(idx2char[input_idx]))) print(" expected output: {} ({:s})".format(target_idx, repr(idx2char[target_idx]))) """ Explanation: Each index of these vectors are processed as one time step. For the input at time step 0, the model receives the index for "F" and trys to predict the index for "i" as the next character. At the next timestep, it does the same thing but the RNN considers the previous step context in addition to the current input character. End of explanation """ # Batch size BATCH_SIZE = 64 # Buffer size to shuffle the dataset # (TF data is designed to work with possibly infinite sequences, # so it doesn't attempt to shuffle the entire sequence in memory. Instead, # it maintains a buffer in which it shuffles elements). BUFFER_SIZE = 10000 dataset = dataset.shuffle(BUFFER_SIZE).batch(BATCH_SIZE, drop_remainder=True) dataset """ Explanation: Create training batches We used tf.data to split the text into manageable sequences. But before feeding this data into the model, we need to shuffle the data and pack it into batches. End of explanation """ # Length of the vocabulary in chars vocab_size = len(vocab) # The embedding dimension embedding_dim = 256 # Number of RNN units rnn_units = 1024 class Embedding: def __init__(self, vocab_size, embedding_dim): self._vocab_size = vocab_size self._embedding_dim = embedding_dim self._built = False def __call__(self, inputs): if not self._built: self.build(inputs) return tnp.take(self.weights, inputs, axis=0) def build(self, inputs): del inputs self.weights = tf.Variable(tnp.random.randn( self._vocab_size, self._embedding_dim).astype(np.float32)) self._built = True class GRUCell: """Builds a traditional GRU cell with dense internal transformations. Gated Recurrent Unit paper: https://arxiv.org/abs/1412.3555 """ def __init__(self, n_units, forget_bias=0.0): self._n_units = n_units self._forget_bias = forget_bias self._built = False def __call__(self, inputs): if not self._built: self.build(inputs) x, gru_state = inputs # Dense layer on the concatenation of x and h. y = tnp.dot(tnp.concatenate([x, gru_state], axis=-1), self.w1) + self.b1 # Update and reset gates. u, r = tnp.split(tf.sigmoid(y), 2, axis=-1) # Candidate. c = tnp.dot(tnp.concatenate([x, r * gru_state], axis=-1), self.w2) + self.b2 new_gru_state = u * gru_state + (1 - u) * tnp.tanh(c) return new_gru_state def build(self, inputs): # State last dimension must be n_units. assert inputs[1].shape[-1] == self._n_units # The dense layer input is the input and half of the GRU state. 
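    # Note: w1/b1 below produce the stacked update and reset gates (2 * n_units),
    # while w2/b2 produce the candidate state (n_units); both act on the
    # concatenation of the input x and the previous GRU state.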
dense_shape = inputs[0].shape[-1] + self._n_units self.w1 = tf.Variable(tnp.random.uniform( -0.01, 0.01, (dense_shape, 2 * self._n_units)).astype(tnp.float32)) self.b1 = tf.Variable((tnp.random.randn(2 * self._n_units) * 1e-6 + self._forget_bias ).astype(tnp.float32)) self.w2 = tf.Variable(tnp.random.uniform( -0.01, 0.01, (dense_shape, self._n_units)).astype(tnp.float32)) self.b2 = tf.Variable((tnp.random.randn(self._n_units) * 1e-6).astype(tnp.float32)) self._built = True @property def weights(self): return (self.w1, self.b1, self.w2, self.b2) class GRU: def __init__(self, n_units, forget_bias=0.0, stateful=False): self._cell = GRUCell(n_units, forget_bias) self._stateful = stateful self._built = False def __call__(self, inputs): if not self._built: self.build(inputs) if self._stateful: state = self.state.read_value() else: state = self._init_state(inputs.shape[0]) inputs = tnp.transpose(inputs, (1, 0, 2)) output = tf.scan( lambda gru_state, x: self._cell((x, gru_state)), inputs, state) if self._stateful: self.state.assign(output[-1, ...]) return tnp.transpose(output, [1, 0, 2]) def _init_state(self, batch_size): return tnp.zeros([batch_size, self._cell._n_units], tnp.float32) def reset_state(self): if not self._stateful: return self.state.assign(tf.zeros_like(self.state)) def create_state(self, batch_size): self.state = tf.Variable(self._init_state(batch_size)) def build(self, inputs): s = inputs.shape[0:1] + inputs.shape[2:] shapes = (s, s[:-1] + (self._cell._n_units,)) self._cell.build([tf.TensorSpec(x, tf.float32) for x in shapes]) if self._stateful: self.create_state(inputs.shape[0]) else: self.state = () self._built = True @property def weights(self): return self._cell.weights class Dense: def __init__(self, n_units, activation=None): self._n_units = n_units self._activation = activation self._built = False def __call__(self, inputs): if not self._built: self.build(inputs) y = tnp.dot(inputs, self.w) +self.b if self._activation != None: y = self._activation(y) return y def build(self, inputs): shape_w = (inputs.shape[-1], self._n_units) lim = tnp.sqrt(6.0 / (shape_w[0] + shape_w[1])) self.w = tf.Variable(tnp.random.uniform(-lim, lim, shape_w).astype(tnp.float32)) self.b = tf.Variable((tnp.random.randn(self._n_units) * 1e-6).astype(tnp.float32)) self._built = True @property def weights(self): return (self.w, self.b) class Model: def __init__(self, vocab_size, embedding_dim, rnn_units, forget_bias=0.0, stateful=False, activation=None): self._embedding = Embedding(vocab_size, embedding_dim) self._gru = GRU(rnn_units, forget_bias=forget_bias, stateful=stateful) self._dense = Dense(vocab_size, activation=activation) self._layers = [self._embedding, self._gru, self._dense] self._built = False def __call__(self, inputs): if not self._built: self.build(inputs) xs = inputs for layer in self._layers: xs = layer(xs) return xs def build(self, inputs): self._embedding.build(inputs) self._gru.build(tf.TensorSpec(inputs.shape + (self._embedding._embedding_dim,), tf.float32)) self._dense.build(tf.TensorSpec(inputs.shape + (self._gru._cell._n_units,), tf.float32)) self._built = True @property def weights(self): return [layer.weights for layer in self._layers] @property def state(self): return self._gru.state def create_state(self, *args): self._gru.create_state(*args) def reset_state(self, *args): self._gru.reset_state(*args) model = Model( vocab_size = vocab_size, embedding_dim=embedding_dim, rnn_units=rnn_units, stateful=True) """ Explanation: Build The Model We manually implement the model from 
scratch, using tf.numpy and some low-level TF ops. A Model object has three layers: Embedding, GRU and Dense. Embedding and Dense are little more than just wrappers around tnp.take and tnp.dot, but we can use them to familiarize ourself with the structure of a layer. Each layer has two essential methods: build and __call__. build creates and initializes the layer's weights and state, which are things that change during the training process. __call__ is the forward function that calculates outputs given inputs, using the layer's weights and state internally. Our model (more precisely the GRU layer) is stateful, because each call of __call__ will change its internal state, affecting the next call. End of explanation """ for input_example_batch, target_example_batch in dataset.take(1): input_example_batch = tnp.asarray(input_example_batch) example_batch_predictions = model(input_example_batch) print(example_batch_predictions.shape, "# (batch_size, sequence_length, vocab_size)") """ Explanation: For each character the model looks up the embedding, runs the GRU one timestep with the embedding as input, and applies the dense layer to generate logits predicting the log-likelihood of the next character. Try the model Now run the model to see that it behaves as expected. First check the shape of the output: End of explanation """ example_batch_predictions[0] sampled_indices = tf.random.categorical(example_batch_predictions[0], num_samples=1) sampled_indices = tf.squeeze(sampled_indices,axis=-1).numpy() """ Explanation: In the above example the sequence length of the input is 100 but the model can be run on inputs of any length: To get actual predictions from the model we need to sample from the output distribution, to get actual character indices. This distribution is defined by the logits over the character vocabulary. Note: It is important to sample from this distribution as taking the argmax of the distribution can easily get the model stuck in a loop. Try it for the first example in the batch: End of explanation """ sampled_indices """ Explanation: This gives us, at each timestep, a prediction of the next character index: End of explanation """ print("Input: \n", repr("".join(idx2char[input_example_batch[0]]))) print() print("Next Char Predictions: \n", repr("".join(idx2char[sampled_indices ]))) """ Explanation: Decode these to see the text predicted by this untrained model: End of explanation """ def one_hot(labels, n): return (labels[..., np.newaxis] == tnp.arange(n)).astype(np.float32) def loss_fn(labels, predictions): predictions = tf.nn.log_softmax(predictions) return -tnp.sum(predictions * one_hot(tnp.asarray(labels), predictions.shape[-1]), axis=-1) example_batch_loss = loss_fn(target_example_batch, example_batch_predictions) print("Prediction shape: ", example_batch_predictions.shape, " # (batch_size, sequence_length, vocab_size)") print("scalar_loss: ", tnp.mean(example_batch_loss)) """ Explanation: Train the model At this point the problem can be treated as a standard classification problem. Given the previous RNN state, and the input this time step, predict the class of the next character. Loss function We define the loss function from scratch, using tf.nn.log_softmax. (Our definition is the same as tf.keras.losses.sparse_categorical_crossentropy.) 
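As a quick numerical sanity check (a sketch using the example batch from above), the hand-written loss can be compared against the Keras implementation:

keras_loss = tf.keras.losses.sparse_categorical_crossentropy(
    target_example_batch, example_batch_predictions, from_logits=True)
print(np.allclose(example_batch_loss, keras_loss, atol=1e-5))

If the two disagree, the most likely culprit is forgetting from_logits=True, since our model outputs raw logits rather than probabilities.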
End of explanation """ class Adam: def __init__(self, learning_rate=0.001, b1=0.9, b2=0.999, eps=1e-7): self._lr = learning_rate self._b1 = b1 self._b2 = b2 self._eps = eps self._built = False def build(self, weights): self._m = tf.nest.map_structure(lambda x: tf.Variable(tnp.zeros_like(x)), weights) self._v = tf.nest.map_structure(lambda x: tf.Variable(tnp.zeros_like(x)), weights) self._step = tf.Variable(tnp.asarray(0, np.int64)) self._built = True def _update(self, weights_var, grads, m_var, v_var): b1 = self._b1 b2 = self._b2 eps = self._eps step = tnp.asarray(self._step, np.float32) lr = self._lr weights = tnp.asarray(weights_var) m = tnp.asarray(m_var) v = tnp.asarray(v_var) m = (1 - b1) * grads + b1 * m # First moment estimate. v = (1 - b2) * (grads ** 2) + b2 * v # Second moment estimate. mhat = m / (1 - b1 ** (step + 1)) # Bias correction. vhat = v / (1 - b2 ** (step + 1)) weights_var.assign_sub((lr * mhat / (tnp.sqrt(vhat) + eps)).astype(weights.dtype)) m_var.assign(m) v_var.assign(v) def apply_gradients(self, weights, grads): if not self._built: self.build(weights) tf.nest.map_structure(lambda *args: self._update(*args), weights, grads, self._m, self._v) self._step.assign_add(1) @property def state(self): return (self._step, self._m, self._v) optimizer = Adam() """ Explanation: Optimizer Keeping the DIY spirit, we implement the Adam optimizer from scratch. End of explanation """ @tf.function def train_step(inp, target): with tf.GradientTape() as tape: # tape.watch(tf.nest.flatten(weights)) predictions = model(inp) loss = tnp.mean(loss_fn(target, predictions)) weights = model.weights grads = tape.gradient(loss, weights) optimizer.apply_gradients(weights, grads) return loss # Training step EPOCHS = 10 model.create_state(BATCH_SIZE) for epoch in range(EPOCHS): start = time.time() # initializing the hidden state at the start of every epoch model.reset_state() for (batch_n, (inp, target)) in enumerate(dataset): loss = train_step(inp, target) if batch_n % 100 == 0: template = 'Epoch {} Batch {} Loss {}' print(template.format(epoch+1, batch_n, loss)) print ('Epoch {} Loss {}'.format(epoch+1, loss)) print ('Time taken for 1 epoch {} sec\n'.format(time.time() - start)) """ Explanation: Training loop Again, we write our training loop from scratch. To keep training time reasonable, use 10 epochs to train the model. In Colab, set the runtime to GPU for faster training. End of explanation """ def generate_text(model, start_string): # Evaluation step (generating text using the learned model) # Number of characters to generate num_generate = 1000 # Converting our start string to numbers (vectorizing) input_eval = [char2idx[s] for s in start_string] input_eval = tf.expand_dims(input_eval, 0) # Empty string to store our results text_generated = [] # Low temperatures results in more predictable text. # Higher temperatures results in more surprising text. # Experiment to find the best setting. 
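  # Dividing the logits by `temperature` rescales the softmax: values below 1.0
  # sharpen the distribution (safer, more repetitive text), values above 1.0
  # flatten it (more varied but noisier text).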
temperature = 1.0 # Here batch size == 1 model.create_state(1) for i in range(num_generate): predictions = model(input_eval) # remove the batch dimension predictions = tf.squeeze(predictions, 0) # using a categorical distribution to predict the character returned by the model predictions = predictions / temperature predicted_id = tf.random.categorical(predictions, num_samples=1)[-1,0].numpy() # We pass the predicted character as the next input to the model # along with the previous hidden state input_eval = tf.expand_dims([predicted_id], 0) text_generated.append(idx2char[predicted_id]) return (start_string + ''.join(text_generated)) print(generate_text(model, start_string=u"ROMEO: ")) """ Explanation: Generate text The following code block generates the text: It Starts by choosing a start string, initializing the RNN state and setting the number of characters to generate. Get the prediction distribution of the next character using the start string and the RNN state. Then, use a categorical distribution to calculate the index of the predicted character. Use this predicted character as our next input to the model. The RNN state returned by the model is fed back into the model so that it now has more context, instead than only one character. After predicting the next character, the modified RNN states are again fed back into the model, which is how it learns as it gets more context from the previously predicted characters. Looking at the generated text, you'll see the model knows when to capitalize, make paragraphs and imitates a Shakespeare-like writing vocabulary. With the small number of training epochs, it has not yet learned to form coherent sentences. To keep this prediction step simple, use a batch size of 1. End of explanation """
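As a small illustration of the temperature knob used in generate_text above (purely illustrative; the logits below are made up rather than taken from the model), sampling the same logits at different temperatures shows how the distribution sharpens or flattens:

logits = tf.constant([[2.0, 1.0, 0.2]])
for temp in [0.5, 1.0, 2.0]:
    print('temperature', temp,
          'probs', tf.nn.softmax(logits / temp).numpy()[0],
          'samples', tf.random.categorical(logits / temp, num_samples=10).numpy()[0])

Lower temperatures concentrate the samples on the highest-logit character, while higher temperatures spread them out, which is exactly the trade-off between predictable and surprising text described earlier.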